Category Archives: New Media

See Nobel prize winner Kostya Novoselov’s collaborative art/science video project on August 17, 2018 (Newcastle, UK)

Dr. Konstantin (Kostya) Novoselov, one of the two scientists at the University of Manchester (UK) who were awarded the Nobel Prize for their work with graphene, has embarked on an artistic career of sorts. From an August 8, 2018 news item on Nanowerk,

Nobel prize-winning physicist Sir Kostya Novoselov worked with artist Mary Griffiths to create Prospect Planes – a video artwork resulting from months of scientific and artistic research and experimentation using graphene.

Prospect Planes will be unveiled as part of The Hexagon Experiment series of events at the Great Exhibition of the North 2018, Newcastle, on August 17 [2018].

An August 9, 2018 University of Manchester press release, which originated the news item (differences in the dates are likely due to timezones), describes the art/science project in some detail,

The fascinating video art project aims to shed light on graphene’s unique qualities and potential.

Providing a fascinating insight into scientific research into graphene, Prospect Planes began with a graphite drawing by Griffiths, symbolising the chemical element carbon.

This was replicated in graphene by Sir Kostya Novoselov, creating a microscopic 2D graphene version of Griffiths’ drawing just one atom thick and invisible to the naked eye.

They then used Raman spectroscopy to record a molecular fingerprint of the graphene image, using that fingerprint to map a digital visual representation of graphene’s unique qualities.

The six-part Hexagon Experiment series was inspired by the creativity of the Friday evening sessions that led to the isolation of graphene at The University of Manchester by Novoselov and Sir Andre Geim.

Mary Griffiths has previously worked on other graphene artworks, including From Seathwaite, an installation in the National Graphene Institute which depicts the story of graphite and graphene – its geography, geology and development in the North West of England.

Mary Griffiths, who is also Senior Curator at The Whitworth, said: “Having previously worked alongside Kostya on other projects, I was aware of his passion for art. This has been a tremendously exciting and rewarding project, which will help people to better understand the unique qualities of graphene, while bringing Manchester’s passion for collaboration and creativity across the arts, industry and science to life.

“In many ways, the story of the scientific research which led to the creation of Prospect Planes is as exciting as the artwork itself. By taking my pencil drawing and patterning it in 2D with a single layer of graphene atoms, then creating an animated digital work of art from the graphene data, we hope to provoke further conversations about the nature of the first 2D material and the potential benefits and purposes of graphene.”

Sir Kostya Novoselov said: “In this particular collaboration with Mary, we merged two existing concepts to develop a new platform, which can result in multiple art projects. I really hope that we will continue working together to develop this platform even further.”

The Hexagon Experiment is taking place just a few months before the official launch of the £60m Graphene Engineering Innovation Centre, part of a major investment in 2D materials infrastructure across Manchester, cementing its reputation as Graphene City.

Prospect Planes was commissioned by Manchester-based creative music charity Brighter Sound.

The Hexagon Experiment is part of Both Sides Now – a three-year initiative to support, inspire and showcase women in music across the North of England, supported through Arts Council England’s Ambition for Excellence fund.

It took some searching but I’ve found the specific Hexagon event featuring Sir Kostya Novoselov’s and Mary Griffiths’ work. From ‘The Hexagon Experiment #3: Adventures in Flatland’ webpage,

Lauren Laverne is joined by composer Sara Lowes and visual artist Mary Griffiths to discuss their experiments with music, art and science. Followed by a performance of Sara Lowes’ graphene-inspired composition Graphene Suite, and the unveiling of new graphene art by Mary Griffiths and Professor Kostya Novoselov. Alongside Andre Geim, Novoselov was awarded the Nobel Prize in Physics in 2010 for his groundbreaking experiments with graphene.


About The Hexagon Experiment

Music, art and science collide in an explosive celebration of women’s creativity

A six-part series of ‘Friday night experiments’ featuring live music, conversations and original commissions from pioneering women at the forefront of music, art and science.

Inspired by the creativity that led to the discovery of the Nobel Prize-winning ‘wonder material’ graphene, The Hexagon Experiment brings together the North’s most exciting musicians and scientists for six free events – from music made by robots to a spectacular tribute to an unsung heroine.

Presented by Brighter Sound and the National Graphene Institute at The University of Manchester, as part of the Great Exhibition of the North.

Buy tickets here.

One final comment, the title for the evening appears to have been inspired by a novella, from the Flatland Wikipedia entry (Note: Links have been removed),

Flatland: A Romance of Many Dimensions is a satirical novella by the English schoolmaster Edwin Abbott Abbott, first published in 1884 by Seeley & Co. of London.

Written pseudonymously by “A Square”,[1] the book used the fictional two-dimensional world of Flatland to comment on the hierarchy of Victorian culture, but the novella’s more enduring contribution is its examination of dimensions.[2]

That’s all folks.

ETA August 14, 2018: Not quite all. Hopefully this attempt to add a few details for people not familiar with graphene won’t lead to increased confusion. The Hexagon event ‘Adventures in Flatland’, which includes Novoselov’s and Griffiths’ video project, features some wordplay based on graphene’s two-dimensional nature.

SIGGRAPH (Special Interest Group on Computer GRAPHics and Interactive Techniques) and their art gallery from Aug. 12 – 16, 2018 (making the world synthetic) in Vancouver (Canada)

While my main interest is the group’s temporary art gallery, I am providing a brief explanatory introduction and a few previews for SIGGRAPH 2018.

Introduction

For anyone unfamiliar with the Special Interest Group on Computer GRAPHics and Interactive Techniques (SIGGRAPH) and its conferences, from the SIGGRAPH Wikipedia entry (Note: Links have been removed),

Some highlights of the conference are its Animation Theater and Electronic Theater presentations, where recently created CG films are played. There is a large exhibition floor, where several hundred companies set up elaborate booths and compete for attention and recruits. Most of the companies are in the engineering, graphics, motion picture, or video game industries. There are also many booths for schools which specialize in computer graphics or interactivity.

Dozens of research papers are presented each year, and SIGGRAPH is widely considered the most prestigious forum for the publication of computer graphics research.[1] The recent paper acceptance rate for SIGGRAPH has been less than 26%.[2] The submitted papers are peer-reviewed in a single-blind process.[3] There has been some criticism about the preference of SIGGRAPH paper reviewers for novel results rather than useful incremental progress.[4][5] …

This is the third SIGGRAPH Vancouver has hosted; the others were in 2011 and 2014. The theme for the 2018 iteration is ‘Generations’; here’s more about it from an Aug. 2, 2018 article by Terry Flores for Variety,

While its focus is firmly forward thinking, SIGGRAPH 2018, the computer graphics, animation, virtual reality, games, digital art, mixed reality, and emerging technologies conference, is also tipping its hat to the past thanks to its theme this year: Generations. The conference runs Aug. 12-16 in Vancouver, B.C.

“In the literal people sense, pioneers in the computer graphics industry are standing shoulder to shoulder with researchers, practitioners and the future of the industry — young people — mentoring them, dabbling across multiple disciplines to innovate, relate, and grow,” says SIGGRAPH 2018 conference chair Roy C. Anthony, VP of creative development and operations at software and technology firm Ventuz. “This is really what SIGGRAPH has always been about. Generations really seemed like a very appropriate way of looking back and remembering where we all came from and how far we’ve come.”

SIGGRAPH 2018 has a number of treats in store for attendees, including the debut of Disney’s first VR film, the short “Cycles”; production sessions on the making of “Blade Runner 2049,” “Game of Thrones,” “Incredibles 2” and “Avengers: Infinity War”; as well as sneak peeks of Disney’s upcoming “Ralph Breaks the Internet: Wreck-It Ralph 2” and Laika’s “Missing Link.”

That list of ‘treats’ in the last paragraph makes the conference seem more like an iteration of a ‘comic-con’ than a technology conference.

Previews

I have four items about work that will be presented at SIGGRAPH 2018. First up, something about ‘redirected walking’ from a June 18, 2018 Association for Computing Machinery news release on EurekAlert,

CHICAGO–In the burgeoning world of virtual reality (VR) technology, it remains a challenge to provide users with a realistic perception of infinite space and natural walking capabilities in the virtual environment. A team of computer scientists has introduced a new approach to address this problem by leveraging a natural human phenomenon: eye blinks.

All humans are functionally blind for about 10 percent of the time under normal circumstances due to eye blinks and saccades, a rapid movement of the eye between two points or objects. Eye blinks are a common and natural cause of so-called “change blindness,” which indicates the inability for humans to notice changes to visual scenes. Zeroing in on eye blinks and change blindness, the team has devised a novel computational system that effectively redirects the user in the virtual environment during these natural instances, all with undetectable camera movements to deliver orientation redirection.

“Previous RDW [redirected walking] techniques apply rotations continuously while the user is walking. But the amount of unnoticeable rotations is limited,” notes Eike Langbehn, lead author of the research and doctoral candidate at the University of Hamburg. “That’s why an orthogonal approach is needed–we add some additional rotations when the user is not focused on the visuals. When we learned that humans are functionally blind for some time due to blinks, we thought, ‘Why don’t we do the redirection during eye blinks?'”

Human eye blinks occur approximately 10 to 20 times per minute, about every 4 to 19 seconds. Leveraging this window of opportunity–where humans are unable to detect major motion changes while in a virtual environment–the researchers devised an approach to synchronize a computer graphics rendering system with this visual process, and introduce any useful motion changes in virtual scenes to enhance users’ overall VR experience.

The researchers’ experiments revealed that imperceptible camera rotations of 2 to 5 degrees and translations of 4 to 9 cm of the user’s viewpoint are possible during a blink without users even noticing. They tracked test participants’ eye blinks by an eye tracker in a VR head-mounted display. In a confirmatory study, the team validated that participants could not reliably detect in which of two eye blinks their viewpoint was manipulated while walking a VR curved path. The tests relied on unconscious natural eye blinking, but the researchers say redirection during blinking could be carried out consciously. Since users can consciously blink multiple times a day without much effort, eye blinks provide great potential to be used as an intentional trigger in their approach.

The team will present their work at SIGGRAPH 2018, held 12-16 August in Vancouver, British Columbia. The annual conference and exhibition showcases the world’s leading professionals, academics, and creative minds at the forefront of computer graphics and interactive techniques.

“RDW is a big challenge since current techniques still need too much space to enable unlimited walking in VR,” notes Langbehn. “Our work might contribute to a reduction of space since we found out that unnoticeable rotations of up to five degrees are possible during blinks. This means we can improve the performance of RDW by approximately 50 percent.”

The team’s results could be used in combination with other VR research, such as novel steering algorithms, improved path prediction, and rotations during saccades, to name a few. Down the road, such techniques could some day enable consumer VR users to virtually walk beyond their living room.

Langbehn collaborated on the work with Frank Steinicke of University of Hamburg, Markus Lappe of University of Muenster, Gregory F. Welch of University of Central Florida, and Gerd Bruder, also of University of Central Florida. For the full paper and video, visit the team’s project page.


About ACM, ACM SIGGRAPH, and SIGGRAPH 2018

ACM, the Association for Computing Machinery, is the world’s largest educational and scientific computing society, uniting educators, researchers, and professionals to inspire dialogue, share resources, and address the field’s challenges. ACM SIGGRAPH is a special interest group within ACM that serves as an interdisciplinary community for members in research, technology, and applications in computer graphics and interactive techniques. SIGGRAPH is the world’s leading annual interdisciplinary educational experience showcasing the latest in computer graphics and interactive techniques. SIGGRAPH 2018, marking the 45th annual conference hosted by ACM SIGGRAPH, will take place from 12-16 August at the Vancouver Convention Centre in Vancouver, B.C.
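
To make the idea a little more concrete, here is a minimal code sketch of blink-triggered redirection. It is my own illustration, not the researchers’ implementation: the blink signal, function names, and redirection target are all assumptions, while the limits of roughly 5 degrees of rotation and 9 cm of translation per blink come from the work described above.

```python
# Minimal sketch of blink-triggered redirected walking (illustration only).
# Assumes an eye tracker that flags blinks and a camera pose we may nudge.
import math

MAX_ROT_DEG = 5.0    # imperceptible rotation per blink (2-5 degrees per the study)
MAX_TRANS_M = 0.09   # imperceptible translation per blink (4-9 cm per the study)

def redirect_on_blink(cam_yaw_deg, cam_pos, goal_yaw_deg, goal_pos, is_blinking):
    """Nudge the virtual camera toward a redirection goal, but only
    during a blink, when the user is functionally blind."""
    if not is_blinking:
        return cam_yaw_deg, cam_pos
    # Rotate by at most MAX_ROT_DEG toward the goal orientation.
    yaw_step = max(-MAX_ROT_DEG, min(MAX_ROT_DEG, goal_yaw_deg - cam_yaw_deg))
    cam_yaw_deg += yaw_step
    # Translate by at most MAX_TRANS_M toward the goal position.
    dx, dz = goal_pos[0] - cam_pos[0], goal_pos[1] - cam_pos[1]
    dist = math.hypot(dx, dz)
    if dist > 0.0:
        step = min(MAX_TRANS_M, dist)
        cam_pos = (cam_pos[0] + dx / dist * step, cam_pos[1] + dz / dist * step)
    return cam_yaw_deg, cam_pos
```

In a real system these corrections would be folded into the headset’s tracking loop; the sketch just shows why small per-blink offsets, applied 10 to 20 times per minute, can add up to a useful steering effect.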

They have provided an image illustrating what they mean (I don’t find it especially informative),

Caption: The viewing behavior of a virtual reality user, including fixations (in green) and saccades (in red). A blink fully suppresses visual perception. Credit: Eike Langbehn

Next up (2), there’s Disney Corporation’s first virtual reality (VR) short, from a July 19, 2018 Association for Computing Machinery news release on EurekAlert,

Walt Disney Animation Studios will debut its first ever virtual reality short film at SIGGRAPH 2018, and the hope is viewers will walk away feeling connected to the characters as equally as they will with the VR technology involved in making the film.

Cycles, an experimental film directed by Jeff Gipson, centers around the true meaning of creating a home and the life it holds inside its walls. The idea for the film is personal, inspired by Gipson’s childhood spending time with his grandparents and creating memories in their home, and later, having to move them to an assisted living residence.

“Every house has a story unique to the people, the characters who live there,” says Gipson. “We wanted to create a story in this single place and be able to have the viewer witness life happening around them. It is an emotionally driven film, expressing the real ups and downs, the happy and sad moments in life.”

For Cycles, Gipson also drew from his past life as an architect, having spent several years designing skate parks, and from his passion for action sports, including freestyle BMX. In Los Angeles, where Gipson lives, it is not unusual to find homes with an empty swimming pool reserved for skating or freestyle biking. Part of the pitch for Cycles came out of Gipson’s experience riding in these empty pools and being curious about the homes attached to them, the families who lived there, and the memories they made.

SIGGRAPH attendees will have the opportunity to experience Cycles at the Immersive Pavilion, a new space for this year’s conference. The Pavilion is devoted exclusively to virtual, augmented, and mixed reality and will contain: the VR Theater, a storytelling extravaganza that is part of the Computer Animation Festival; the Vrcade, a space for VR, AR, and MR games or experiences; and the well-known Village, for showcasing large-scale projects. SIGGRAPH 2018, held 12-16 August in Vancouver, British Columbia, is an annual gathering that showcases the world’s leading professionals, academics, and creative minds at the forefront of computer graphics and interactive techniques.

The production team completed Cycles in four months with about 50 collaborators as part of a professional development program at the studio. A key difference in VR filmmaking includes getting creative with how to translate a story to the VR “screen.” Pre-visualizing the narrative, for one, was a challenge. Rather than traditional storyboarding, Gipson and his team instead used a mix of Quill VR painting techniques and motion capture to “storyboard” Cycles, incorporating painters and artists to generate sculptures or 3D models of characters early on and draw scenes for the VR space. The creators also got innovative with the use of light and color saturation in scenes to help guide the user’s eyes during the film.

“What’s cool for VR is that we are really on the edge of trying to figure out what it is and how to tell stories in this new medium,” says Gipson. “In VR, you can look anywhere and really be transported to a different world, experience it from different angles, and see every detail. We want people watching to feel alive and feel emotion, and give them a true cinematic experience.”

This is Gipson’s VR directorial debut. He joined Walt Disney Animation Studios in 2013, serving as a lighting artist on Disney favorites like Frozen, Zootopia, and Moana. Of getting to direct the studio’s first VR short, he says, “VR is an amazing technology and a lot of times the technology is what is really celebrated. We hope more and more people begin to see the emotional weight of VR films, and with Cycles in particular, we hope they will feel the emotions we aimed to convey with our story.”

Apparently this is a still from the ‘short’,

Caption: Disney Animation Studios will present ‘Cycles’, its first virtual reality (VR) short, at ACM SIGGRAPH 2018. Credit: Disney Animation Studios

There’s also something (3) from Google as described in a July 26, 2018 Association for Computing Machinery news release on EurekAlert,

Google has unveiled a new virtual reality (VR) immersive experience based on a novel system that captures and renders high-quality, realistic images from the real world using light fields. Created by a team of leading researchers at Google, Welcome to Light Fields is the tech giant’s splash into the nascent arena of light fields VR experiences, an exciting corner of VR video technology gaining traction for its promise to deliver extremely high-quality imagery and experiences in the virtual world.

Google released Welcome to Light Fields earlier this year as a free app on Steam VR for HTC Vive, Oculus Rift, and Windows Mixed Reality headsets. The creators will demonstrate the VR experience at SIGGRAPH 2018, in the Immersive Pavilion, a new space for this year’s conference. The Pavilion is devoted exclusively to virtual, augmented, and mixed reality and will contain: the Vrcade, a space for VR, AR, and MR games or experiences; the VR Theater, a storytelling extravaganza that is part of the Computer Animation Festival; and the well-known Village, for showcasing large-scale projects. SIGGRAPH 2018, held 12-16 August in Vancouver, British Columbia, is an annual gathering that showcases the world’s leading professionals, academics, and creative minds at the forefront of computer graphics and interactive techniques.

Destinations in Welcome to Light Fields include NASA’s Space Shuttle Discovery, delivering to viewers an astronaut’s view inside the flight deck, which has never been open to the public; the pristine teak and mahogany interiors of the Gamble House, an architectural treasure in Pasadena, CA; and the glorious St. Stephen’s Church in Granada Hills, CA, home to a stunning wall of more than 14,000 pieces of glimmering stained glass.

“I love that light fields in VR can teleport you to exotic places in the real world, and truly make you believe you are there,” says Ryan Overbeck, software engineer at Google who co-led the project. “To me, this is magic.”

To bring this experience to life, Overbeck worked with a team that included Paul Debevec, senior staff engineer at Google, who managed the project and led the hardware piece with engineers Xueming Yu, Jay Busch, and Graham Fyffe. With Overbeck, Daniel Erickson and Daniel Evangelakos focused on the software end. The researchers designed a comprehensive system for capturing and rendering high-quality, spherical light field still images from footage captured in the real world. They developed two easy-to-use light field camera rigs, based on the GoPro Hero4 action sports camera, that efficiently capture thousands of images on the surface of a sphere. Those images were then passed through a cloud-based light-field-processing pipeline.

Among other things, explains Overbeck, “The processing pipeline uses computer vision to place the images in 3D and generate depth maps, and we use a modified version of our vp9 video codec to compress the light field data down to a manageable size.” To render a light field dataset, he notes, the team used a rendering algorithm that blends between the thousands of light field images in real-time.

The team relied on Google’s talented pool of engineers in computer vision, graphics, video compression, and machine learning to overcome the unique challenges posed in light fields technology. They also collaborated closely with the WebM team (who make the vp9 video codec) to develop the high-quality light field compression format incorporated into their system, and leaned heavily on the expertise of the Jump VR team to help pose the images and generate depth maps. (Jump is Google’s professional VR system for achieving 3D-360 video production at scale.)

Indeed, with Welcome to Light Fields, the Google team is demonstrating the potential and promise of light field VR technology, showcasing the technology’s ability to provide a truly immersive experience with a level of unmatched realism. Though light fields technology has been researched and explored in computer graphics for more than 30 years, practical systems for actually delivering high-quality light field experiences have not yet been possible.

Part of the team’s motivation behind creating this VR light field experience is to invigorate the nascent field.

“Welcome to Light Fields proves that it is now possible to make a compelling light field VR viewer that runs on consumer-grade hardware, and we hope that this knowledge will encourage others to get involved with building light field technology and media,” says Overbeck. “We understand that in order to eventually make compelling consumer products based on light fields, we need a thriving light field ecosystem. We need open light field codecs, we need artists creating beautiful light field imagery, and we need people using VR in order to engage with light fields.”
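
For the curious, here is a toy sketch of the core rendering idea, blending the captured images that best match the viewer’s gaze direction. It is my own simplification and not Google’s pipeline; a real light field renderer also reprojects each image using the depth maps mentioned above.

```python
# Toy light field view blending (illustration only, not Google's renderer).
import numpy as np

def render_view(eye_dir, captures, k=4):
    """Blend the k captured images whose capture directions on the sphere
    best align with the desired viewing direction.
    captures: list of (unit direction vector, HxWx3 image array) pairs."""
    eye_dir = np.asarray(eye_dir, dtype=float)
    eye_dir /= np.linalg.norm(eye_dir)
    # Alignment score: dot product between unit direction vectors.
    scores = np.array([float(np.dot(eye_dir, d)) for d, _ in captures])
    nearest = np.argsort(scores)[-k:]              # indices of k best views
    weights = np.clip(scores[nearest], 0.0, None) + 1e-9
    weights /= weights.sum()
    # Weighted average of the nearest images; real systems reproject each
    # image with a depth map before blending, which this sketch omits.
    return sum(w * captures[i][1] for w, i in zip(weights, nearest))
```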

I don’t really understand why this image, which looks like something that belongs on advertising material, would be chosen to accompany a news release on a science-based distribution outlet,

Caption: A team of leading researchers at Google will unveil the new immersive virtual reality (VR) experience “Welcome to Light Fields” at ACM SIGGRAPH 2018. Credit: Image courtesy of Google/Overbeck

Finally (4), ‘synthesizing realistic sounds’ is announced in an Aug. 6, 2018 Stanford University (US) news release (also on EurekAlert) by Taylor Kubota,

Advances in computer-generated imagery have brought vivid, realistic animations to life, but the sounds associated with what we see simulated on screen, such as two objects colliding, are often recordings. Now researchers at Stanford University have developed a system that automatically renders accurate sounds for a wide variety of animated phenomena.

“There’s been a Holy Grail in computing of being able to simulate reality for humans. We can animate scenes and render them visually with physics and computer graphics, but, as for sounds, they are usually made up,” said Doug James, professor of computer science at Stanford University. “Currently there exists no way to generate realistic synchronized sounds for complex animated content, such as splashing water or colliding objects, automatically. This fills that void.”

The researchers will present their work on this sound synthesis system as part of ACM SIGGRAPH 2018, the leading conference on computer graphics and interactive techniques. In addition to enlivening movies and virtual reality worlds, this system could also help engineering companies prototype how products would sound before being physically produced, and hopefully encourage designs that are quieter and less irritating, the researchers said.

“I’ve spent years trying to solve partial differential equations – which govern how sound propagates – by hand,” said Jui-Hsien Wang, a graduate student in James’ lab and in the Institute for Computational and Mathematical Engineering (ICME), and lead author of the paper. “This is actually a place where you don’t just solve the equation but you can actually hear it once you’ve done it. That’s really exciting to me and it’s fun.”

Predicting sound

Informed by geometry and physical motion, the system figures out the vibrations of each object and how, like a loudspeaker, those vibrations excite sound waves. It computes the pressure waves cast off by rapidly moving and vibrating surfaces but does not replicate room acoustics. So, although it does not recreate the echoes in a grand cathedral, it can resolve detailed sounds from scenarios like a crashing cymbal, an upside-down bowl spinning to a stop, a glass filling up with water or a virtual character talking into a megaphone.

Most sounds associated with animations rely on pre-recorded clips, which require vast manual effort to synchronize with the action on-screen. These clips are also restricted to noises that exist – they can’t predict anything new. Other systems that produce and predict sounds as accurate as those of James and his team work only in special cases, or assume the geometry doesn’t deform very much. They also require a long pre-computation phase for each separate object.

“Ours is essentially just a render button with minimal pre-processing that treats all objects together in one acoustic wave simulation,” said Ante Qu, a graduate student in James’ lab and co-author of the paper.

The simulated sound that results from this method is highly detailed. It takes into account the sound waves produced by each object in an animation but also predicts how those waves bend, bounce or deaden based on their interactions with other objects and sound waves in the scene.

Challenges ahead

In its current form, the group’s process takes a while to create the finished product. But, now that they have proven this technique’s potential, they can focus on performance optimizations, such as implementing their method on parallel GPU hardware, that should make it drastically faster.

And, even in its current state, the results are worth the wait.

“The first water sounds we generated with the system were among the best ones we had simulated – and water is a huge challenge in computer-generated sound,” said James. “We thought we might get a little improvement, but it is dramatically better than previous approaches even right out of the box. It was really striking.”

Although the group’s work has faithfully rendered sounds of various objects spinning, falling and banging into each other, more complex objects and interactions – like the reverberating tones of a Stradivarius violin – remain difficult to model realistically. That, the group said, will have to wait for a future solution.

Timothy Langlois of Adobe Research is a co-author of this paper. This research was funded by the National Science Foundation and Adobe Research. James is also a professor, by courtesy, of music and a member of Stanford Bio-X.
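
Wang’s comment about solving partial differential equations refers to the acoustic wave equation. As a toy illustration of the kind of computation involved (my own sketch, far simpler than the Stanford system, which solves the wave equation in 3D around animated geometry), here is a one-dimensional version that records the pressure at a virtual microphone.

```python
# Toy 1D wave equation solver (illustration only; the Stanford system
# solves the full 3D acoustic wave equation around animated objects).
import numpy as np

def simulate_pressure(n=200, steps=500, c=1.0, dx=1.0, dt=0.5):
    """March p_tt = c^2 * p_xx forward in time with a leapfrog scheme
    and return the pressure history at a 'microphone' sample point."""
    assert c * dt / dx <= 1.0, "CFL stability condition"
    r2 = (c * dt / dx) ** 2
    prev = np.zeros(n)
    curr = np.zeros(n)
    curr[n // 2] = 1.0                 # an initial 'strike' in the middle
    mic = []
    for _ in range(steps):
        nxt = np.zeros(n)              # endpoints stay zero (rigid walls)
        nxt[1:-1] = (2.0 * curr[1:-1] - prev[1:-1]
                     + r2 * (curr[2:] - 2.0 * curr[1:-1] + curr[:-2]))
        prev, curr = curr, nxt
        mic.append(curr[n // 4])       # record pressure at one point
    return np.array(mic)
```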

Researchers Timothy Langlois, Doug L. James, Ante Qu and Jui-Hsien Wang have created a video featuring highlights of animations with sounds synthesized using the Stanford researchers’ new system,

The researchers have also provided this image,

By computing pressure waves cast off by rapidly moving and vibrating surfaces – such as a cymbal – a new sound synthesis system developed by Stanford researchers can automatically render realistic sound for computer animations. (Image credit: Timothy Langlois, Doug L. James, Ante Qu and Jui-Hsien Wang)

It does seem like we’re synthesizing the world around us, eh?

The SIGGRAPH 2018 art gallery

Here’s what SIGGRAPH had to say about its 2018 art gallery in Vancouver and the themes for the conference and the gallery (from a May 18, 2018 Association for Computing Machinery news release on globenewswire.com, also available on this 2018 SIGGRAPH webpage),

SIGGRAPH 2018, the world’s leading showcase of digital art created using computer graphics and interactive techniques, will present a special Art Gallery, entitled “Origins,” and historic Art Papers in Vancouver, B.C. The 45th SIGGRAPH conference will take place 12–16 August at the Vancouver Convention Centre. The programs will also honor the generations of creators that have come before through a special, 50th anniversary edition of the Leonardo journal. To register for the conference, visit S2018.SIGGRAPH.ORG.

The SIGGRAPH 2018 ART GALLERY is a curated exhibition, conceived as a dialogical space that enables the viewer to reflect on man’s diverse cultural values and rituals through contemporary creative practices. Building upon an exciting and eclectic selection of creative practices mediated through technologies that represent the sophistication of our times, the SIGGRAPH 2018 Art Gallery will embrace the narratives of the indigenous communities based near Vancouver and throughout Canada as a source of inspiration. The exhibition will feature contemporary media artworks, art pieces by indigenous communities, and other traces of technologically mediated Ludic practices.

Andrés Burbano, SIGGRAPH 2018 Art Gallery chair and professor at Universidad de los Andes, said, “The Art Gallery aims to articulate myth and technology, science and art, the deep past and the computational present, and will coalesce around a theme of ‘Origins.’ Media and technological creative expressions will explore principles such as the origins of the cosmos, the origins of life, the origins of human presence, the origins of the occupation of territories in the Americas, and the origins of people living in the vast territories of the Arctic.”

He continued, “The venue [in Vancouver] hopes to rekindle the original spark that ignited the collaborative spirit of the SIGGRAPH community of engineers, scientists, and artists, who came together to create the very first conference in the early 1970s.”

Highlights from the 2018 Art Gallery include:

Transformation Mask (Canada) [Technology Based]
Shawn Hunt, independent; and Microsoft Garage: Andy Klein, Robert Butterworth, Jonathan Cobb, Jeremy Kersey, Stacey Mulcahy, Brendan O’Rourke, Brent Silk, and Julia Taylor-Hell, Microsoft Vancouver

TRANSFORMATION MASK is an interactive installation that features the Microsoft HoloLens. It utilizes electronics and mechanical engineering to express a physical and digital transformation. Participants are immersed in spatial sounds and holographic visuals.

Somnium (U.S.) [Science Based]
Marko Peljhan, Danny Bazo, and Karl Yerkes, University of California, Santa Barbara

Somnium is a cybernetic installation that provides visitors with the ability to sensorily, cognitively, and emotionally contemplate and experience exoplanetary discoveries, their macro and micro dimensions, and the potential for life in our Galaxy. Some might call it a “space telescope.”

Ernest Edmonds Retrospective – Art Systems 1968-2018 (United Kingdom) [History Based]
Ernest Edmonds, De Montfort University

Celebrating one of the pioneers of computer graphics-based art since the early 1970s, this Ernest Edmonds career retrospective will showcase snapshots of Edmonds’ work as it developed over the years. With one piece from each decade, the retrospective will also demonstrate how vital the Leonardo journal has been throughout the 50-year journey.

In addition to the works above, the Art Gallery will feature pieces from notable female artists Ozge Samanci, Ruth West, and Nicole L’Huillier. For more information about the Edmonds retrospective, read THIS POST ON THE ACM SIGGRAPH BLOG.

The SIGGRAPH 2018 ART PAPERS program is designed to feature research from artists, scientists, theorists, technologists, historians, and more in one of four categories: project description, theory/criticism, methods, or history. The chosen work was selected by an international jury of scholars, artists, and immersive technology developers.

To celebrate the 50th anniversary of LEONARDO (MIT Press), and 10 years of its annual SIGGRAPH issue, SIGGRAPH 2018 is pleased to announce a special anniversary edition of the journal, which will feature the 2018 art papers. For 50 years, Leonardo has been the definitive publication for artist-academics. To learn more about the relationship between SIGGRAPH and the journal, listen to THIS EPISODE OF THE SIGGRAPH SPOTLIGHT PODCAST.

“In order to encourage a wider range of topics, we introduced a new submission type, short papers. This enabled us to accept more content than in previous years. Additionally, for the first time, we will introduce sessions that integrate the Art Gallery artist talks with Art Papers talks, promoting richer connections between these two creative communities,” said Angus Forbes, SIGGRAPH 2018 Art Papers chair and professor at University of California, Santa Cruz.

Art Papers highlights include:

Alienating the Familiar with CGI: A Recipe for Making a Full CGI Art House Animated Feature [Long]
Alex Counsell and Paul Charisse, University of Portsmouth

This paper explores the process of making and funding an art house feature film using full CGI in a marketplace where this has never been attempted. It explores cutting-edge technology and production approaches, as well as routes to successful fundraising.

Augmented Fauna and Glass Mutations: A Dialogue Between Material and Technique in Glassblowing and 3D Printing [Long]
Tobias Klein, City University of Hong Kong

The two presented artworks, “Augmented Fauna” and “Glass Mutations,” were created during an artist residence at the PILCHUCK GLASS SCHOOL. They are examples of the qualities and methods established through a synthesis between digital workflows and traditional craft processes and thus formulate the notion of digital craftsmanship.

Inhabitat: An Imaginary Ecosystem in a Children’s Science Museum [Short]
Graham Wakefield, York University, and Haru Hyunkyung Ji, OCAD University

“Inhabitat” is a mixed reality artwork in which participants become part of an imaginary ecology through three simultaneous perspectives of scale and agency; three distinct ways to see with other eyes. This imaginary world was exhibited at a children’s science museum for five months, using an interactive projection-augmented sculpture, a large screen and speaker array, and a virtual reality head-mounted display.

What’s the what?

My father used to say that and I always assumed it meant summarize the high points, if you need to, and get to the point—fast. In that spirit, I am both fascinated and mildly appalled. The virtual, mixed, and augmented reality technologies, as well as the others being featured at SIGGRAPH 2018, are wondrous in many ways, but it seems we are coming ever closer to a world where we no longer interact with nature or other humans directly. (See my August 10, 2018 posting about the ‘extinction of experience’ for research that encourages more direct interaction with nature.) I realize that SIGGRAPH is intended as a primarily technical experience, but I think a little more content questioning these technologies and their applications (social implications) might be in order. That’s often the artist’s role, but I can’t see anything in the art gallery descriptions that hints at any sort of fundamental critique.

The role of empathy in science communication

Phys.org has a Dec. 12, 2016 essay by Nicole Miller-Struttmann on the topic of empathy and science communication,

Science communication remains as challenging as it is necessary in the era of big data. Scientists are encouraged to reach out to non-experts through social media, collaborations with citizen scientists, and non-technical abstracts. As a science enthusiast (and extrovert), I truly enjoy making these connections and having conversations that span expertise, interests and geographic barriers. However, recent divisive and impassioned responses to the surprising election results in the U.S. made me question how effective these approaches are for connecting with the public.

Are we all just stuck in our own echo chambers, ignoring those that disagree with us?

How do we break out of these silos to reach those that disengage from science or stop listening when we focus on evidence? Particularly evidence that is increasingly large in volume and in scale? Recent research suggests that a few key approaches might help: (1) managing our social media use with purpose, (2) tailoring outreach efforts to a distinct public, and (3) empathizing with our audience(s) in a deep, meaningful way.

The essay, which originally appeared on the PLOS Ecology Community blog in a Dec. 9, 2016 posting, goes on to discuss social media, citizen science/crowdsourcing, design thinking, and next gen data visualization (Note: Links have been removed),

Many of us attempt to broaden our impact by sharing interesting studies with friends, family, colleagues, and the broader public on social media. While the potential to interact directly with non-experts through social media is immense, confirmation bias (the tendency to interpret and share information that supports one’s existing beliefs) provides a significant barrier to reaching non-traditional and contrarian publics. Insights from network analyses suggest that these barriers can be overcome by managing our connections and crafting our messages carefully. …

Technology has revolutionized how the public engages in science, particularly data acquisition, interpretation and dissemination. The potential benefits of citizen science and crowd sourcing projects are immense, but there are significant challenges as well. Paramount among them is the reliance on “near-experts” and amateur scientists. Domroese and Johnson (2016) suggest that understanding what motivates citizen scientists to get involved – not what we think motivates them – is the first step to deepening their involvement and attracting diverse participants.

Design Thinking may provide a framework for reaching diverse and under-represented publics. While similar to scientific thinking in several ways, design thinking includes a crucial step that scientific thinking does not: empathizing with your audience.

It requires that the designer put themselves in the shoes of their audience, understand what motivates them (as Domroese and Johnson suggest), consider how they will interact with and perceive the ‘product’, and appeal to the perspective. Yajima (2015) summarizes how design thinking can “catalyze scientific innovation” but also why it might be a strange fit for scientists. …

Connecting the public to big data is particularly challenging, as the data are often complex with multifaceted stories to tell. Recent work suggests that art-based, interactive displays are more effective at fostering understanding of complex issues, such as climate change.

Thomsen (2015) explains that by eliciting visceral responses and stimulating the imagination, interactive displays can deepen understanding and may elicit behavioral changes.

I recommend reading this piece in its entirety as Miller-Struttmann presents a more cohesive description of current science communication practices and ideas than is sometimes the case.

Final comment: I would like to add one suggestion and that’s the adoption of an attitude of ‘muscular’ empathy. People are going to disagree with you, sometimes quite strongly (aggressively), and it can be very difficult to maintain communication with people who don’t want (i.e., reject) the communication. Maintaining empathy in the face of failure and rejection, which can extend for decades or longer, requires a certain muscularity.

Internet Archive backup in Canada?

Having a backup of the world’s digital memory is a good idea whether or not the backup site is in Canada and regardless of who is president of the United States. The Internet Archive has announced that it is raising funds to allow for the creation of a backup site. Here’s more from a Dec. 1, 2016 news item on phys.org,

The Internet Archive, which keeps historical records of Web pages, is creating a new backup center in Canada, citing concerns about surveillance following the US presidential election of Donald Trump.

“On November 9 in America, we woke up to a new administration promising radical change. It was a firm reminder that institutions like ours, built for the long term, need to design for change,” said a blog post from Brewster Kahle, founder and digital librarian at the organization.

“For us, it means keeping our cultural materials safe, private and perpetually accessible. It means preparing for a Web that may face greater restrictions.”

While Trump has announced no new digital policies, his campaign comments have raised concerns his administration would be more active on government surveillance and less sensitive to civil liberties.

Glyn Moody in a Nov. 30, 2016 posting on Techdirt eloquently describes the Internet Archive’s role (Note: Links have been removed),

The Internet Archive is probably the most important site that most people have never heard of, much less used. It is an amazing thing: not just a huge collection of freely-available digitized materials, but a backup copy of much of today’s Web, available through something known as the Wayback Machine. It gets its name from the fact that it lets visitors view snapshots of vast numbers of Web pages as they have changed over the last two decades since the Internet Archive was founded — some 279 billion pages currently. That feature makes it an indispensable — and generally unique — record of pages and information that have since disappeared, sometimes because somebody powerful found them inconvenient.

Even more eloquently, Brewster Kahle explains the initiative in his Nov. 29, 2016 posting on one of the Internet Archive blogs,

The history of libraries is one of loss.  The Library of Alexandria is best known for its disappearance.

Libraries like ours are susceptible to different fault lines:

  • Earthquakes,
  • Legal regimes,
  • Institutional failure.

So this year, we have set a new goal: to create a copy of Internet Archive’s digital collections in another country. We are building the Internet Archive of Canada because, to quote our friends at LOCKSS, “lots of copies keep stuff safe.” This project will cost millions. So this is the one time of the year I will ask you: please make a tax-deductible donation to help make sure the Internet Archive lasts forever. (FAQ on this effort).

Throughout history, libraries have fought against terrible violations of privacy—where people have been rounded up simply for what they read.  At the Internet Archive, we are fighting to protect our readers’ privacy in the digital world.

We can do this because we are independent, thanks to broad support from many of you. The Internet Archive is a non-profit library built on trust. Our mission: to give everyone access to all knowledge, forever. For free. The Internet Archive has only 150 staff but runs one of the top-250 websites in the world. Reader privacy is very important to us, so we don’t accept ads that track your behavior.  We don’t even collect your IP address. But we still need to pay for the increasing costs of servers, staff and rent.

You may not know this, but your support for the Internet Archive makes more than 3 million e-books available for free to millions of Open Library patrons around the world.

Your support has fueled the work of journalists who used our Political TV Ad Archive in their fact-checking of candidates’ claims.

It keeps the Wayback Machine going, saving 300 million Web pages each week, so no one will ever be able to change the past just because there is no digital record of it. The Web needs a memory, the ability to look back.

My two most relevant past posts on the topic of archives and memories are this May 18, 2012 piece about Luciana Duranti’s talk about authenticity and trust regarding digital documents and this March 8, 2012 posting about digital memory, which also features a mention of Brewster Kahle and the Internet Archive.

Curiosity Collider (Vancouver, Canada) presents Neural Constellations: Exploring Connectivity

I think of Curiosity Collider as an informal art/science presenter but I gather the organizers’ ambitions are grander. From the Curiosity Collider’s About Us page,

Curiosity Collider provides an inclusive community [emphasis mine] hub for curious innovators from any discipline. Our non-profit foundation, based in Vancouver, Canada, fosters participatory partnerships between science & technology, art & culture, business communities, and educational foundations to inspire new ways to experience science. The Collider’s growing community supports and promotes the daily relevance of science with our events and projects. Curiosity Collider is a catalyst for collaborations that seed and grow engaging science communication projects.

Be inspired by the curiosity of others. Our Curiosity Collider events cross disciplinary lines to promote creative inspiration. Meet scientists, visual and performing artists, culinary perfectionists, passionate educators, and entrepreneurs who share a curiosity for science.

Help us create curiosity for science. Spark curiosity in others with your own ideas and projects. Get in touch with us and use our curiosity events to showcase how your work creates innovative new ways to experience science.

I wish they hadn’t described themselves as an “inclusive community.” This often means exactly the opposite.

Take for example the website. The background is black, the headings are white, and the text is grey. This is a website for people under the age of 40. If you want to be inclusive, you make your website legible for everyone.

That said, there’s an upcoming Curiosity Collider event which looks promising (from a July 20, 2016 email notice),

Neural Constellations: Exploring Connectivity

An Evening of Art, Science and Performance under the Dome

“We are made of star stuff,” Carl Sagan once said. From constellations to our nervous system, from stars to our neurons. We’re colliding neuroscience and astronomy with performance art, sound, dance, and animation for one amazing evening under the planetarium dome. Together, let’s explore similar patterns at the macro (astronomy) and micro (neurobiology) scale by taking a tour through both outer and inner space.

This show is curated by Curiosity Collider’s Creative Director Char Hoyt, along with Special Guest Curator Naila Kuhlmann, and developed in collaboration with the MacMillan Space Centre. There will also be an Art-Science silent auction to raise funding for future Curiosity Collider activities.

Participating performers include:

The July 20, 2016 notice also provides information about date, time, location, and cost,

When
7:30pm on Thursday, August 18th 2016. Join us for drinks and snacks when doors open at 6:30pm.

Where
H. R. MacMillan Space Centre (1100 Chestnut Street, Vancouver, BC)

Cost
$20.00 sliding scale. Proceeds will be used to cover the cost of running this event, and to fund future Curiosity Collider events. Curiosity Collider is a registered BC non-profit organization. Purchase tickets on our Eventbrite page.

Head to the Facebook event page: Let us know you are coming and share this event with others! We will also share event updates and performer profiles on the Facebook page.

There is a pretty poster,


[downloaded from http://www.curiositycollider.org/events/]

Enjoy!

You have till June 30, 2016 to submit your NanoArt and/or art-science-technology paper

A June 9, 2016 news item on Nanotechnology Now features a call for submissions to a NanoArt festival,

The 4th International Festival of NanoArt and an Art-Science-Technology special session will be hosted in Cluj-Napoca, Romania, by Babes-Bolyai University between September 8 – 14, 2016, in parallel with the 11th International Conference on Physics of Advanced Materials (Nanomaterials).

The artworks will be shown in the Hall of the Transilvania Philharmonic Cluj-Napoca (…). The exhibition is curated by artist and scientist Cris Orfescu, founder of NanoArt 21, and artist Mirela Suchea, PhD, a researcher in the field of nanostructured materials synthesis. The previous editions of the festival were held in Finland, Germany, and Romania. For additional information, visit: nanoart21.org/nanoart_festival.html

Call for Papers

An Art-Science-Technology special session will be held during the 11th International Conference on Physics of Advanced Materials (ICPAM 11) between 8th to 14th of September, 2016 at Babes-Bolyai University of Cluj-Napoca, Romania.

This session focuses on presentations (oral and poster) related to NanoArt, Scientific Photography (microphotography, bio, medical, space, environmental, etc.), Digital Art, Video Art, Computer Graphics, Computer Animation, Game Design, Interactive Art, Net Art, Fractal Art, Algorithmic Art, Virtual Reality, Math Art.

Abstract Submission – Deadline June 30th 2016. Authors are invited to submit a summary of no more than 2000 characters (including spaces) using Conference Online Management System (www.abstractcentral.ro). …

According to the submission page, there are a few more rules,


  • The presenting author must be a paid registrant.
  • The authors can choose the presentation form of the paper among oral presentation or poster presentation.
  • Members of the Advisory Board can decide to change the final presentation form of the proposed contribution.
  • Authors will be notified of acceptance and mode of presentation of their papers before August 15, 2016.


There is also a call for artworks, from the 4th International Festival of NanoArt webpage,

THE 4th INTERNATIONAL FESTIVAL OF NANOART
Open to All Artists and Scientists

Submission deadline July 15, 2016

The following are the general directions for artwork submission. For selection, the artists can send for free up to 5 images in .jpeg format at a resolution of 72 dpi with the longest dimension of maximum 800 pixels. Each image should be sent with an entry form (click to download). The image(s) and entry form(s) should be sent by e-mail to info@nanoart21.org no later than July 15, 2016. The artists will receive the acceptance email by July 30, 2016 and they should submit high resolution (300dpi) .jpeg files of the accepted works in A3 format size (29.7cm x 42cm or 11.69in x 16.54in) no later than July 30, 2016. The selected images will be sent in digital format to the host venue where they will be printed, matted, and framed. The cost TBA should be paid (see ‘Buy Now’ button below) at the time when artists send the high resolution files. After the event, the works may be exhibited in different venues for continuing education. A travel exhibition to different venues is always a possibility. If artists would like to have their print, they will have to pay for handling and shipping.

The festival will be promoted on different venues online, nanoart21.org contacts, social media, word-of-mouth. The artists could also promote the competition on their websites and other venues. All selected artworks will be shown in a multimedia presentation on the nanoart21.org festival’s page.

Copyright of entered artworks remains with the artist who agrees by submitting his/her works to grant permission to nanoart21.org and Cristian Orfescu to use the submitted material in exhibits, on the nanoart21.org web site, and other media for marketing and printing for off line marketing. Your permission to display the entry for the festival and later online and in the archives cannot be reversed and its use or removal is entirely at the discretion of nanoart21.org.

After the artworks have been accepted for the festival, the artists can pay 20 Euro/artwork for printing, framing, matting, and exhibition by … .

Good luck with your submissions.

Next Horizons: Electronic Literature Organization (ELO) 2016 conference in Victoria, BC

The Electronic Literature Organization (ELO; based at the Massachusetts Institute of Technology [MIT]) is holding its annual conference themed Next Horizons (from an Oct. 12, 2015 post on the ELO blog) at the University of Victoria on Vancouver Island, British Columbia from June 10 – June 12, 2016.

You can get a better sense of what it’s all about by looking at the conference schedule/programme,

Friday, June 10, 2016

8:00 a.m.–5:00 p.m.: Registration
MacLaurin Lobby A100

8:00 a.m.-10:00 a.m: Breakfast
Sponsored by Bloomsbury Academic

10:00 a.m.-10:30: Welcome
MacLaurin David Lam Auditorium A 144
Speakers: Dene Grigar & Ray Siemens

10:30-12 noon: Featured Papers
MacLaurin David Lam Auditorium A 144
Chair: Alexandra Saum-Pascual, UC Berkeley

  • Stuart Moulthrop, “Intimate Mechanics: Play and Meaning in the Middle of Electronic Literature”
  • Anastasia Salter, “Code before Content? Brogrammer Culture in Games and Electronic Literature”

12 Noon-1:45 p.m.  Gallery Opening & Lunch Reception
MacLaurin Lobby A 100
Kick off event in celebration of e-lit works
A complete list of artists featured in the Exhibit

1:45-3:00: Keynote Session
MacLaurin David Lam Auditorium A 144
“Prototyping Resistance: Wargame Narrative and Inclusive Feminist Discourse”

  • Jon Saklofske, Acadia University
  • Anastasia Salter, University of Central Florida
  • Liz Losh, College of William and Mary
  • Diane Jakacki, Bucknell University
  • Stephanie Boluk, UC Davis

3:00-3:15: Break

3:15-4:45: Concurrent Session 1

Session 1.1: Best Practices for Archiving E-Lit
MacLaurin D010
Roundtable
Chair: Dene Grigar, Washington State University Vancouver

  • Dene Grigar, Washington State University Vancouver
  • Stuart Moulthrop, University of Wisconsin Milwaukee
  • Matthew Kirschenbaum, University of Maryland College Park
  • Judy Malloy, Independent Artist

Session 1.2: Medium & Meaning
MacLaurin D110
Chair: Rui Torres, University Fernando Pessoa

  • “From eLit to pLit,” Heiko Zimmerman, University of Trier
  • “Generations of Meaning,” Hannah Ackermans, Utrecht University
  • “Co-Designing DUST,” Kari Kraus, University of Maryland College Park

Session 1.3: A Critical Look at E-Lit
MacLaurin D105
Chair: Philippe Brand, Lewis & Clark College

  • “Methods of Interrogation,” John Murray, University of California Santa Cruz
  • “Peering through the Window,” Philippe Brand, Lewis & Clark College
  • “(E-)re-writing Well-Known Works,” Agnieszka Przybyszewska, University of Lodz

Session 1.4: Literary Games
MacLaurin D109
Chair: Alex Mitchell, National University of Singapore

  • “Twine Games,” Alanna Bartolini, UC Santa Barbara
  • “Whose Game Is It Anyway?,” Ryan House, Washington State University Vancouver
  • “Micronarratives Dynamics in the Structure of an Open-World Action-Adventure Game,” Natalie Funk, Simon Fraser University

Session 1.5: eLit and the (Next) Future of Cinema
MacLaurin D107
Roundtable
Chair: Steven Wingate, South Dakota State University

  • Steve Wingate, South Dakota State University
  • Kate Armstrong, Emily Carr University
  • Samantha Gorman, USC

Session 1.6: Authors & Texts
MacLaurin D101
Chair: Robert Glick, Rochester Institute of Technology

  • “Generative Poems by Maria Mencia,” Angelica Huizar, Old Dominion University
  • “Inhabitation: Johanna Drucker: “no file is ever self-identical,” Joel Kateinikoff, University of Alberta
  • “The Great Monster: Ulises Carrión as E-Lit Theorist,” Élika Ortega, University of Kansas
  • “Pedagogic Strategies for Electronic Literature,” Mia Zamora, Kean University

3:15-4:45: Action Session Day 1
MacLaurin D111

  • Digital Preservation, by Nicholas Schiller, Washington State University Vancouver; Zach Coble, NYU
  • ELMCIP, Scott Rettberg and Álvaro Seiça, University of Bergen; Hannah Ackermans, Utrecht University
  • Wikipedia-A-Thon, Liz Losh, College of William and Mary

5:00-6:00: Reception and Poster Session
University of Victoria Faculty Club
For ELO, DHSI, & INKE Participants, featuring these artists and scholars from the ELO:

  • “Social Media for E-Lit Authors,” Michael Rabby, Washington State University Vancouver
  • “– O True Apothecary!,” Kyle Booten, UC Berkeley, Center for New Media
  • “Life Experience through Digital Simulation Narratives,” David Núñez Ruiz, Neotipo
  • “Building Stories,” Kate Palermini, Washington State University Vancouver
  • “Help Wanted and Skills Offered,” by Deena Larsen, Independent Artist; Julianne Chatelain, U.S. Bureau of Reclamation
  • “Beyond Original E-Lit: Deconstructing Austen Cybertexts,” Meredith Dabek, Maynooth University
  • Arabic E-Lit. (AEL) Project, Riham Hosny, Rochester Institute of Technology/Minia University
  • “Poetic Machines,” Sidse Rubens LeFevre, University of Copenhagen
  • “Meta for Meta’s Sake,” Melinda White, University of New Hampshire

7:30-11:00: Readings & Performances at Felicita’s
A complete list of artists featured in the event

Saturday, June 11, 2016

8:30-10:00: Lightning Round
MacLaurin David Lam Auditorium A 144
Chair: James O’Sullivan, University of Sheffield

  • “Different Tools but Similar Wits,” Guangxu Zhao, University of Ottawa
  • “Digital Aesthetics,” Bertrand Gervais, Université du Québec à Montréal
  • “Hatsune Miku,” Roman Kalinovski, Independent Scholar
  • “Meta for Meta’s Sake,” Melinda White, University of New Hampshire
  • “Narrative Texture,” Luciane Maria Fadel, Simon Fraser University
  • “Natural Language Generation,” by Stefan Muller Arisona
  • “Poetic Machines,” Sidse Rubens LeFevre, University of Copenhagen
  • “Really Really Long Works,” Aden Evens, Dartmouth College
  • “UnWrapping the E-Reader,” David Roh, University of Utah
  • “Social Media for E-Lit Artists,” Michael Rabby

10:00: Gallery exhibit opens
MacLaurin A100
A complete list of artists featured in the Exhibit

10:30-12 noon: Concurrent Session 2

Session 2.1: Literary Interventions
MacLaurin D101
Chair: Brian Ganter, Capilano College

  • “Glitching the Poem,” Aaron Angello, University of Colorado Boulder
  • “WALLPAPER,” Alice Bell, Sheffield Hallam University; Astrid Ensslin, University of Alberta
  • “Unprintable Books,” Kate Pullinger [emphasis mine], Bath Spa University

Session 2.2: Theoretical Underpinnings
MacLaurin D105
Chair: Mia Zamora, Kean University

  • “Transmediation,” Kedrick James, University of British Columbia; Ernesto Pena, University of British Columbia
  • “The Closed World, Databased Narrative, and Network Effect,” Mark Sample, Davidson College
  • “The Cyborg of the House,” Maria Goicoechea, Universidad Complutense de Madrid

Session 2.3: E-Lit in Time and Space
MacLaurin D107
Chair: Andrew Klobucar, New Jersey Institute of Technology

  • “Electronic Literary Artifacts,” John Barber, Washington State University Vancouver; Alcina Cortez, INET-MD, Instituto de Etnomusicologia, Música e Dança
  • “The Old in the Arms of the New,” Gary Barwin, Independent Scholar
  • “Space as a Meaningful Dimension,” Luciane Maria Fadel, Simon Fraser University

Session 2.4: Understanding Bots
MacLaurin D110
Roundtable
Chair: Leonardo Flores, University of Puerto Rico, Mayagüez

  • Allison Parrish, Fordham University
  • Matt Schneider, University of Toronto
  • Tobi Hahn, Paisley Games
  • Zach Whalen, University of Mary Washington

10:30-12 noon: Action Session Day 2
MacLaurin D111

  • Digital Preservation, by Nicholas Schiller, Washington State University Vancouver; Zach Coble, NYU
  • ELMCIP, Allison Parrish, Fordham University; Scott Rettberg, University of Bergen; David Nunez Ruiz, Neotipo; Hannah Ackermans, Utrecht University
  • Wikipedia-A-Thon, Liz Losh, College of William and Mary

12:15-1:15: Artists Talks & Lunch
David Lam Auditorium MacLaurin A144

  • “The Listeners,” by John Cayley
  • “The ChessBard and 3D Poetry Project as Translational Ecosystems,” Aaron Tucker, Ryerson University
  • “News Wheel,” Jody Zellen, Independent Artist
  • “x-o-x-o-x.com,” Erik Zepka, Independent Artist

1:30-3:00: Concurrent Session 3

Session 3.1: E-Lit Pedagogy in Global Setting
MacLaurin D111
Roundtable
Co-Chairs: Philippe Bootz, Université Paris 8; Riham Hosny, Rochester Institute of Technology/Minia University

  • Sandy Baldwin, Rochester Institute of Technology
  • Maria Goicoechea, Universidad Complutense de Madrid
  • Odile Farge, UNESCO Chair ITEN, Foundation MSH/University of Paris 8

Session 3.2: The Art of Computational Media
MacLaurin D109
Chair: Rui Torres, University Fernando Pessoa

  • “Creative GREP Works,” Kristopher Purzycki, University of Wisconsin Milwaukee
  • “Using Theme to Author Hypertext Fiction,” Alex Mitchell, National University of Singapore

Session 3.3: Present Future Past
MacLaurin D110
Chair: David Roh, University of Utah

  • “Exploring Potentiality,” Daniela Côrtes Maduro, Universität Bremen
  • “Programming the Kafkaesque Mechanism,” by Kristof Anetta, Slovak Academy of Sciences
  • “Reappraising Word Processing,” Matthew Kirschenbaum, University of Maryland College Park

Session 3.4: Beyond Collaborative Horizons
MacLaurin D010
Panel
Chair: Jeremy Douglass, UC Santa Barbara

  • Jeremy Douglass, UC Santa Barbara
  • Mark Marino, USC
  • Jessica Pressman, San Diego State University

Session 3.5: E-Loops: Reshuffling Reading & Writing In Electronic Literature Works
MacLaurin D105
Panel
Chair: Gwen Le Cor, Université Paris 8

  • “The Plastic Space of E-loops and Loopholes: the Figural Dynamics of Reading,” Gwen Le Cor, Université Paris 8
  • “Beyond the Cybernetic Loop: Redrawing the Boundaries of E-Lit Translation,” Arnaud Regnauld, Université Paris 8
  • “E-Loops: The Possible and Variable Figure of a Contemporary Aesthetic,” Ariane Savoie, Université du Québec à Montréal and Université Catholique de Louvain
  • “Relocating the Digital,” Stéphane Vanderhaeghe, Université Paris 8

Session 3.6: Metaphorical Perspectives
MacLaurin D107
Chair: Alexandra Saum-Pascual, UC Berkeley

  • “Street Ghosts,” Ali Rachel Pearl, USC
  • “The (Wo)men’s Social Club,” Amber Strother, Washington State University Vancouver

Session 3.7: Embracing Bots
MacLaurin D101
Roundtable
Chair: Zach Whalen, University of Mary Washington

  • Leonardo Flores, University of Puerto Rico Mayagüez Campus
  • Chris Rodley, University of Sydney
  • Élika Ortega, University of Kansas
  • Katie Rose Pipkin, Carnegie Mellon

1:30-3:30: Workshops
MacLaurin D115

  • “Bots,” Zach Whalen, University of Mary Washington
  • “Twine”
  • “AR/VR,” John Murray, UC Santa Cruz
  • “Unity 3D,” Stefan Muller Arisona, University of Applied Sciences and Arts Northwestern Switzerland; Simon Schubiger, University of Applied Sciences and Arts Northwestern Switzerland
  • “Exploratory Programming,” Nick Montfort, MIT
  • “Scalar,” Hannah Ackermans, University of Utrecht
  • “The Electronic Poet’s Workbench: Build a Generative Writing Practice,” Andrew Klobucar, New Jersey Institute of Technology; David Ayre, Programmer and Independent Artist

3:30-5:00: Keynote

Christine Wilks [emphasis mine], “Interactive Narrative and the Art of Steering Through Possible Worlds”
MacLaurin David Lam Auditorium A144

Wilks is a British digital writer, artist and developer of playable stories. Her digital fiction, Underbelly, won the New Media Writing Prize 2010 and the MaMSIE Digital Media Competition 2011. Her work is published in online journals and anthologies, including the Electronic Literature Collection, Volume 2 and the ELMCIP Anthology of European Electronic Literature, and has been presented at international festivals, exhibitions and conferences. She is currently doing a practice-based PhD in Digital Writing at Bath Spa University and is also Creative Director of the e-learning specialists Make It Happen.

5:15-6:45: Screenings at Cinecenta
A complete list of artists featured in the Screenings

7:00-9:00: Banquet (a dance follows)
University of Victoria Faculty Club

Sunday, June 12, 2016

8:30-10:00: Town Hall
MacLaurin David Lam Auditorium A144

10:00: Gallery exhibit opens
MacLaurin A100
A complete list of artists featured in the Exhibit

10:30-12 p.m.: Concurrent Session 4

Session 4.1: Narratives & Narrativity
MacLaurin D110
Chair: Kedrick James, University of British Columbia

  • “Narrativity in Virtual Reality,” Illya Szilak, Independent Scholar
  • “Simulation Studies,” David Ciccoricco, University of Otago
  • “Future Fiction Storytelling Machines,” Caitlin Fisher, York University

Session 4.2: Historical & Critical Perspectives
MacLaurin D101
Chair: Robert Glick, Rochester Institute of Technology

  • “The Evolution of E-Lit,” James O’Sullivan, University of Sheffield
  • “The Logic of Selection,” by Matti Kangaskoski, Helsinki University

Session 4.3: Emergent Media
MacLaurin D107
Chair: Alexandra Saum-Pascual, UC Berkeley

  • “Seasons II: a case study in Ambient Video, Generative Art, and Audiovisual Experience,” Jim Bizzocchi, Simon Fraser University; Arne Eigenfeldt, Simon Fraser University; Philippe Pasquier, Simon Fraser University; Miles Thorogood, Simon Fraser University
  • “Cinematic Turns,” Liz Losh, College of William and Mary
  • “Mario Mods and Ludic Seriality,” Shane Denson, Duke University

Session 4.4: The E-Literary Object
MacLaurin D109
Chair: Deena Larsen, Independent Artist

  • “How E-Literary Is My E-Literature?,” by Leonardo Flores, University of Puerto Rico Mayagüez Campus
  • “Overcoming the Locative Interface Fallacy,” by Lauren Burr, University of Waterloo
  • “Interactive Narratives on the Block,” Aynur Kadir, Simon Fraser University

Session 4.5: Next Narrative
MacLaurin D010
Panel
Chair: Marjorie Luesebrink

  • Marjorie Luesebrink, Independent Artist
  • Daniel Punday, Independent Artist
  • Will Luers, Washington State University Vancouver

10:30-12 p.m.: Action Session Day 3
MacLaurin D111

  • Digital Preservation, by Nicholas Schiller, Washington State University Vancouver; Zach Coble, NYU
  • ELMCIP, Allison Parrish, Fordham University; Scott Rettberg, University of Bergen; David Nunez Ruiz, Neotipo; Hannah Ackermans, Utrecht University
  • Wikipedia-A-Thon, Liz Losh, College of William and Mary

12:15-1:30: Artists Talks & Lunch
David Lam Auditorium A144

  • “Just for the Cameras,” Flourish Klink, Independent Artist
  • “Lulu Sweet,” Deanne Achong and Faith Moosang, Independent Artists
  • “Drone Pilot,” Ian Hatcher, Independent Artist
  • “AVATAR/MOCAP,” Alan Sondheim, Independent Artist

1:30-3:00 : Concurrent Session 5

Session 5.1: Subversive Texts
MacLaurin D101
Chair: Michael Rabby, Washington State University Vancouver

  • “E-Lit Jazz,” Sandy Baldwin, Rochester Institute of Technology; Rui Torres, University Fernando Pessoa
  • “Pop Subversion in Electronic Literature,” Davin Heckman, Winona State University
  • “E-Lit in Arabic Universities,” Riham Hosny, Rochester Institute of Technology/Minia University

Session 5.2: Experiments in #NetProv & Participatory Narratives
MacLaurin D109
Roundtable
Chair: Mia Zamora, Kean University

  • Mark Marino, USC
  • Rob Wittig, Meanwhile… Netprov Studio
  • Mia Zamora, Kean University

Session 5.3: Emergent Media
MacLaurin D105
Chair: Andrew Klobucar, New Jersey Institute of Technology

  • “Migrating Electronic Literature to the Kinect System,” Monika Gorska-Olesinska, University of Opole
  • “Mobile and Tactile Screens as Venues for the Performing Arts?,” Serge Bouchardon, Sorbonne Universités, Université de Technologie de Compiègne
  • “The Unquantified Self: Imagining Ethopoiesis in the Cognitive Era,” Andrew Klobucar, New Jersey Institute of Technology

Session 5.4: E-Lit Labs
MacLaurin D010
Chair: Jim Brown, Rutgers University Camden

  • Jim Brown, Rutgers University Camden
  • Robert Emmons, Rutgers University Camden
  • Brian Greenspan, Carleton University
  • Stephanie Boluk, UC Davis
  • Patrick LeMieux, UC Davis

Session 5.5: Transmedia Publishing
MacLaurin D107
Roundtable
Chair: Philippe Bootz

  • Philippe Bootz, Université Paris 8
  • Lucile Haute, Université Paris 8
  • Nolwenn Trehondart, Université Paris 8
  • Steve Wingate, South Dakota State University

Session 5.6: Feminist Horizons
MacLaurin D110
Panel
Moderator: Anastasia Salter, University of Central Florida

  • Kathi Inman Berens, Portland State University
  • Jessica Pressman, San Diego State University
  • Caitlin Fisher, York University

3:30-5:00: Closing Session
David Lam Auditorium MacLaurin A144
Chairs: John Cayley, Brown University; Dene Grigar, President, ELO

  • “Platforms and Genres of Electronic Literature,” Scott Rettberg, University of Bergen
  • “Emergent Story Structures,” David Meurer, York University
  • “We Must Go Deeper,” Samantha Gorman, USC; Milan Koerner-Safrata, Recon Instruments

I’ve bolded two names. The first is Christine Wilks, one of the two conference keynote speakers, who completed her MA in the same cohort as I did in De Montfort University’s Creative Writing and New Media master’s programme. Congratulations on being a keynote speaker, Christine! The second is Kate Pullinger, who was one of two readers for that same programme. Since those days, Pullinger has won a Governor General’s Literary Award for her novel, “The Mistress of Nothing,” and become a professor at Bath Spa University (UK).

Registration appears to be open.

5D data storage is forever

Combine nanostructured glass and femtosecond laser writing with five-dimensional digital data and you can wave goodbye to any anxieties about losing information. Researchers at the University of Southampton (UK) made the announcement in a Feb. 15, 2016 news item on ScienceDaily,

Scientists at the University of Southampton have made a major step forward in the development of digital data storage that is capable of surviving for billions of years.

Using nanostructured glass, scientists from the University’s Optoelectronics Research Centre (ORC) have developed the recording and retrieval processes of five dimensional (5D) digital data by femtosecond laser writing.

A Feb. 15, 2016 University of Southampton press release (also on EurekAlert), which originated the news item, offers more detail,

The storage allows unprecedented properties including 360 TB [Terabyte]/disc data capacity, thermal stability up to 1,000°C and virtually unlimited lifetime at room temperature (13.8 billion years at 190°C) opening a new era of eternal data archiving. As a very stable and safe form of portable memory, the technology could be highly useful for organisations with big archives, such as national archives, museums and libraries, to preserve their information and records.

The technology was first experimentally demonstrated in 2013 when a 300 kb [kilobit] digital copy of a text file was successfully recorded in 5D.

Now, major documents from human history such as [the] Universal Declaration of Human Rights (UDHR), Newton’s Opticks, Magna Carta and Kings [sic] James Bible, have been saved as digital copies that could survive the human race. A copy of the UDHR encoded to 5D data storage was recently presented to UNESCO by the ORC at the International Year of Light (IYL) closing ceremony in Mexico.

The documents were recorded using an ultrafast laser, producing extremely short and intense pulses of light. The file is written in three layers of nanostructured dots separated by five micrometres (a micrometre is one millionth of a metre).

The self-assembled nanostructures change the way light travels through glass, modifying the polarisation of light, which can then be read by a combination of an optical microscope and a polariser, similar to that found in Polaroid sunglasses.

Dubbed the ‘Superman memory crystal’, as the glass memory has been compared to the “memory crystals” used in the Superman films, the data is recorded via self-assembled nanostructures created in fused quartz. The information encoding is realised in five dimensions: the size and orientation of these nanostructures, in addition to their three-dimensional position.
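
A quick aside from me: the press release doesn’t spell out how bits map onto those five dimensions, but the idea is easy to sketch in code. The encoding below is my own invention for illustration only; the names, bit allocations and spacings are assumptions, not the ORC’s actual scheme. Each dot’s three spatial coordinates say where it sits, while its orientation and size levels carry the data,

    # Illustrative sketch of 5D optical data storage. The encoding is
    # hypothetical; the ORC's real scheme isn't described in the press release.
    from dataclasses import dataclass

    @dataclass
    class Voxel:
        x_um: float       # position within the layer, in micrometres
        y_um: float
        layer: int        # one of three nanostructured layers, 5 um apart
        orientation: int  # slow-axis orientation, quantised to 4 levels (2 bits)
        size: int         # size/retardance, quantised to 4 levels (2 bits)

    def encode_nibble(nibble: int, x_um: float, y_um: float, layer: int) -> Voxel:
        """Pack 4 bits into one voxel: 2 bits of orientation, 2 bits of size."""
        assert 0 <= nibble < 16
        return Voxel(x_um, y_um, layer, orientation=nibble >> 2, size=nibble & 0b11)

    def decode_nibble(v: Voxel) -> int:
        """Recover the 4 bits from a voxel's orientation and size levels."""
        return (v.orientation << 2) | v.size

    data = b"UDHR"
    voxels = [encode_nibble(nibble, x_um=5.0 * (2 * i + j), y_um=0.0, layer=0)
              for i, byte in enumerate(data)
              for j, nibble in enumerate((byte >> 4, byte & 0x0F))]
    decoded = bytes((decode_nibble(voxels[k]) << 4) | decode_nibble(voxels[k + 1])
                    for k in range(0, len(voxels), 2))
    assert decoded == data

Because orientation and size add two writable degrees of freedom on top of position, each dot can store several bits rather than one, which is where headline figures like 360 TB per disc come from.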

Professor Peter Kazansky, from the ORC, says: “It is thrilling to think that we have created the technology to preserve documents and information and store it in space for future generations. This technology can secure the last evidence of our civilisation: all we’ve learnt will not be forgotten.”

The researchers will present their research at the photonics industry’s renowned SPIE (The International Society for Optical Engineering) conference in San Francisco, USA this week. The invited paper, ‘5D Data Storage by Ultrafast Laser Writing in Glass’, will be presented on Wednesday 17 February [2016].

The team are now looking for industry partners to further develop and commercialise this ground-breaking new technology.
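
Another aside, on that “13.8 billion years at 190°C” figure: the press release doesn’t say how it was obtained, but lifetimes of this kind are normally extrapolated from accelerated-ageing measurements using an Arrhenius-type model. A sketch of the usual form (my assumption, not the team’s stated method):

    \tau(T) = \tau_0 \exp\!\left( \frac{E_a}{k_B T} \right)

Here τ is the expected lifetime at absolute temperature T, τ₀ is a fitted prefactor, E_a is the activation energy for the nanostructures to decay and k_B is Boltzmann’s constant. Decay rates measured at several elevated temperatures pin down τ₀ and E_a, and the formula is then evaluated at lower temperatures; that is how a finite lifetime at 190°C can coexist with an effectively unlimited one at room temperature.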

I have written a number of pieces about digitization, data storage, and memory such as this Jan. 30, 2014 post titled, Does digitizing material mean it’s safe? A tale of Canada’s Fisheries and Oceans scientific libraries. If you scroll down about 50% of the way, you’ll find some material that provides an overview.

Universal Declaration of Human Rights recorded into 5D optical data

An open science policy platform for Europe and a technology programme for the arts community

Thanks to David Bruggeman’s Dec. 8, 2015 posting on his Pasco Phronesis blog, I’ve gotten some details about the European Union’s (EU) Open Science Policy Platform and about a science, technology and arts programme to connect artists with scientists (Note: Links have been removed),

Recently the European Commission’s [EC] Directorate-General for Research and Development announced the development of an Open Science Policy Platform.  In the European Commission context, Open Science is one of its Digital Government initiatives, but this Policy Platform is not technical infrastructure.  It is a communications mechanism for stakeholders in open access, new digital tools for research and joint arts and research communities.

David goes on to contrast the open science situation in the US with the approach being taken in the EU. Unfortunately, I do not have sufficient knowledge of the Canadian open science scene to offer any opinion.

Getting back to Europe, there is a government document from the EC’s Directorate-General for Research and Innovation (RTD [Research and Technological Development]) titled, New policy initiative: The establishment of an Open Science Policy Platform,

The Open Science Policy Platform will be governed by a Steering Group composed of top-leading individuals of (European) branch organisations with the required decision-power. DG RTD will seek to appoint individuals from the following stakeholder groups:

-universities;
-academies of science;
-research funding bodies;
-research performing organisations;
-Citizen Science;
-scientific publication associations;
-Open Science platforms and intermediaries;
-(research) libraries.

The Open Science Policy Platform will advise the Commission on the development and implementation of open science policy on the basis of the draft European Open Science Agenda.

The steering group for this platform will be set up in early 2016 according to the undated document describing this new policy initiative.

Regarding the arts project mentioned earlier, it’s part of the European Union’s Digital Agenda for Europe, from the ICT (information and communication technology) and art – the StARTS platform webpage on the European Commission’s website,

Scientific and technological skills are not the only forces driving innovation. Creativity and the involvement of society play a major role in the innovation process and its endorsement by all. In this context, the Arts serve as catalysts in an efficient conversion of Science and Technology knowledge into novel products, services, and processes.

ICT can enhance our capacity to sense the world, but an artwork can reach audiences on intrinsic emotional levels.

The constant appropriation of new technologies by artists allows them to go further in actively participating in society. By using ICT as their medium of expression, artists are able to prototype solutions, create new products and make new economic, social and business models. Additionally, by using traditional mediums of expression and considering the potentials of ICT, they propose new approaches to research and education.

The European Commission recognised this by launching the Starts programme: Innovation at the nexus of Science, Technology and the Arts (Starts) to foster the emergence of joint arts and research communities. It supported the ICT Art Connect study, which led the way to the StARTS initiative by revealing new evidence for the integration of the Arts as an essential and fruitful component within research and innovation in ICT.

A Call for a Coordination and support action (CSA) has been launched to boost synergies between artists, creative people and technologists under Horizon 2020 Work Programme 2016/17.

You can find out more about events taking place throughout Europe. Follow StARTS on Facebook or via #StartsEU.

You can find the Starts website here.

Performances Tom Hanks never gave

The answer to the question, “What makes Tom Hanks look like Tom Hanks?” leads to machine learning and algorithms according to a Dec. 7, 2015 University of Washington news release (also on EurekAlert; Note: Links have been removed),

Tom Hanks has appeared in many acting roles over the years, playing young and old, smart and simple. Yet we always recognize him as Tom Hanks.

Why? Is it his appearance? His mannerisms? The way he moves?

University of Washington researchers have demonstrated that it’s possible for machine learning algorithms to capture the “persona” and create a digital model of a well-photographed person like Tom Hanks from the vast number of images of them available on the Internet.

With enough visual data to mine, the algorithms can also animate the digital model of Tom Hanks to deliver speeches that the real actor never performed.

“One answer to what makes Tom Hanks look like Tom Hanks can be demonstrated with a computer system that imitates what Tom Hanks will do,” said lead author Supasorn Suwajanakorn, a UW graduate student in computer science and engineering.

As for the performances Tom Hanks never gave, the news release offers more detail,

The technology relies on advances in 3-D face reconstruction, tracking, alignment, multi-texture modeling and puppeteering that have been developed over the last five years by a research group led by UW assistant professor of computer science and engineering Ira Kemelmacher-Shlizerman. The new results will be presented in a paper at the International Conference on Computer Vision in Chile on Dec. 16.

The team’s latest advances include the ability to transfer expressions and the way a particular person speaks onto the face of someone else — for instance, mapping former president George W. Bush’s mannerisms onto the faces of other politicians and celebrities.

Here’s a video demonstrating how former President Bush’s speech and mannerisms have been mapped onto other famous faces, including Hanks’s,

The research team has future plans for this technology (from the news release),

It’s one step toward a grand goal shared by the UW computer vision researchers: creating fully interactive, three-dimensional digital personas from family photo albums and videos, historic collections or other existing visuals.

As virtual and augmented reality technologies develop, they envision using family photographs and videos to create an interactive model of a relative living overseas or a far-away grandparent, rather than simply Skyping in two dimensions.

“You might one day be able to put on a pair of augmented reality glasses and there is a 3-D model of your mother on the couch,” said senior author Kemelmacher-Shlizerman. “Such technology doesn’t exist yet — the display technology is moving forward really fast — but how do you actually re-create your mother in three dimensions?”

One day the reconstruction technology could be taken a step further, researchers say.

“Imagine being able to have a conversation with anyone you can’t actually get to meet in person — LeBron James, Barack Obama, Charlie Chaplin — and interact with them,” said co-author Steve Seitz, UW professor of computer science and engineering. “We’re trying to get there through a series of research steps. One of the true tests is can you have them say things that they didn’t say but it still feels like them? This paper is demonstrating that ability.”

Existing technologies to create detailed three-dimensional holograms or digital movie characters like Benjamin Button often rely on bringing a person into an elaborate studio. They painstakingly capture every angle of the person and the way they move — something that can’t be done in a living room.

Other approaches still require a person to be scanned by a camera to create basic avatars for video games or other virtual environments. But the UW computer vision experts wanted to digitally reconstruct a person based solely on a random collection of existing images.

To reconstruct celebrities like Tom Hanks, Barack Obama and Daniel Craig, the machine learning algorithms mined a minimum of 200 Internet images taken over time in various scenarios and poses — a process known as learning ‘in the wild.’
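
The news release includes no code, but the ‘in the wild’ step is easy to caricature: find the face and its landmark points in each uncontrolled photo, normalise away position and scale, and average over the whole collection to obtain a stable geometric model of the person. Here’s a minimal sketch using the dlib library (my choice for illustration; the UW team’s actual pipeline is far more sophisticated, and the model file and photo folder named below are assumptions),

    # Minimal 'learning in the wild' sketch: average normalised face landmarks
    # over an unconstrained photo collection. Illustrative only; the UW system
    # also reconstructs 3D geometry, texture and expression-dependent detail.
    import glob
    from typing import Optional

    import dlib
    import numpy as np

    detector = dlib.get_frontal_face_detector()
    # Assumed local path; dlib's 68-landmark model is downloaded separately.
    predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

    def landmarks(image_path: str) -> Optional[np.ndarray]:
        """Return 68 (x, y) landmarks normalised to the face box, or None."""
        img = dlib.load_rgb_image(image_path)
        faces = detector(img)
        if not faces:
            return None
        face = faces[0]
        pts = predictor(img, face)
        coords = np.array([[p.x, p.y] for p in pts.parts()], dtype=float)
        coords -= [face.left(), face.top()]      # shift to the face box origin
        coords /= [face.width(), face.height()]  # scale away the face size
        return coords

    # Hypothetical folder holding ~200 uncontrolled photos of one person
    samples = [lm for path in glob.glob("hanks_photos/*.jpg")
               if (lm := landmarks(path)) is not None]
    persona = np.mean(samples, axis=0)  # average face shape for that person
    print(f"averaged {len(samples)} photos into a 68-point persona model")

Averaging over hundreds of uncontrolled photos is the trick that removes the need for a studio scan; the published system then layers 3D reconstruction, multi-texture modelling and puppeteering on top of this kind of per-photo measurement.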

“We asked, ‘Can you take Internet photos or your personal photo collection and animate a model without having that person interact with a camera?'” said Kemelmacher-Shlizerman. “Over the years we created algorithms that work with this kind of unconstrained data, which is a big deal.”

Suwajanakorn more recently developed techniques to capture expression-dependent textures — small differences that occur when a person smiles or looks puzzled or moves his or her mouth, for example.

By manipulating the lighting conditions across different photographs, he developed a new approach to densely map the differences from one person’s features and expressions onto another person’s face. That breakthrough enables the team to ‘control’ the digital model with a video of another person, and could potentially enable a host of new animation and virtual reality applications.
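
That mapping of one person’s features and expressions onto another’s face can be pictured as a delta transfer: measure how far a source frame deviates from the source’s neutral expression, then add that deviation to the target’s neutral model. The sketch below is a drastic simplification of the paper’s texture-aware approach, with synthetic numbers standing in for real landmark data,

    # Toy expression-transfer sketch: apply the source actor's deviation from
    # neutral onto the target's neutral face model, landmark by landmark.
    # The real puppeteering also transfers expression-dependent textures.
    import numpy as np

    def transfer_expression(source_frame: np.ndarray,
                            source_neutral: np.ndarray,
                            target_neutral: np.ndarray,
                            strength: float = 1.0) -> np.ndarray:
        """All inputs are (68, 2) arrays of normalised face landmarks."""
        delta = source_frame - source_neutral      # e.g. Bush's smile offsets
        return target_neutral + strength * delta   # applied to the target face

    rng = np.random.default_rng(0)
    bush_neutral = rng.random((68, 2))             # synthetic stand-ins
    bush_smiling = bush_neutral + rng.normal(0.0, 0.01, (68, 2))
    clooney_neutral = rng.random((68, 2))
    clooney_animated = transfer_expression(bush_smiling, bush_neutral,
                                           clooney_neutral)

Because only the deviation is copied, the target keeps his own resting geometry, which is the intuition behind Seitz’s remark below that the result still looks like George Clooney.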

“How do you map one person’s performance onto someone else’s face without losing their identity?” said Seitz. “That’s one of the more interesting aspects of this work. We’ve shown you can have George Bush’s expressions and mouth and movements, but it still looks like George Clooney.”

Here’s a link to and a citation for the paper presented at the conference in Chile,

What Makes Tom Hanks Look Like Tom Hanks by Supasorn Suwajanakorn, Steven M. Seitz, and Ira Kemelmacher-Shlizerman, presented at the 2015 ICCV conference, Dec. 13 – 15, 2015 in Chile.

You can find out more about the conference here.