For some reason it took a lot longer than usual to find this research paper despite having the journal (Nature Communications), the title (Spontaneous formation …), and the authors’ names. Thankfully, success was wrested from the jaws of defeat (I don’t care if that is trite; it’s how I felt) and links, etc. follow at the end as usual.
An experiment that, by design, was not supposed to turn up anything of note instead produced a “bewildering” surprise, according to the Stanford scientists who made the discovery: a new way of creating gold nanoparticles and nanowires using water droplets.
The technique, detailed April 19 in the journal Nature Communications, is the latest discovery in the new field of on-droplet chemistry and could lead to more environmentally friendly ways to produce nanoparticles of gold and other metals, said study leader Richard Zare, a chemist in the School of Humanities and Sciences and a co-founder of Stanford Bio-X.
“Being able to do reactions in water means you don’t have to worry about contamination. It’s green chemistry,” said Zare, who is the Marguerite Blake Wilbur Professor in Natural Science at Stanford.
Gold is known as a noble metal because it is relatively unreactive. Unlike base metals such as nickel and copper, gold is resistant to corrosion and oxidation, which is one reason it is such a popular metal for jewelry.
Around the mid-1980s, however, scientists discovered that gold’s chemical aloofness only manifests at large, or macroscopic, scales. At the nanometer scale, gold particles are very chemically reactive and make excellent catalysts. Today, gold nanostructures have found a role in a wide variety of applications, including bio-imaging, drug delivery, toxic gas detection and biosensors.
Until now, however, the only reliable way to make gold nanoparticles was to combine the gold precursor chloroauric acid with a reducing agent such as sodium borohydride.
The reaction transfers electrons from the reducing agent to the chloroauric acid, liberating gold atoms in the process. Depending on how the gold atoms then clump together, they can form nano-size beads, wires, rods, prisms and more.
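In textbook terms, the core of that electron transfer is a simple reduction half-reaction (a simplified sketch; the full stoichiometry with borohydride and its byproducts is more involved):

```latex
% Chloroauric acid supplies gold as the tetrachloroaurate ion;
% electrons donated by the reducing agent convert Au(III) to metallic gold.
\mathrm{AuCl_4^- + 3\,e^- \longrightarrow Au^0 + 4\,Cl^-}
```

The freed Au⁰ atoms then aggregate into the various nanostructure shapes described above.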
A spritz of gold
Recently, Zare and his colleagues wondered whether this gold-producing reaction would proceed any differently with tiny, micron-size droplets of chloroauric acid and sodium borohydride. How large is a microdroplet? “It is like squeezing a perfume bottle and out spritzes a mist of microdroplets,” Zare said.
From previous experiments, the scientists knew that some chemical reactions proceed much faster in microdroplets than in larger solution volumes.
Indeed, the team observed that gold nanoparticles grew over 100,000 times faster in microdroplets. However, the most striking observation came while running a control experiment in which they replaced the reducing agent – which ordinarily releases the gold particles – with microdroplets of water.
“Much to our bewilderment, we found that gold nanostructures could be made without any added reducing agents,” said study first author Jae Kyoo Lee, a research associate.
Viewed under an electron microscope, the gold nanoparticles and nanowires appear fused together like berry clusters on a branch.
The surprise finding means that pure water microdroplets can serve as microreactors for the production of gold nanostructures. “This is yet more evidence that reactions in water droplets can be fundamentally different from those in bulk water,” said study coauthor Devleena Samanta, a former graduate student in Zare’s lab.
If the process can be scaled up, it could eliminate the need for potentially toxic reducing agents that have harmful health side effects or that can pollute waterways, Zare said.
It’s still unclear why water microdroplets are able to replace a reducing agent in this reaction. One possibility is that transforming the water into microdroplets greatly increases its surface area, creating the opportunity for a strong electric field to form at the air-water interface, which may promote the formation of gold nanoparticles and nanowires.
“The surface area atop a one-liter beaker of water is less than one square meter. But if you turn the water in that beaker into microdroplets, you will get about 3,000 square meters of surface area – about the size of half a football field,” Zare said.
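Zare’s figure is easy to check: for uniform spheres, total surface area works out to three times the volume divided by the droplet radius. A quick sketch (the 1-micron droplet radius is my assumption; the press release only says “micron-size”):

```python
from math import pi


def droplet_surface_area(volume_m3: float, radius_m: float) -> float:
    """Total surface area when a volume of water is dispersed into
    uniform spherical droplets of the given radius.

    N droplets = V / ((4/3) * pi * r**3); total area = N * 4 * pi * r**2,
    which simplifies to 3 * V / r.
    """
    return 3.0 * volume_m3 / radius_m


# One liter of water (1e-3 m^3) dispersed as 1-micron-radius droplets:
area = droplet_surface_area(1e-3, 1e-6)
print(area)  # 3000.0 square meters, matching Zare's estimate
```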
The team is exploring ways to utilize the nanostructures for various catalytic and biomedical applications and to refine their technique to create gold films.
“We observed a network of nanowires that may allow the formation of a thin layer of nanowires,” Samanta said.
While my main interest is the group’s temporary art gallery, I am providing a brief explanatory introduction and a couple of previews for SIGGRAPH 2018.
For anyone unfamiliar with the Special Interest Group on Computer GRAPHics and Interactive Techniques (SIGGRAPH) and its conferences, from the SIGGRAPH Wikipedia entry (Note: Links have been removed),
Some highlights of the conference are its Animation Theater and Electronic Theater presentations, where recently created CG films are played. There is a large exhibition floor, where several hundred companies set up elaborate booths and compete for attention and recruits. Most of the companies are in the engineering, graphics, motion picture, or video game industries. There are also many booths for schools which specialize in computer graphics or interactivity.
Dozens of research papers are presented each year, and SIGGRAPH is widely considered the most prestigious forum for the publication of computer graphics research. The recent paper acceptance rate for SIGGRAPH has been less than 26%. The submitted papers are peer-reviewed in a single-blind process. There has been some criticism about the preference of SIGGRAPH paper reviewers for novel results rather than useful incremental progress. …
This is the third SIGGRAPH Vancouver has hosted; the others were in 2011 and 2014. The theme for the 2018 iteration is ‘Generations’; here’s more about it from an Aug. 2, 2018 article by Terry Flores for Variety,
While its focus is firmly forward thinking, SIGGRAPH 2018, the computer graphics, animation, virtual reality, games, digital art, mixed reality, and emerging technologies conference, is also tipping its hat to the past thanks to its theme this year: Generations. The conference runs Aug. 12-16 in Vancouver, B.C.
“In the literal people sense, pioneers in the computer graphics industry are standing shoulder to shoulder with researchers, practitioners and the future of the industry — young people — mentoring them, dabbling across multiple disciplines to innovate, relate, and grow,” says SIGGRAPH 2018 conference chair Roy C. Anthony, VP of creative development and operations at software and technology firm Ventuz. “This is really what SIGGRAPH has always been about. Generations really seemed like a very appropriate way of looking back and remembering where we all came from and how far we’ve come.”
SIGGRAPH 2018 has a number of treats in store for attendees, including the debut of Disney’s first VR film, the short “Cycles”; production sessions on the making of “Blade Runner 2049,” “Game of Thrones,” “Incredibles 2” and “Avengers: Infinity War”; as well as sneak peeks of Disney’s upcoming “Ralph Breaks the Internet: Wreck-It Ralph 2” and Laika’s “Missing Link.”
That list of ‘treats’ in the last paragraph makes the conference seem more like an iteration of a ‘comic-con’ than a technology conference.
CHICAGO–In the burgeoning world of virtual reality (VR) technology, it remains a challenge to provide users with a realistic perception of infinite space and natural walking capabilities in the virtual environment. A team of computer scientists has introduced a new approach to address this problem by leveraging a natural human phenomenon: eye blinks.
All humans are functionally blind for about 10 percent of the time under normal circumstances, due to eye blinks and saccades – rapid movements of the eye between two points or objects. Eye blinks are a common and natural cause of so-called “change blindness,” the inability of humans to notice changes to visual scenes. Zeroing in on eye blinks and change blindness, the team has devised a novel computational system that redirects the user in the virtual environment during these natural instances, using camera movements small enough to go undetected.
“Previous RDW [redirected walking] techniques apply rotations continuously while the user is walking. But the amount of unnoticeable rotations is limited,” notes Eike Langbehn, lead author of the research and doctoral candidate at the University of Hamburg. “That’s why an orthogonal approach is needed–we add some additional rotations when the user is not focused on the visuals. When we learned that humans are functionally blind for some time due to blinks, we thought, ‘Why don’t we do the redirection during eye blinks?'”
Human eye blinks occur approximately 10 to 20 times per minute, about every 4 to 19 seconds. Leveraging this window of opportunity–where humans are unable to detect major motion changes while in a virtual environment–the researchers devised an approach to synchronize a computer graphics rendering system with this visual process, and introduce any useful motion changes in virtual scenes to enhance users’ overall VR experience.
The researchers’ experiments revealed that imperceptible camera rotations of 2 to 5 degrees and translations of 4 to 9 cm of the user’s viewpoint are possible during a blink without users noticing. The team tracked test participants’ eye blinks with an eye tracker in a VR head-mounted display. In a confirmatory study, they validated that participants could not reliably detect in which of two eye blinks their viewpoint was manipulated while walking a curved path in VR. The tests relied on unconscious, natural eye blinking, but the researchers say the redirection could also be carried out consciously: since users can blink deliberately without much effort, eye blinks hold great potential as an intentional trigger in their approach.
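The core idea can be sketched in a few lines (my own hedged sketch, not the authors’ code: when the eye tracker reports a blink, inject a small extra camera rotation and translation, clamped to the thresholds the study found imperceptible):

```python
from dataclasses import dataclass

# Conservative ends of the reported imperceptibility ranges:
# 2-5 degrees of rotation and 4-9 cm of translation per blink.
MAX_ROT_DEG = 2.0
MAX_TRANS_M = 0.04


@dataclass
class Camera:
    yaw_deg: float = 0.0
    x_m: float = 0.0


def redirect_on_blink(cam: Camera, wanted_rot_deg: float,
                      wanted_trans_m: float, blinking: bool) -> Camera:
    """Apply at most the imperceptible portion of a desired redirection,
    and only while the user is blinking."""
    if not blinking:
        return cam
    rot = max(-MAX_ROT_DEG, min(MAX_ROT_DEG, wanted_rot_deg))
    trans = max(-MAX_TRANS_M, min(MAX_TRANS_M, wanted_trans_m))
    return Camera(cam.yaw_deg + rot, cam.x_m + trans)
```

A steering algorithm would call this once per detected blink, accumulating the small corrections so the physical walking path stays inside the tracked space.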
The team will present their work at SIGGRAPH 2018, held 12-16 August in Vancouver, British Columbia. The annual conference and exhibition showcases the world’s leading professionals, academics, and creative minds at the forefront of computer graphics and interactive techniques.
“RDW is a big challenge since current techniques still need too much space to enable unlimited walking in VR,” notes Langbehn. “Our work might contribute to a reduction of space since we found out that unnoticeable rotations of up to five degrees are possible during blinks. This means we can improve the performance of RDW by approximately 50 percent.”
The team’s results could be used in combination with other VR research, such as novel steering algorithms, improved path prediction, and rotations during saccades, to name a few. Down the road, such techniques could some day enable consumer VR users to virtually walk beyond their living room.
Langbehn collaborated on the work with Frank Steinicke of University of Hamburg, Markus Lappe of University of Muenster, Gregory F. Welch of University of Central Florida, and Gerd Bruder, also of University of Central Florida. For the full paper and video, visit the team’s project page.
About ACM, ACM SIGGRAPH, and SIGGRAPH 2018
ACM, the Association for Computing Machinery, is the world’s largest educational and scientific computing society, uniting educators, researchers, and professionals to inspire dialogue, share resources, and address the field’s challenges. ACM SIGGRAPH is a special interest group within ACM that serves as an interdisciplinary community for members in research, technology, and applications in computer graphics and interactive techniques. SIGGRAPH is the world’s leading annual interdisciplinary educational experience showcasing the latest in computer graphics and interactive techniques. SIGGRAPH 2018, marking the 45th annual conference hosted by ACM SIGGRAPH, will take place from 12-16 August at the Vancouver Convention Centre in Vancouver, B.C.
They have provided an image illustrating what they mean (I don’t find it especially informative),
Caption: The viewing behavior of a virtual reality user, including fixations (in green) and saccades (in red). A blink fully suppresses visual perception. Credit: Eike Langbehn
Walt Disney Animation Studios will debut its first ever virtual reality short film at SIGGRAPH 2018, and the hope is that viewers will walk away feeling as connected to the characters as to the VR technology involved in making the film.
Cycles, an experimental film directed by Jeff Gipson, centers around the true meaning of creating a home and the life it holds inside its walls. The idea for the film is personal, inspired by Gipson’s childhood spending time with his grandparents and creating memories in their home, and later, having to move them to an assisted living residence.
“Every house has a story unique to the people, the characters who live there,” says Gipson. “We wanted to create a story in this single place and be able to have the viewer witness life happening around them. It is an emotionally driven film, expressing the real ups and downs, the happy and sad moments in life.”
For Cycles, Gipson also drew from his past life as an architect, having spent several years designing skate parks, and from his passion for action sports, including freestyle BMX. In Los Angeles, where Gipson lives, it is not unusual to find homes with an empty swimming pool reserved for skating or freestyle biking. Part of the pitch for Cycles came out of Gipson’s experience riding in these empty pools and being curious about the homes attached to them, the families who lived there, and the memories they made.
SIGGRAPH attendees will have the opportunity to experience Cycles at the Immersive Pavilion, a new space for this year’s conference. The Pavilion is devoted exclusively to virtual, augmented, and mixed reality and will contain: the VR Theater, a storytelling extravaganza that is part of the Computer Animation Festival; the Vrcade, a space for VR, AR, and MR games or experiences; and the well-known Village, for showcasing large-scale projects. SIGGRAPH 2018, held 12-16 August in Vancouver, British Columbia, is an annual gathering that showcases the world’s leading professionals, academics, and creative minds at the forefront of computer graphics and interactive techniques.
The production team completed Cycles in four months with about 50 collaborators as part of a professional development program at the studio. A key difference in VR filmmaking includes getting creative with how to translate a story to the VR “screen.” Pre-visualizing the narrative, for one, was a challenge. Rather than traditional storyboarding, Gipson and his team instead used a mix of Quill VR painting techniques and motion capture to “storyboard” Cycles, incorporating painters and artists to generate sculptures or 3D models of characters early on and draw scenes for the VR space. The creators also got innovative with the use of light and color saturation in scenes to help guide the user’s eyes during the film.
“What’s cool for VR is that we are really on the edge of trying to figure out what it is and how to tell stories in this new medium,” says Gipson. “In VR, you can look anywhere and really be transported to a different world, experience it from different angles, and see every detail. We want people watching to feel alive and feel emotion, and give them a true cinematic experience.”
This is Gipson’s VR directorial debut. He joined Walt Disney Animation Studios in 2013, serving as a lighting artist on Disney favorites like Frozen, Zootopia, and Moana. Of getting to direct the studio’s first VR short, he says, “VR is an amazing technology and a lot of times the technology is what is really celebrated. We hope more and more people begin to see the emotional weight of VR films, and with Cycles in particular, we hope they will feel the emotions we aimed to convey with our story.”
Apparently this is a still from the ‘short’,
Caption: Disney Animation Studios will present ‘Cycles’ , its first virtual reality (VR) short, at ACM SIGGRAPH 2018. Credit: Disney Animation Studios
Google has unveiled a new virtual reality (VR) immersive experience based on a novel system that captures and renders high-quality, realistic images from the real world using light fields. Created by a team of leading researchers at Google, Welcome to Light Fields is the tech giant’s splash into the nascent arena of light fields VR experiences, an exciting corner of VR video technology gaining traction for its promise to deliver extremely high-quality imagery and experiences in the virtual world.
Google released Welcome to Light Fields earlier this year as a free app on Steam VR for HTC Vive, Oculus Rift, and Windows Mixed Reality headsets. The creators will demonstrate the VR experience at SIGGRAPH 2018 in the Immersive Pavilion, the new conference space devoted to virtual, augmented, and mixed reality described above.
Destinations in Welcome to Light Fields include NASA’s Space Shuttle Discovery, delivering to viewers an astronaut’s view inside the flight deck, which has never been open to the public; the pristine teak and mahogany interiors of the Gamble House, an architectural treasure in Pasadena, CA; and the glorious St. Stephen’s Church in Granada Hills, CA, home to a stunning wall of more than 14,000 pieces of glimmering stained glass.
“I love that light fields in VR can teleport you to exotic places in the real world, and truly make you believe you are there,” says Ryan Overbeck, software engineer at Google who co-led the project. “To me, this is magic.”
To bring this experience to life, Overbeck worked with a team that included Paul Debevec, senior staff engineer at Google, who managed the project and led the hardware piece with engineers Xueming Yu, Jay Busch, and Graham Fyffe. With Overbeck, Daniel Erickson and Daniel Evangelakos focused on the software end. The researchers designed a comprehensive system for capturing and rendering high-quality, spherical light field still images from footage captured in the real world. They developed two easy-to-use light field camera rigs, based on the GoPro Hero4 action sports camera, that efficiently capture thousands of images on the surface of a sphere. Those images were then passed through a cloud-based light-field-processing pipeline.
Among other things, explains Overbeck, “The processing pipeline uses computer vision to place the images in 3D and generate depth maps, and we use a modified version of our vp9 video codec to compress the light field data down to a manageable size.” To render a light field dataset, he notes, the team used a rendering algorithm that blends between the thousands of light field images in real-time.
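That blending step can be pictured as inverse-distance weighting of the capture positions nearest the viewer’s eye (a toy sketch under my own assumptions; Google’s actual renderer is far more sophisticated and also exploits the per-image depth maps):

```python
import numpy as np


def blend_nearest_views(eye_pos, cam_positions, cam_images, k=4, eps=1e-6):
    """Toy light-field lookup: blend the k captured views nearest the
    viewer's eye, weighted by inverse distance.

    cam_positions: (N, 3) array of capture positions on the sphere rig.
    cam_images:    (N, H, W, 3) array of the corresponding images.
    """
    d = np.linalg.norm(cam_positions - eye_pos, axis=1)
    nearest = np.argsort(d)[:k]          # indices of the k closest views
    w = 1.0 / (d[nearest] + eps)         # closer views get larger weights
    w /= w.sum()                         # normalize to a convex combination
    return np.tensordot(w, cam_images[nearest], axes=1)
```

As the viewer’s head moves inside the captured sphere, re-running this lookup each frame produces the smooth, parallax-correct imagery light fields are known for.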
The team relied on Google’s talented pool of engineers in computer vision, graphics, video compression, and machine learning to overcome the unique challenges posed in light fields technology. They also collaborated closely with the WebM team (who make the vp9 video codec) to develop the high-quality light field compression format incorporated into their system, and leaned heavily on the expertise of the Jump VR team to help pose the images and generate depth maps. (Jump is Google’s professional VR system for achieving 3D-360 video production at scale.)
Indeed, with Welcome to Light Fields, the Google team is demonstrating the potential and promise of light field VR technology, showcasing its ability to provide a truly immersive experience with an unmatched level of realism. Though light fields have been researched and explored in computer graphics for more than 30 years, practical systems for actually delivering high-quality light field experiences have not been possible until now.
Part of the team’s motivation behind creating this VR light field experience is to invigorate the nascent field.
“Welcome to Light Fields proves that it is now possible to make a compelling light field VR viewer that runs on consumer-grade hardware, and we hope that this knowledge will encourage others to get involved with building light field technology and media,” says Overbeck. “We understand that in order to eventually make compelling consumer products based on light fields, we need a thriving light field ecosystem. We need open light field codecs, we need artists creating beautiful light field imagery, and we need people using VR in order to engage with light fields.”
I don’t really understand why this image, which looks like something that belongs in advertising material, would be chosen to accompany a news release on a science-based distribution outlet,
Caption: A team of leading researchers at Google will unveil the new immersive virtual reality (VR) experience “Welcome to Light Fields” at ACM SIGGRAPH 2018. Credit: Image courtesy of Google/Overbeck
Advances in computer-generated imagery have brought vivid, realistic animations to life, but the sounds associated with what we see simulated on screen, such as two objects colliding, are often recordings. Now researchers at Stanford University have developed a system that automatically renders accurate sounds for a wide variety of animated phenomena.
“There’s been a Holy Grail in computing of being able to simulate reality for humans. We can animate scenes and render them visually with physics and computer graphics, but, as for sounds, they are usually made up,” said Doug James, professor of computer science at Stanford University. “Currently there exists no way to generate realistic synchronized sounds for complex animated content, such as splashing water or colliding objects, automatically. This fills that void.”
The researchers will present their work on this sound synthesis system as part of ACM SIGGRAPH 2018, the leading conference on computer graphics and interactive techniques. In addition to enlivening movies and virtual reality worlds, this system could also help engineering companies prototype how products would sound before being physically produced, and hopefully encourage designs that are quieter and less irritating, the researchers said.
“I’ve spent years trying to solve partial differential equations – which govern how sound propagates – by hand,” said Jui-Hsien Wang, a graduate student in James’ lab and in the Institute for Computational and Mathematical Engineering (ICME), and lead author of the paper. “This is actually a place where you don’t just solve the equation but you can actually hear it once you’ve done it. That’s really exciting to me and it’s fun.”
Informed by geometry and physical motion, the system figures out the vibrations of each object and how, like a loudspeaker, those vibrations excite sound waves. It computes the pressure waves cast off by rapidly moving and vibrating surfaces but does not replicate room acoustics. So, although it does not recreate the echoes in a grand cathedral, it can resolve detailed sounds from scenarios like a crashing cymbal, an upside-down bowl spinning to a stop, a glass filling up with water or a virtual character talking into a megaphone.
Most sounds associated with animations rely on pre-recorded clips, which require vast manual effort to synchronize with the action on screen. These clips are also restricted to noises that already exist – they can’t predict anything new. Other systems that can produce sounds as accurate as those of James and his team work only in special cases or assume the geometry doesn’t deform very much, and they require a long pre-computation phase for each separate object.
“Ours is essentially just a render button with minimal pre-processing that treats all objects together in one acoustic wave simulation,” said Ante Qu, a graduate student in James’ lab and co-author of the paper.
The simulated sound that results from this method is highly detailed. It takes into account the sound waves produced by each object in an animation but also predicts how those waves bend, bounce or deaden based on their interactions with other objects and sound waves in the scene.
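The underlying physics can be illustrated with a toy model (my own sketch, not the Stanford system, which solves the full 3D problem for all objects at once): a vibrating surface drives a one-dimensional acoustic wave equation solved with finite differences, and the pressure recorded at a “microphone” cell is the synthesized sound.

```python
import numpy as np


def simulate_pressure(n_cells=200, n_steps=400, c=1.0, dx=1.0, dt=0.5,
                      freq=0.05):
    """1-D acoustic wave equation driven by a sinusoidally vibrating
    boundary; returns the pressure history at the midpoint cell."""
    p_prev = np.zeros(n_cells)
    p = np.zeros(n_cells)
    mic = []
    r2 = (c * dt / dx) ** 2  # squared Courant number; <= 1 for stability
    for t in range(n_steps):
        p_next = np.zeros(n_cells)
        # Standard second-order finite-difference update for the interior.
        p_next[1:-1] = (2 * p[1:-1] - p_prev[1:-1]
                        + r2 * (p[2:] - 2 * p[1:-1] + p[:-2]))
        p_next[0] = np.sin(2 * np.pi * freq * t)  # the vibrating surface
        p_prev, p = p, p_next
        mic.append(p[n_cells // 2])  # pressure at the "microphone"
    return np.array(mic)


sound = simulate_pressure()
```

The recorded signal is silent until the pressure wave propagates from the vibrating boundary to the microphone, then oscillates at the driving frequency, which is exactly the cause-and-effect delay the paragraph above describes.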
In its current form, the group’s process takes a while to create the finished product. But, now that they have proven this technique’s potential, they can focus on performance optimizations, such as implementing their method on parallel GPU hardware, that should make it drastically faster.
And, even in its current state, the results are worth the wait.
“The first water sounds we generated with the system were among the best ones we had simulated – and water is a huge challenge in computer-generated sound,” said James. “We thought we might get a little improvement, but it is dramatically better than previous approaches even right out of the box. It was really striking.”
Although the group’s work has faithfully rendered sounds of various objects spinning, falling and banging into each other, more complex objects and interactions – like the reverberating tones of a Stradivarius violin – remain difficult to model realistically. That, the group said, will have to wait for a future solution.
Timothy Langlois of Adobe Research is a co-author of this paper. This research was funded by the National Science Foundation and Adobe Research. James is also a professor, by courtesy, of music and a member of Stanford Bio-X.
Researchers Timothy Langlois, Doug L. James, Ante Qu and Jui-Hsien Wang have created a video featuring highlights of animations with sounds synthesized using the Stanford researchers’ new system.
The researchers have also provided this image,
By computing pressure waves cast off by rapidly moving and vibrating surfaces – such as a cymbal – a new sound synthesis system developed by Stanford researchers can automatically render realistic sound for computer animations. (Image credit: Timothy Langlois, Doug L. James, Ante Qu and Jui-Hsien Wang)
It does seem like we’re synthesizing the world around us, eh?
SIGGRAPH 2018, the world’s leading showcase of digital art created using computer graphics and interactive techniques, will present a special Art Gallery, entitled “Origins,” and historic Art Papers in Vancouver, B.C. The 45th SIGGRAPH conference will take place 12–16 August at the Vancouver Convention Centre. The programs will also honor the generations of creators that have come before through a special, 50th anniversary edition of the Leonardo journal. To register for the conference, visit S2018.SIGGRAPH.ORG.
The SIGGRAPH 2018 ART GALLERY is a curated exhibition, conceived as a dialogical space that enables the viewer to reflect on man’s diverse cultural values and rituals through contemporary creative practices. Building upon an exciting and eclectic selection of creative practices mediated through technologies that represent the sophistication of our times, the SIGGRAPH 2018 Art Gallery will embrace the narratives of the indigenous communities based near Vancouver and throughout Canada as a source of inspiration. The exhibition will feature contemporary media artworks, art pieces by indigenous communities, and other traces of technologically mediated ludic practices.
Andrés Burbano, SIGGRAPH 2018 Art Gallery chair and professor at Universidad de los Andes, said, “The Art Gallery aims to articulate myth and technology, science and art, the deep past and the computational present, and will coalesce around a theme of ‘Origins.’ Media and technological creative expressions will explore principles such as the origins of the cosmos, the origins of life, the origins of human presence, the origins of the occupation of territories in the Americas, and the origins of people living in the vast territories of the Arctic.”
He continued, “The venue [in Vancouver] hopes to rekindle the original spark that ignited the collaborative spirit of the SIGGRAPH community of engineers, scientists, and artists, who came together to create the very first conference in the early 1970s.”
Highlights from the 2018 Art Gallery include:
Transformation Mask (Canada) [Technology Based]
Shawn Hunt, independent; and Microsoft Garage: Andy Klein, Robert Butterworth, Jonathan Cobb, Jeremy Kersey, Stacey Mulcahy, Brendan O’Rourke, Brent Silk, and Julia Taylor-Hell, Microsoft Vancouver
TRANSFORMATION MASK is an interactive installation that features the Microsoft HoloLens. It utilizes electronics and mechanical engineering to express a physical and digital transformation. Participants are immersed in spatial sounds and holographic visuals.
Somnium (U.S.) [Science Based]
Marko Peljhan, Danny Bazo, and Karl Yerkes, University of California, Santa Barbara
Somnium is a cybernetic installation that provides visitors with the ability to sensorily, cognitively, and emotionally contemplate and experience exoplanetary discoveries, their macro and micro dimensions, and the potential for life in our Galaxy. Some might call it a “space telescope.”
Ernest Edmonds Retrospective – Art Systems 1968-2018 (United Kingdom) [History Based]
Ernest Edmonds, De Montfort University
Celebrating one of the pioneers of computer graphics-based art since the early 1970s, this Ernest Edmonds career retrospective will showcase snapshots of Edmonds’ work as it developed over the years. With one piece from each decade, the retrospective will also demonstrate how vital the Leonardo journal has been throughout the 50-year journey.
In addition to the works above, the Art Gallery will feature pieces from notable female artists Ozge Samanci, Ruth West, and Nicole L’Hullier. For more information about the Edmonds retrospective, read THIS POST ON THE ACM SIGGRAPH BLOG.
The SIGGRAPH 2018 ART PAPERS program is designed to feature research from artists, scientists, theorists, technologists, historians, and more in one of four categories: project description, theory/criticism, methods, or history. The chosen work was selected by an international jury of scholars, artists, and immersive technology developers.
To celebrate the 50th anniversary of LEONARDO (MIT Press), and 10 years of its annual SIGGRAPH issue, SIGGRAPH 2018 is pleased to announce a special anniversary edition of the journal, which will feature the 2018 art papers. For 50 years, Leonardo has been the definitive publication for artist-academics. To learn more about the relationship between SIGGRAPH and the journal, listen to THIS EPISODE OF THE SIGGRAPH SPOTLIGHT PODCAST.
“In order to encourage a wider range of topics, we introduced a new submission type, short papers. This enabled us to accept more content than in previous years. Additionally, for the first time, we will introduce sessions that integrate the Art Gallery artist talks with Art Papers talks, promoting richer connections between these two creative communities,” said Angus Forbes, SIGGRAPH 2018 Art Papers chair and professor at University of California, Santa Cruz.
Art Papers highlights include:
Alienating the Familiar with CGI: A Recipe for Making a Full CGI Art House Animated Feature [Long]
Alex Counsell and Paul Charisse, University of Portsmouth
This paper explores the process of making and funding an art house feature film using full CGI in a marketplace where this has never been attempted. It explores cutting-edge technology and production approaches, as well as routes to successful fundraising.
Augmented Fauna and Glass Mutations: A Dialogue Between Material and Technique in Glassblowing and 3D Printing [Long]
Tobias Klein, City University of Hong Kong
The two presented artworks, “Augmented Fauna” and “Glass Mutations,” were created during an artist residency at the PILCHUCK GLASS SCHOOL. They are examples of the qualities and methods established through a synthesis between digital workflows and traditional craft processes and thus formulate the notion of digital craftsmanship.
Inhabitat: An Imaginary Ecosystem in a Children’s Science Museum [Short]
Graham Wakefield, York University, and Haru Hyunkyung Ji, OCAD University
“Inhabitat” is a mixed reality artwork in which participants become part of an imaginary ecology through three simultaneous perspectives of scale and agency; three distinct ways to see with other eyes. This imaginary world was exhibited at a children’s science museum for five months, using an interactive projection-augmented sculpture, a large screen and speaker array, and a virtual reality head-mounted display.
What’s the what?
My father used to say that, and I always assumed it meant: summarize the high points if you need to, and get to the point fast. In that spirit, I am both fascinated and mildly appalled. The virtual, mixed, and augmented reality technologies, as well as the others being featured at SIGGRAPH 2018, are wondrous in many ways, but it seems we are coming ever closer to a world where we no longer interact with nature or other humans directly. (See my August 10, 2018 posting about the ‘extinction of experience’ for research that encourages more direct interaction with nature.) I realize that SIGGRAPH is intended as a primarily technical experience, but I think a little more content questioning these technologies and their applications (social implications) might be in order. That’s often the artist’s role, but I can’t see anything in the art gallery descriptions that hints at any sort of fundamental critique.
I guess if you’re going to use bacteria as part of your gene editing technology (CRISPR [clustered regularly interspaced short palindromic repeats]/Cas9), then you might half expect that the body’s immune system has developed some defenses. A Jan. 9, 2018 article by Sarah Zhang for The Atlantic provides some insight into what the new research suggests (Note: Links have been removed),
2018 is supposed to be the year of CRISPR in humans. The first U.S. and European clinical trials that test the gene-editing tool’s ability to treat diseases—such as sickle-cell anemia, beta thalassemia, and a type of inherited blindness—are slated to begin this year.
But the year has begun on a cautionary note. On Friday [January 5, 2018], Stanford researchers posted a preprint (which has not been peer reviewed) to the website bioRxiv highlighting a potential obstacle to using CRISPR in humans: Many of us may already be immune to it. That’s because CRISPR actually comes from bacteria that often live on or infect humans, and we have built up immunity to the proteins from these bacteria over our lives.
Not all CRISPR therapies in humans will be doomed. “We don’t think this is the end of the story. This is the start of the story,” says Porteus [Matthew Porteus, a pediatrician and stem-cell researcher at Stanford]. There are likely ways around the problem of immunity to CRISPR proteins, and many of the early clinical trials appear to be designed around this problem.
Porteus and his colleagues focused on two versions of Cas9, the bacterial protein most commonly used in CRISPR gene editing. One comes from Staphylococcus aureus, which often harmlessly lives on skin but can sometimes cause staph infections, and another from Streptococcus pyogenes, which causes strep throat but can also become “flesh-eating bacteria” when it spreads to other parts of the body. So yeah, you want your immune system to be on guard against these bacteria.
The human immune system has a couple different ways of recognizing foreign proteins, and the team tested for both. First, they looked to see if people have molecules in their blood called antibodies that can specifically bind to Cas9. Among 34 people they tested, 79 percent had antibodies against the staph Cas9 and 65 percent against the strep Cas9.
The Stanford team only tested for preexisting immunity against Cas9, but anytime you inject a large bacterial protein into the human body, it can provoke an immune response. After all, that’s how the immune system learns to fight off bacteria it’s never seen before. (Preexisting immunity can make the response faster and more robust, though.)
The danger of the immune system turning on a patient’s body hangs over a lot of research into correcting genes. In the late 1990s and 2000s, research into gene therapy was derailed by the death of 18-year-old Jesse Gelsinger, who died from an immune reaction to the virus used to deliver the corrected gene. This is the worst-case scenario that the CRISPR world hopes to avoid.
This year could be a defining one for CRISPR, the gene editing technique, which has been hailed as an important breakthrough in laboratory research. That’s because the first company-sponsored clinical studies will be conducted to see if it can help treat diseases in humans, according to an article in Chemical & Engineering News (C&EN), the weekly newsmagazine of the American Chemical Society.
C&EN Assistant Editor Ryan Cross reports that a big push is coming from industry, specifically from three companies that are each partly founded by one of the three inventors of the method. They are zeroing in on the blood diseases called sickle-cell anemia and β-thalassemia, mostly because their precise cause is known. In these diseases, hemoglobin doesn’t function properly, leading to severe health issues in some people. CRISPR Therapeutics and Intellia Therapeutics plan to test the technique to boost levels of an alternative version of healthy hemoglobin. Editas Medicine, however, will also use CRISPR to correct mutations in the faulty hemoglobin gene. Labs led by university researchers are also joining the mix, starting or continuing clinical trials with the approach in 2018.
Because CRISPR is being used to cut a cell’s DNA and insert a new sequence, concerns have been raised about the potential for accidents. A cut in the wrong place could mean introducing a new mutation that could be benign — or cancerous. But according to proponents of the method, researchers are conducting extensive computer predictions and in vitro tests to help avoid this outcome.
The January 8, 2018 Chemical and Engineering News (C&EN) open access article by Ryan Cross is here.
Finally, if you are interested in how this affects research as it’s being developed, there’s University of British Columbia researcher Rosie Redfield’s January 16, 2018 posting on RRResearch blog,
Thursday’s [January 11, 2018] post described the hypothesis that bacteria might use gene transfer agent particles to inoculate other cells in the population with fragments of phage DNA, and outlined an experiment to test this. Now I’m realizing that I need to know a lot more about the kind of immunity I should expect to see if this GTA-as-vaccine hypothesis is correct.
That should give you some idea of what I meant by “research as it’s being developed.” Redfield’s blog is not for the mildly interested.
Redfield is well known internationally as one of the first to refute research which suggested the existence of an ‘arsenic bacterium’ (see my Dec. 8, 2010 posting: My apologies for arsenic blooper; she’s first mentioned in the second excerpt, second paragraph). The affair was known online as #arseniclife. There’s a May 27, 2011 essay by Carl Zimmer on Slate titled: The Discovery of Arsenic-Based Twitter: How #arseniclife changed science.
A March 22, 2018 EuroScience Open Forum (ESOF) 2018 announcement (received via email) trumpets some of the latest news for this event being held July 9 to July 14, 2018 in Toulouse, France. (Located in the south, in the region known as Occitanie, Toulouse is the fourth largest city in France and is situated on the River Garonne. See more in its Wikipedia entry.) Here’s the latest from the announcement,
ESOF 2018 Plenary Sessions
Top speakers and hot topics confirmed for the Plenary Sessions at ESOF 2018
Lorna Hughes, Professor at the University of Glasgow and Chair of the Europeana Research Advisory Board, will give a plenary keynote on “Digital humanities”. John Ioannidis, Professor of Medicine and of Health Research and Policy at Stanford University, famous for his PLoS Medicine paper “Why Most Published Research Findings Are False”, will talk about “Reproducibility”. A third plenary will involve María Teresa Ruiz, a Chilean astronomer and 2017 laureate of the L’Oréal-UNESCO For Women in Science award: she will talk about exoplanets.
ESOF under the spotlights
French President’s high patronage: ESOF is at the top of the institutional agendas in 2018.
“Sharing science”. But also putting science at the highest level making it a real political and societal issue in a changing world. ESOF 2018 has officially received the “High Patronage” from the President of the French Republic Emmanuel Macron. ESOF 2018 has also been listed by the French Minister for Europe and Foreign Affairs among the 27 priority events for France.
A constellation of satellites around the ESOF planet!
Second focus on Satellite events:
– 4th GEO Blue Planet Symposium organised 4-6 July by Mercator Ocean.
– ECSJ 2018, 5th European Conference of Science Journalists, co-organised by the French Association of Science Journalists in the News Press (AJSPI) and the Union of European Science Journalists’ Associations (EUSJA) on 8 July.
– Esprit de Découvertes (Discovery spirit) organised by the Académie des Sciences, Inscriptions et Belles Lettres de Toulouse on 8 July.
More Satellite events to come! Don’t forget to stay long enough in order to participate in these focused Satellite Events and … to discover the city.
A unique feature of ESOF is the Science meets Poetry day, which is held at every Forum and brings poets and scientists together.
Indeed, there is today a real artistic movement of poets connected with ESOF. Famous participants from earlier meetings include contributors such as the late Seamus Heaney, Roald Hoffmann [sic], Jean-Pierre Luminet and Prince Henrik of Denmark, but many young and aspiring poets are also involved.
The meeting is in two parts:
lectures on subjects involving science with poetry
a poster session for contributed poems
There are competitions associated with the event and every Science meets Poetry day gives rise to the publication of Proceedings in book form.
In Toulouse, the event will be staged by EuroScience in collaboration with the Académie des Jeux Floraux of Toulouse, the Société des Poètes Français and the European Academy of Sciences Arts and Letters, under patronage of UNESCO. The full programme will be announced later, but includes such themes as a celebration of the number 7 in honour of the seven Troubadours of Toulouse, who held the first Jeux Floraux in the year 1323, Space Travel and the first poets and scientists who wrote about it (including Cyrano de Bergerac and Johannes Kepler), from Metrodorus and Diophantes of Alexandria to Fermat’s Last Theorem, the Poetry of Ecology, Lafayette’s ship the Hermione seen from America and many other thought-provoking subjects.
The meeting will be held in the Hôtel d’Assézat, one of the finest old buildings of the ancient city of Toulouse.
Exceptionally, it will be open to registered participants from ESOF and also to some members of the public within the limits of available space.
Tentative Programme for the Science meets Poetry day on the 12th of July 2018
(some Speakers are still to be confirmed)
09:00 – 09:30 A welcome for the poets: The legendary Troubadours of Toulouse and the poetry of the number 7 (Philippe Dazet-Brun, Académie des Jeux Floraux)
09:30 – 10:00 The science and the poetry of violets from Toulouse (Marie-Thérèse Esquerré-Tugayé, Laboratoire de Recherche en Sciences Végétales, Université Toulouse III-CNRS)
10:00 – 10:30 The true Cyrano de Bergerac, Gascon poet, and his celebrated travels to the Moon (Jean-Charles Dorge, Société des Poètes Français)
10:30 – 11:00 Coffee break (with poems as posters)
11:00 – 11:30 Kepler the author and the imaginary travels of the famous astronomer to the Moon (Uli Rothfuss, die Kogge International Society of German-language authors)
11:30 – 12:00 Sputnik and space in Russian literature (Alla-Valeria Mikhalevitch, Laboratory of the Russian Academy of Sciences, Saint Petersburg)
12:00 – 12:30 Poems for the planet Mars (James Philip Kotsybar, the ‘Bard of Mars’, California and NASA, USA)
12:30 – 14:00 Lunch and meetings of the juries of the poetry competitions
14:00 – 14:30 The voyage of the Hermione and “Lafayette, here we come!” seen by an American poet (Nick Norwood, University of Columbus, Ohio)
14:30 – 15:00 Alexandria, Toulouse and Oxford: the poem rendered by Eutrope and Fermat’s Last Theorem (Chaunes [Jean-Patrick Connerade], European Academy of Sciences, Arts and Letters, UNESCO)
15:00 – 15:30 How biology is celebrated in contemporary poetry (Assumpcio Forcada, biologist and poet from Barcelona)
15:30 – 16:00 A book of poems around ecology: a central subject in modern poetry (Sam Illingworth, Manchester Metropolitan University)
16:00 – 16:30 Coffee break (with poems as posters)
16:30 – 17:00 Toulouse and Europe: poetry at the crossroads of European languages (Stefka Hrusanova, Bulgarian Academy and Linguaggi-Di-Versi)
17:00 – 17:30 Round table: seven poets from Toulouse give their views on the theme: Languages, invisible frontiers within both science and poetry
17:30 – 18:00 The winners of the poetry competitions are announced
18:00 – 18:15 Chaunes. Closing remarks
I’m fascinated, as in all the years I’ve covered the European City of Science events I’ve never before tripped across a ‘Science meets Poetry’ meeting. Sadly, there’s no contact information for those organizers. However, you can sign up for a newsletter, and there are contacts for the larger event, European City of Science, or as they are calling it in Toulouse, the Science in the City Festival,
Camille Rossignol (Toulouse Métropole)
+33 (0)5 36 25 27 83
François Lafont (ESOF 2018 / So Toulouse)
+33 (0)5 61 14 58 47
Travel grants for media types
One last note, and this is for journalists. It’s still possible to apply for a travel grant, which helps ease, but not remove, the pain of travel expenses. From the ESOF 2018 Media Travel Grants webpage,
ESOF 2018 – ECSJ 2018 Travel Grants
The 5th European Conference of Science Journalists (ECSJ2018) is offering 50 travel + accommodation grants of up to 400€ to international journalists interested in attending ECSJ and ESOF.
We are looking for active professional journalists who cover science or science policy regularly (not necessarily exclusively), with an interest in reflecting on their professional practices and ethics. Applicants can be freelancers or staff, and can work for print, web, or broadcast media.
Springer Nature is a leading research, educational and professional publisher, providing quality content to its communities through a range of innovative platforms, products and services and is home of trusted brands including Nature Research.
Nature Research has supported ESOF since its very first meeting in 2004 and is funding the Nature Travel Grant Scheme for journalists to attend ESOF2018 with the aim of increasing the impact of ESOF. The Nature Travel Grant Scheme offers a lump sum of £400 for journalists based in Europe and £800 for journalists based outside of Europe, to help cover the costs of travel and accommodation to attend ESOF2018.
Researchers at Stanford University have developed an index for measuring (tracking) the progress made by artificial intelligence (AI) according to a January 9, 2018 news item on phys.org (Note: Links have been removed),
Since the term “artificial intelligence” (AI) was first used in print in 1956, the one-time science fiction fantasy has progressed to the very real prospect of driverless cars, smartphones that recognize complex spoken commands and computers that see.
In an effort to track the progress of this emerging field, a Stanford-led group of leading AI thinkers called the AI100 has launched an index that will provide a comprehensive baseline on the state of artificial intelligence and measure technological progress in the same way the gross domestic product and the S&P 500 index track the U.S. economy and the broader stock market.
For anyone curious about the AI100 initiative, I have a description of it in my Sept. 27, 2016 post highlighting the group’s first report or you can keep on reading.
“The AI100 effort realized that in order to supplement its regular review of AI, a more continuous set of collected metrics would be incredibly useful,” said Russ Altman, a professor of bioengineering and the faculty director of AI100. “We were very happy to seed the AI Index, which will inform the AI100 as we move forward.”
The AI100 was set in motion three years ago when Eric Horvitz, a Stanford alumnus and former president of the Association for the Advancement of Artificial Intelligence, worked with his wife, Mary Horvitz, to define and endow the long-term study. Its first report, released in the fall of 2016, sought to anticipate the likely effects of AI in an urban environment in the year 2030.
Among the key findings in the new index are a dramatic increase in AI startups and investment as well as significant improvements in the technology’s ability to mimic human performance.
The AI Index tracks and measures at least 18 independent vectors in academia, industry, open-source software and public interest, plus technical assessments of progress toward what the authors call “human-level performance” in areas such as speech recognition, question-answering and computer vision – algorithms that can identify objects and activities in 2D images. Specific metrics in the index include evaluations of academic papers published, course enrollment, AI-related startups, job openings, search-term frequency and media mentions, among others.
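To make the aggregation idea concrete, here is a minimal sketch of how disparate metrics might be combined into one composite number. This is entirely hypothetical: the metric names, values, and normalization scheme below are my own invention for illustration, not the AI Index’s actual methodology, which is described in its report.

```python
# Hypothetical sketch: rescale each tracked metric to a base year
# and average the growth ratios into a single composite figure.
# Metric names and values are invented; not the AI Index's method.

def composite_index(metrics, base_year, year):
    """Average each metric's growth relative to the base year."""
    ratios = [series[year] / series[base_year] for series in metrics.values()]
    return sum(ratios) / len(ratios)

metrics = {
    "papers_published": {2000: 10_000, 2017: 90_000},   # 9x growth
    "course_enrollment": {2000: 500, 2017: 5_000},      # 10x growth
    "ai_startups": {2000: 100, 2017: 1_400},            # 14x growth
}

print(composite_index(metrics, base_year=2000, year=2017))  # -> 11.0
```

The real index keeps its vectors separate rather than collapsing them this crudely, but the sketch shows why a common base year matters: without it, metrics measured in papers, dollars, and enrollments could not be compared at all.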
“In many ways, we are flying blind in our discussions about artificial intelligence and lack the data we need to credibly evaluate activity,” said Yoav Shoham, professor emeritus of computer science.
“The goal of the AI Index is to provide a fact-based measuring stick against which we can chart progress and fuel a deeper conversation about the future of the field,” Shoham said.
Shoham conceived of the index and assembled a steering committee including Ray Perrault from SRI International, Erik Brynjolfsson of the Massachusetts Institute of Technology and Jack Clark from OpenAI. The committee subsequently hired Calvin LeGassick as project manager.
“The AI Index will succeed only if it becomes a community effort,” Shoham said.
Although the authors say the AI Index is the first index to track either scientific or technological progress, there are many other non-financial indexes that provide valuable insight into equally hard-to-quantify fields. Examples include the Social Progress Index, the Middle East peace index and the Bangladesh empowerment index, which measure factors as wide-ranging as nutrition, sanitation, workload, leisure time, public sentiment and even public speaking opportunities.
Among the findings of this inaugural index is that the number of active AI startups has increased 14-fold since 2000. Venture capital investment has increased six times in the same period. In academia, publishing in AI has increased a similarly impressive nine times in the last 20 years while course enrollment has soared. Enrollment in the introductory AI-related machine learning course at Stanford, for instance, has grown 45-fold in the last 30 years.
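To put those multiples in perspective, a compound annual growth rate can be backed out of each one. This is my own back-of-the-envelope calculation, not from the report, and the 17-year span assumed for the “since 2000” figures is an approximation:

```python
# Back out the implied compound annual growth rate (CAGR) from a
# growth multiple over a number of years: multiple = (1 + r) ** years.

def implied_cagr(multiple, years):
    return multiple ** (1 / years) - 1

print(f"{implied_cagr(14, 17):.1%}")  # startups: 14x since 2000 (~17 years assumed)
print(f"{implied_cagr(9, 20):.1%}")   # AI publishing: 9x in 20 years
print(f"{implied_cagr(45, 30):.1%}")  # Stanford ML course: 45x in 30 years
```

All three multiples work out to sustained annual growth in roughly the 12–17 percent range, which is why even the 45-fold enrollment figure is less explosive than it first sounds: it accrued over three decades.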
In technical metrics, image and speech recognition are both approaching, if not surpassing, human-level performance. The authors noted that AI systems have excelled in such real-world applications as object detection, the ability to understand and answer questions and classification of photographic images of skin cancer cells.
Shoham noted that the report is still very U.S.-centric and will need a greater international presence as well as a greater diversity of voices. He said he also sees opportunities to fold in government and corporate investment in addition to the venture capital funds that are currently included.
In terms of human-level performance, the AI Index suggests that in some ways AI has already arrived. This is true in game-playing applications including chess, the Jeopardy! game show and, most recently, the game of Go. Nonetheless, the authors note that computers continue to lag considerably in the ability to generalize specific information into deeper meaning.
“AI has made truly amazing strides in the past decade,” Shoham said, “but computers still can’t exhibit the common sense or the general intelligence of even a 5-year-old.”
The AI Index was made possible by funding from AI100, Google, Microsoft and Toutiao. Data supporting the various metrics were provided by Elsevier, TrendKite, Indeed.com, Monster.com, the Google Trends Team, the Google Brain Team, Sand Hill Econometrics, VentureSource, Crunchbase, Electronic Frontier Foundation, EuroMatrix, Geoff Sutcliffe, Kevin Leyton-Brown and Holger Hoos.
You can find the AI Index here. They’re featuring their 2017 report but you can also find data (on the menu bar on the upper right side of your screen), along with a few provisos. I was curious as to whether any AI had been used to analyze the data and/or write the report. A very cursory look at the 2017 report did not answer that question. I’m fascinated by the failure to address what I think is an obvious question. It suggests that even very, very bright people can become blind and I suspect that’s why the group seems quite eager to get others involved, from the 2017 AI Index Report,
As the report’s limitations illustrate, the AI Index will always paint a partial picture. For this reason, we include subjective commentary from a cross-section of AI experts. This Expert Forum helps animate the story behind the data in the report and adds interpretation the report lacks.
Finally, where the experts’ dialogue ends, your opportunity to Get Involved begins [emphasis mine]. We will need the feedback and participation of a larger community to address the issues identified in this report, uncover issues we have omitted, and build a productive process for tracking activity and progress in Artificial Intelligence. (p. 8)
Unfortunately, it’s not clear how one becomes involved. Is there a forum or do you get in touch with one of the team leaders?
I wish them good luck with their project and imagine that these minor hiccups will be dealt with in the near term.
In no particular order, here are some Frankenstein bits and bobs in celebration of the 200th anniversary of the publication of Mary Shelley’s book.
The Frankenstein Bicentennial Project
This project at Arizona State University has been featured here a few times, most recently in an October 26, 2016 posting about an artist using a Roomba (robotic vacuum cleaner) in an artistic query and about the Frankenstein at 200 online exhibition.
A free, interactive, multiplatform experience for kids designed to inspire deeper engagement with STEM topics and promote the development of 21st century skills related to creative collaboration and critical thinking.
A collaborative, multimedia reading experiment with Mary Shelley’s timeless tale, examining the scientific, technological, political, and ethical dimensions of the novel, its historical context, and its enduring legacy.
A set of hands-on STEM making activities that use the Frankenstein story to inspire deeper conversations about scientific and technological creativity and social responsibility.
How to Make a Monster
In a February 22, 2018 article for the Guardian about her recent book, Kathryn Harkup delves into the science behind Mary Shelley’s Frankenstein (Note: Links have been removed),
The bicentenary of the publication of Mary Shelley’s Frankenstein: or the Modern Prometheus has meant a lot of people are re-examining this brilliant work of science fiction. My particular interest is the science fact behind the science fiction. How much real science influenced Mary Shelley? Could a real-life Victor Frankenstein have constructed a creature?
In terms of the technical aspects of building a creature from scraps, many people focus on the collecting of the raw materials and reanimation stages. It’s understandable as there are many great stories about grave-robbers and dissection rooms as well as electrical experiments that were performed on recently executed murderers. But there are quite a few stages between digging up dead bodies and reanimating a creature.
The months of tedious and fiddly surgery to bring everything together are often glossed over, but what virtually no one mentions is how difficult it would have been to keep the bits and pieces in a suitable state of preservation while Victor worked on his creation. Making a monster takes time, and bodies rot very quickly.
Preservation of anatomical material was of huge interest when Frankenstein was written, as it is now, though for very different reasons. Today the interest is in preserving organs and tissues suitable for transplant. Some individuals even want to cryogenically freeze their entire body in case future scientists are able to revive them and cure whatever disease caused their original death. In that respect the aims are not so different from what the fictional Victor Frankenstein was attempting two hundred years ago.
At the time Frankenstein is set, the late 18th century, few people were really thinking about organ transplant. Instead, tissue preservation was of concern for anatomy professors who wanted to maintain collections of interesting, unusual or instructive specimens to use as teaching aids for future students.
She provides fascinating insight into preservation techniques of the 18th century and their dangers,
To preserve soft tissues, various substances were injected into or used to coat or soak the dissected specimen. The substance in question had to be toxic enough to destroy mould and bacteria that could decompose the sample, but not corrosive or damaging to the tissues of the specimen itself.
Substances such as turpentine, mercury metal and mercury salts (which are even more toxic than the pure element) were all employed to stop the decay process in its tracks. Killing off bacteria and mould means that some vital process within them has been stopped; however, many processes that are critical to mould and bacteria are also necessary for humans, making these substances toxic to us.
Working in cramped, poorly ventilated conditions with minimal regard for health and safety, the substances anatomical curators were using day in and day out took a serious toll on their health. Anatomical curators were described as emaciated, prematurely aged and with a hacking cough. …
One of the most successful techniques for tissue preservation was bottling in alcohol. …
In the 18th century the University of Edinburgh handed over twelve gallons of whisky annually to the anatomy museum for the preservation of specimens. Possibly not all of those twelve gallons made it into the specimen jars. The nature of the curator’s work – the smell, the problems with vermin and toxic fumes – must have made the odd sip of whisky very tempting. Indeed, more than one curator was dismissed for being drunk on the job.
Shelley described Frankenstein working in a small attic room using candlelight to illuminate his work. Small rooms, toxic vapours, alcohol fumes and naked flames are not a healthy combination. No wonder Shelley wrote the work took such a toll on Frankenstein’s health.
The year 1818 saw the publication of one of the most influential science-fiction stories of all time. Frankenstein: Or, Modern Prometheus by Mary Shelley had a huge impact on gothic horror and science-fiction genres, and her creation has become part of our everyday culture, from cartoons to Hallowe’en costumes. Even the name ‘Frankenstein’ has become a by-word for evil scientists and dangerous experiments. How did a teenager with no formal education come up with the idea for an extraordinary novel such as Frankenstein?
Clues are dotted throughout Georgian science and popular culture. The years before the book’s publication saw huge advances in our understanding of the natural sciences, in areas such as electricity and physiology, for example. Sensational science demonstrations caught the imagination of the general public, while the newspapers were full of lurid tales of murderers and resurrectionists.
Making the Monster explores the scientific background behind Mary Shelley’s book. Is there any science fact behind the science fiction? And how might a real-life Victor Frankenstein have gone about creating his monster? From tales of volcanic eruptions, artificial life and chemical revolutions, to experimental surgery, ‘monsters’ and electrical experiments on human cadavers, Kathryn Harkup examines the science and scientists that influenced Shelley, and inspired her most famous creation.
The Frankenstein 2018 project is based at Volda University College in Norway, but aims to engage and include people from elsewhere in Norway and around the world.
The project is led by Timothy Saunders, an Associate Professor of English Literature and Culture at Volda University College.
If you would like to get in touch, either to offer comments on the website, to provide information about related projects or activities taking place around the world, or even to offer relevant material of your own, please write to me at email@example.com.
What a great idea and I wish the folks at Volda University College all the best.
The Monster Challenge
Washington University in St. Louis (WUSL; Missouri, US) is hosting a competition to create a ‘new Frankenstein’, from WUSL’s The Monster Challenge webpage,
On June 16, 1816, a 19-year-old woman sat quietly listening as her lover (the poet Percy Bysshe Shelley) and a small group of friends — including celebrated poet Lord Byron — discussed conducting a ghost-story contest. The couple was spending their holiday in a beautiful mansion on the banks of scenic Lake Geneva in Switzerland. As the conversation about ghost stories heated up, a discussion arose about the principle of life. Not surprisingly, the ensuing talk of graves and corpses led to a sleepless night filled with horrific nightmares for Mary Shelley. Later, she recalled her own contest entry began with eight words: “It was on a dreary night in November…” Just two years later, in 1818, that young woman, Mary Shelley, published her expanded submission as the novel Frankenstein, not only a classic of 19th-century fiction, but a work that has enjoyed immense influence on popular culture, science, medicine, philosophy and the arts all the way up to the present day.
THE MONSTER CHALLENGE
Commemorating the 200th anniversary of the novel’s publication in 1818, Washington University is hosting a competition open to WU students (full time and registered in fall 2018), both undergraduate and graduate. The submission deadline is October 15, 2018.
The prompt for our own WU “Monster Challenge” is “The New Frankenstein”:
If you learned of a contest today, similar to the one that inspired the publication of Mary Shelley’s Frankenstein in 1818, what new Frankenstein would you create? Winning entries will be those best exemplifying the spirit, tone and feeling of Frankenstein for our age.
Submissions are eligible in two categories: written (including poetry, fiction, nonfiction and theater; 5000 word limit) and visual (including new media, experimental media, sound art, performance art, and design). Only one submission is allowed per student or student collaboration group. The winners will be determined by a jury of faculty members and announced in the fall 2018 semester. Winning entries will also be featured on the Frankenstein Bicentennial website (frankenstein200.wustl.edu).
Through the generosity of Provost Holden Thorp’s office, winners will receive a cash prize as well as the opportunity to have their submission read, exhibited, and/or performed during the fall 2018 semester. Prizes are as follows:
WRITTEN CATEGORY: Grand Prize $1000; 2nd Prize $500; 3rd Prize $250
VISUAL CATEGORY: Grand Prize $1000; 2nd Prize $500; 3rd Prize $250
HOW TO SUBMIT
Please review the guidelines below and download the appropriate submission form … for your project.
All submissions are due by 3 pm on October 15, 2018.
Only one submission is allowed per student or student collaboration group.
Electronic submissions should be emailed to firstname.lastname@example.org along with the appropriate submission form.
Non-electronic submissions should be dropped off at the Performing Arts Department in Mallinckrodt Center, Room 312 (specific dates and times to be determined). All applicants submitting work here must also send an email to email@example.com with a digital image of the work and the appropriate submission form. Entries should fit into a case 74″ w x 87″ h x 23″ d. For exceptions, please contact Professor Patricia Olynyk (firstname.lastname@example.org).
For additional information about the contest, please contact the Interdisciplinary Project in the Humanities: email@example.com.
One of the most famous literary works of the last two centuries, Mary Shelley’s Frankenstein (1818) permeates our cultural imagination. A man of science makes dead matter live yet abandons his own creation. A creature is composed of human body parts yet denied a place in human society. The epic struggle that ensues between creator and creature poses enduring questions to all of us. What do we owe our non-human creations? How might the pursuit of scientific knowledge endanger or empower humanity? How do we combine social responsibility with our technological power to alter living matter? These moral quandaries drive the novel as well as our own hopes and fears about modernity.
Over the last 200 years, Frankenstein has also become one of our most culturally productive myths. The Black Frankenstein became a potent metaphor for racial otherness in the 19th century and remains so to this day. From Boris Karloff as the iconic Monster of 1931 to the transvestite Dr. Frank-N-Furter in The Rocky Horror Picture Show of 1975, the novel has inspired dozens of films and dramatizations. Female poets from Margaret Atwood to Liz Lochhead and Laurie Sheck continue to wrestle with the novel’s imaginative possibilities. And Frankenstein, of course, permeates our material culture. Think no further than Franken Berry cereal, Frankenstein action figures, and Frankenstein bed pillows.
Please join us at Washington University in St. Louis as we celebrate Mary Shelley’s iconic novel and its afterlives with a series of events organized by faculty, students and staff from across the arts, humanities and life sciences. Highlights include the conference Frankenstein at 200, sponsored by the Center for the Humanities; a special Frankenstein issue of The Common Reader; a staging of Nick Dear’s play Frankenstein; the symposium The Curren(t)cy of Frankenstein, sponsored by the Medical School; a film series; several lectures; and exhibits designed to showcase the university’s museum and library collections.
This site aggregates all events related to the celebration. Please visit again for updates!
They do have a page for Global Celebrations and while the listing isn’t really global at this point (I’m sure they’re hoping that will change) it does open up a number of possibilities for Frankenstein aficionados, experts, and enthusiasts,
Technologies of Frankenstein
Stevens Institute of Technology, College of Arts and Letters and IEEE History Center
The 200th anniversary year of the first edition of Mary Shelley’s Frankenstein: Or, The Modern Prometheus has drawn worldwide interest in revisiting the novel’s themes. What were those themes and what is their value to us in the early twenty-first century? In what ways might our tools of science and communication serve as an “elixir of life” since the age of Frankenstein?
Frankenstein@200 is a year-long series of academic courses and programs including a film festival, a play, a lecture series and an international Health Humanities Conference that will examine the numerous moral, scientific, sociological, ethical and spiritual dimensions of the work, and why Dr. Frankenstein and his monster still capture the moral imagination today.
San Jose State University, Santa Clara University, and University of San Francisco
During 2018, the San Francisco Bay area partners will host The Frankenstein Bicentennial. The novel brings together STEM fields with humanities & the arts in such a way as to engage almost every discipline and major. The project’s events will address timely issues of our world in Silicon Valley and the advent of technology – a critical topic with questions important to our academic, regional and world communities. The novel, because it has been so popular for 200 years, lives on in discussions about what it means to be human in a digital world.
Next performance: Monday Feb. 26, 2018; 7 PM
Extended through 2018!
“…it is a success of a show that should be considered something great in the realm of musical theater.”
“A musical love letter”
– Local Theatre NY
“…infused with enough emotion to send chills down the spine…”
– Local Theatre NY
“an ambitious theater piece that is refreshingly buoyed up by its music”
– Theater Scene
a new Off-Broadway musical by Eric B. Sirota
based on Mary Shelley’s classic novel
Presented by John Lant, Tamra Pica & Write Act Repertory
at St. Luke’s Theater in the heart of the theatre district
. . . a sweeping romantic musical, about the human need for love and companionship,
which honors its source material.
Performances Monday nights at 7 PM
tickets to performances into March currently on sale
(scroll down for performance schedule)
Contact us for Special Group Sales and Buyouts at: info@TheFrankensteinMusical.com
St. Luke’s Theatre
an Off-Broadway venue in the heart of the theatre district on “Restaurant Row”
308 West 46th Street (btwn. 8th and 9th Ave.)
– Book, Music & Lyrics: Eric B. Sirota
– Additional lyrics: Julia Sirota
– Director: Clint Hromsco
– Music Director: Austin Nuckols
(original music direction by Anessa Marie)
– Producers: John Lant, Tamra Pica and Write Act Repertory
– CAST: Jon Rose, Erick Sanchez-Canahuate, Gabriella Marzetta, Stephan Amenta, Cait Kiley, Adam Kee, Samantha Collette, Amy Londyn, Stephanie Lourenco Viegas, Bryan S. Walton
Eric Sirota developed Frankenstein under the working title of “Day of Wrath”, an Official Selection of the 2015 New York Musical Theatre Festival’s Reading Series
Feb 26, Mon; 7 PM
Mar 5, Mon; 7 PM
Tickets to later dates on sale soon. . .
March 12, 19, 24
April 2, 9, 16, 23, 30
May . . .
Jun . . .
running through 2018
2018 – Frankenstein bicentennial year!
The Purgatory Press
The Purgatory Press blog’s John Culbert (author and lecturer at the University of British Columbia) wrote a January 1, 2018 essay celebrating and examining Mary Shelley’s classic,
She was born in 1797, toward the end of the Little Ice Age. Wolves had been extirpated from the country, but not so long ago that one could forget. Man’s only predator in the British Isles was now a mental throwback. Does the shadow of extinction fall on the children of perpetrators? What strange gap is left in the mind of men suddenly raised from the humble status of prey?
In the winter of her sixteenth year, the river Thames froze in London for the last time. The final “Frost Fair,” a tradition dating back centuries, was held February 1814 on the river’s hard surface.
The following year, a volcano in present-day Indonesia erupted. It was the most powerful and destructive event of its kind in recorded history. Fallout caused a “volcanic winter” across the Northern Hemisphere. In 1816 – “the year without a summer” – she was in Switzerland, where she began writing her first novel, Frankenstein, published 200 years ago today — on January 1st, 1818.
Fascinating, yes? I encourage you to read the whole piece.
3–8 April (with special events on 28 March and 27–28 April)
The Science Museum is celebrating the 200th anniversary of Mary Shelley’s Frankenstein or the Modern Prometheus with a free festival exploring the science behind this cultural phenomenon.
Through immersive theatre, experimental storytelling and hands-on activities, visitors can examine the ethical and scientific questions surrounding the artificial creation of life. Families can step into Doctor Frankenstein’s shoes, creating a creature and bringing it to life using stop motion animation at our drop-in workshops.
In the Mystery at Frankenstein’s Lab visitors can solve puzzles and conduct experiments in an escape room-like interactive experience. Visitors are also invited to explore the Science Museum as you’ve never heard it before in It’s Alive, an immersive Frankenstein-themed audio tour. Both these activities have limited availability so pre-booking is advised.
In Pandemic, you decide how far Dr Victor should go to tackle a virus sweeping the world. Is it right to create new life to save others? You decide where to draw the line in this choose-your-own-adventure experience. Visitors can also see Humanity 2.0, a play created and performed by actor Emily Carding. Set in a post-apocalyptic future, the play examines what could happen if a benevolent AI recreated humanity.
As part of the festival, visitors will meet researchers at the cutting edge of science—from biochemists who manipulate DNA to engineers creating artificial intelligence—and, with our curators, discover fascinating scientific objects that could have influenced Shelley.
The Frankenstein Festival will run daily from 3–8 April at the Science Museum and is supported by players of People’s Postcode Lottery. Tickets for activities with limited availability are available from sciencemuseum.org.uk/Frankenstein.
Our free adult-only Frankenstein Lates on 28 March will focus on the darker themes of Shelley’s iconic novel, while the Promethean Tales Weekend on 27–28 April features panel discussions and special screenings of Terminator 2: Judgment Day and The Curse of Frankenstein in our IMAX cinema.
Frankenstein Festival activities include:
It’s Alive
An immersive audio tour created by Cmd+Shift in collaboration with the Science Museum. The tour takes 45 minutes and is limited to 15 people per session. Recommended for ages 8+. Tickets cost £3 and are available here.
Mystery at Frankenstein’s Lab
This interactive, theatrical puzzle experience has been created by Atomic Force Productions, in collaboration with the Science Museum. Each session lasts 45 minutes and is limited to 10 people per session. Recommended for ages 12+, under 16s must be accompanied by an adult. Tickets cost £10 and are available here.
Create Your Own Creature
Get hands on at our drop-in workshops and create your very own creature. Then bring your creature to life with stop motion animation. This activity takes approximately 20 minutes and is suitable for all ages.
Humanity 2.0 (3–5 April)
Step into a dystopian future and help shape the future of humanity in this unique interactive play created and performed by Emily Carding. Her full-body make-up was created by award-winning body painter Victoria Gugenheim in collaboration with the Science Museum. The play has a run time of 45 minutes and is recommended for ages 12+.
Pandemic (5–8 April)
This choose-your-own-adventure film puts you in control of a psychological thriller. Your decisions will guide Dr Victor on their quest to create artificial life.
Pandemic was created by John Bradburn in collaboration with the Science Museum. The film contains moderate psychological threat and horror sequences that some people may find disturbing. The experience lasts 45 minutes and is recommended for ages 14+. Tickets are free and are available here.
Frankenstein Festival events include:
Wednesday 28 March, 18.45–22.00
Join us for a fun free evening of events, workshops and screenings as we ask the question ‘should we create life?’
Lates is a free themed event for adults at the Science Museum on the last Wednesday of each month. Find out more about Lates at sciencemuseum.org.uk/Lates.
Artificial Life: Should We, Could We, Will We?
Wednesday 28 March as part of the Frankenstein Lates
A panel of expert scientists and researchers will discuss artificial life. Just how close are we to creating fully synthetic life and will this be achieved by biological or digital means?
Discussing those questions will be Murray Shanahan, Professor of Cognitive Robotics at Imperial College and scientific advisor for the hit movie Ex Machina; Susan Stepney, Vice President of the International Society for Artificial Life; and Ben Russell, Lead Curator of the Science Museum’s acclaimed 2017 exhibition Robots. Further speakers to be announced.
Promethean Tales Weekend
Terminator 2: Judgment Day + Panel Discussion
Friday 27 April, 19.30–22.35 (Doors open 19.00)
Tickets: £8, £6 Concessions
Age 15 and above
In part one of our Promethean Tales Weekend celebrating the 200th anniversary of Mary Shelley’s Frankenstein, we will be joined by a panel of experts in science, film and literature to discuss the topic of ‘Promethean Tales through the ages’ ahead of a screening of Terminator 2: Judgment Day.
The Curse of Frankenstein and Q&A with Sir Christopher Frayling
Saturday 28 April, 18.00–20.30 (Doors open 17.30)
Tickets: £8, £6 Concessions
In part two of our Promethean Tales Weekend, we are joined by Sir Christopher Frayling, author of Frankenstein: The First Two Hundred Years, to discuss the life and work of Shelley, the origins of her seminal story and its cultural impact.
The screening of The Curse of Frankenstein will be followed by a book signing with copies of Sir Christopher’s book available to purchase on the night.
You can find out more about the festival and get tickets to events, here.
This initiative seems like a lot of fun, from the Frankenreads homepage,
Frankenreads is an NEH [US National Endowment for the Humanities]-funded initiative of the Keats-Shelley Association of America and partners to hold a series of events and initiatives in honor of the 200th anniversary of Mary Shelley’s Frankenstein, featuring especially an international series of readings of the full text of the novel on Halloween 2018.
They have a very open approach as their FAQs webpage attests to,
Why host a Frankenreads event?
Frankenstein, or, The Modern Prometheus appeals to novice and expert readers alike and is a work that remains highly relevant to contemporary issues. Thus it is perhaps no surprise that (according to the Open Syllabus project) Frankenstein is the most frequently taught work of literature in college English courses and the fifth most frequently taught book in college courses in all disciplines. It is certainly one of the most read British novels in the world. Hosting a Frankenreads event is an easy way both to celebrate the 200th anniversary of this important work and to foster discussion about issues such as ethics in science and the human tendency to demonize the unfamiliar. By participating in Frankenreads, you can make sure that your thoughts about Frankenstein are part of a global conversation.
What kind of event can I host?
You can host any kind of event you like! Below are some suggestions. Click on the event type for further guidance.
Complete Reading — A live, all-day reading (about 9 hours) of the full text of Frankenstein
Viewing — A community viewing on Halloween 2018 of the livestream of the NEH reading or other online events
Other — Whatever other kind of in-person or online event you can think of!
Should I hold in-person events or online events?
Either or both! We encourage you to record in-person events and upload video to our YouTube channel. We will also be providing advice on holding events via Google Hangouts.
When should I hold the event?
You can hold a Frankenreads event any time you like, but we encourage you to schedule an event during Frankenweek: October 24-31, 2018.
Why post my event on the Frankenreads website?
Posting your event on the Frankenreads website enables the Frankenreads team to publicize your event widely, to give you help with your event, and to connect you with others who are holding nearby or similar events.
How do I post my event on the Frankenreads website?
The compound eye of a fly inspired Stanford researchers to create a compound solar cell consisting of perovskite microcells encapsulated in a hexagon-shaped scaffold. (Image credit: Thomas Shahan/Creative Commons)
An August 31, 2017 news item on Nanowerk describes research into solar cells being performed at Stanford University (Note: A link has been removed),
Packing tiny solar cells together, like micro-lenses in the compound eye of an insect, could pave the way to a new generation of advanced photovoltaics, say Stanford University scientists.
In a new study, the Stanford team used the insect-inspired design to protect a fragile photovoltaic material called perovskite from deteriorating when exposed to heat, moisture or mechanical stress. The results are published in the journal Energy & Environmental Science (“Scaffold-reinforced perovskite compound solar cells”).
“Perovskites are promising, low-cost materials that convert sunlight to electricity as efficiently as conventional solar cells made of silicon,” said Reinhold Dauskardt, a professor of materials science and engineering and senior author of the study. “The problem is that perovskites are extremely unstable and mechanically fragile. They would barely survive the manufacturing process, let alone be durable long term in the environment.”
Most solar devices, like rooftop panels, use a flat, or planar, design. But that approach doesn’t work well with perovskite solar cells.
“Perovskites are the most fragile materials ever tested in the history of our lab,” said graduate student Nicholas Rolston, a co-lead author of the E&ES study. “This fragility is related to the brittle, salt-like crystal structure of perovskite, which has mechanical properties similar to table salt.”
Eye of the fly
To address the durability challenge, the Stanford team turned to nature.
“We were inspired by the compound eye of the fly, which consists of hundreds of tiny segmented eyes,” Dauskardt explained. “It has a beautiful honeycomb shape with built-in redundancy: If you lose one segment, hundreds of others will operate. Each segment is very fragile, but it’s shielded by a scaffold wall around it.”
Scaffolds in a compound solar cell filled with perovskite after fracture testing. (Image credit: Dauskardt Lab/Stanford University)
Using the compound eye as a model, the researchers created a compound solar cell consisting of a vast honeycomb of perovskite microcells, each encapsulated in a hexagon-shaped scaffold just 0.02 inches (500 microns) wide.
“The scaffold is made of an inexpensive epoxy resin widely used in the microelectronics industry,” Rolston said. “It’s resilient to mechanical stresses and thus far more resistant to fracture.”
Tests conducted during the study revealed that the scaffolding had little effect on how efficiently perovskite converted light into electricity.
“We got nearly the same power-conversion efficiencies out of each little perovskite cell that we would get from a planar solar cell,” Dauskardt said. “So we achieved a huge increase in fracture resistance with no penalty for efficiency.”
But could the new device withstand the kind of heat and humidity that conventional rooftop solar panels endure?
To find out, the researchers exposed encapsulated perovskite cells to temperatures of 185 °F (85 °C) and 85 percent relative humidity for six weeks. Despite these extreme conditions, the cells continued to generate electricity at relatively high rates of efficiency.
Dauskardt and his colleagues have filed a provisional patent for the new technology. To improve efficiency, they are studying new ways to scatter light from the scaffold into the perovskite core of each cell.
“We are very excited about these results,” he said. “It’s a new way of thinking about designing solar cells. These scaffold cells also look really cool, so there are some interesting aesthetic possibilities for real-world applications.”
Researchers have also made this image available,
Caption: A compound solar cell illuminated from a light source below. Hexagonal scaffolds are visible in the regions coated by a silver electrode. The new solar cell design could help scientists overcome a major roadblock to the development of perovskite photovoltaics. Credit: Dauskardt Lab/Stanford University
Not quite as weirdly beautiful as the insect eyes.
By integrating nanomaterials, researchers have developed a new technique for building a 3D computer chip capable of handling today’s huge amounts of data. Weirdly, the first two paragraphs of a July 5, 2017 news item on Nanowerk do not convey the main point (Note: A link has been removed),
As embedded intelligence is finding its way into ever more areas of our lives, fields ranging from autonomous driving to personalized medicine are generating huge amounts of data. But just as the flood of data is reaching massive proportions, the ability of computer chips to process it into useful information is stalling.
Now, researchers at Stanford University and MIT have built a new chip to overcome this hurdle. The results are published today in the journal Nature (“Three-dimensional integration of nanotechnologies for computing and data storage on a single chip”), by lead author Max Shulaker, an assistant professor of electrical engineering and computer science at MIT. Shulaker began the work as a PhD student alongside H.-S. Philip Wong and his advisor Subhasish Mitra, professors of electrical engineering and computer science at Stanford. The team also included professors Roger Howe and Krishna Saraswat, also from Stanford.
This image helps to convey the main points,
Instead of relying on silicon-based devices, a new chip uses carbon nanotubes and resistive random-access memory (RRAM) cells. The two are built vertically over one another, making a new, dense 3-D computer architecture with interleaving layers of logic and memory. Courtesy MIT
As I have been quite impressed with their science writing, it was a bit surprising to find that the Massachusetts Institute of Technology (MIT) had issued this news release (news item) without following the ‘rules’, i.e., covering as many of the journalistic questions (Who, What, Where, When, Why, and, sometimes, How) as possible in the first sentence/paragraph. It is written more in the style of a magazine article, so the details take a while to emerge. From a July 5, 2017 MIT news release, which originated the news item,
Computers today comprise different chips cobbled together. There is a chip for computing and a separate chip for data storage, and the connections between the two are limited. As applications analyze increasingly massive volumes of data, the limited rate at which data can be moved between different chips is creating a critical communication “bottleneck.” And with limited real estate on the chip, there is not enough room to place them side by side, even as they have been miniaturized (a phenomenon known as Moore’s Law).
To make matters worse, the underlying devices, transistors made from silicon, are no longer improving at the historic rate that they have for decades.
The new prototype chip is a radical change from today’s chips. It uses multiple nanotechnologies, together with a new computer architecture, to reverse both of these trends.
Instead of relying on silicon-based devices, the chip uses carbon nanotubes, which are sheets of 2-D graphene formed into nanocylinders, and resistive random-access memory (RRAM) cells, a type of nonvolatile memory that operates by changing the resistance of a solid dielectric material. The researchers integrated over 1 million RRAM cells and 2 million carbon nanotube field-effect transistors, making the most complex nanoelectronic system ever made with emerging nanotechnologies.
The RRAM and carbon nanotubes are built vertically over one another, making a new, dense 3-D computer architecture with interleaving layers of logic and memory. By inserting ultradense wires between these layers, this 3-D architecture promises to address the communication bottleneck.
However, such an architecture is not possible with existing silicon-based technology, according to the paper’s lead author, Max Shulaker, who is a core member of MIT’s Microsystems Technology Laboratories. “Circuits today are 2-D, since building conventional silicon transistors involves extremely high temperatures of over 1,000 degrees Celsius,” says Shulaker. “If you then build a second layer of silicon circuits on top, that high temperature will damage the bottom layer of circuits.”
The key in this work is that carbon nanotube circuits and RRAM memory can be fabricated at much lower temperatures, below 200 °C. “This means they can be built up in layers without harming the circuits beneath,” Shulaker says.
This provides several simultaneous benefits for future computing systems. “The devices are better: Logic made from carbon nanotubes can be an order of magnitude more energy-efficient compared to today’s logic made from silicon, and similarly, RRAM can be denser, faster, and more energy-efficient compared to DRAM,” Wong says, referring to a conventional memory known as dynamic random-access memory.
“In addition to improved devices, 3-D integration can address another key consideration in systems: the interconnects within and between chips,” Saraswat adds.
“The new 3-D computer architecture provides dense and fine-grained integration of computing and data storage, drastically overcoming the bottleneck from moving data between chips,” Mitra says. “As a result, the chip is able to store massive amounts of data and perform on-chip processing to transform a data deluge into useful information.”
To demonstrate the potential of the technology, the researchers took advantage of the ability of carbon nanotubes to also act as sensors. On the top layer of the chip they placed over 1 million carbon nanotube-based sensors, which they used to detect and classify ambient gases.
Due to the layering of sensing, data storage, and computing, the chip was able to measure each of the sensors in parallel, and then write directly into its memory, generating huge bandwidth, Shulaker says.
Three-dimensional integration is the most promising approach to continue the technology scaling path set forth by Moore’s Law, allowing an increasing number of devices to be integrated per unit volume, according to Jan Rabaey, a professor of electrical engineering and computer science at the University of California at Berkeley, who was not involved in the research.
“It leads to a fundamentally different perspective on computing architectures, enabling an intimate interweaving of memory and logic,” Rabaey says. “These structures may be particularly suited for alternative learning-based computational paradigms such as brain-inspired systems and deep neural nets, and the approach presented by the authors is definitely a great first step in that direction.”
“One big advantage of our demonstration is that it is compatible with today’s silicon infrastructure, both in terms of fabrication and design,” says Howe.
“The fact that this strategy is both CMOS [complementary metal-oxide-semiconductor] compatible and viable for a variety of applications suggests that it is a significant step in the continued advancement of Moore’s Law,” says Ken Hansen, president and CEO of the Semiconductor Research Corporation, which supported the research. “To sustain the promise of Moore’s Law economics, innovative heterogeneous approaches are required as dimensional scaling is no longer sufficient. This pioneering work embodies that philosophy.”
The team is working to improve the underlying nanotechnologies, while exploring the new 3-D computer architecture. For Shulaker, the next step is working with Massachusetts-based semiconductor company Analog Devices to develop new versions of the system that take advantage of its ability to carry out sensing and data processing on the same chip.
So, for example, the devices could be used to detect signs of disease by sensing particular compounds in a patient’s breath, says Shulaker.
“The technology could not only improve traditional computing, but it also opens up a whole new range of applications that we can target,” he says. “My students are now investigating how we can produce chips that do more than just computing.”
“This demonstration of the 3-D integration of sensors, memory, and logic is an exceptionally innovative development that leverages current CMOS technology with the new capabilities of carbon nanotube field–effect transistors,” says Sam Fuller, CTO emeritus of Analog Devices, who was not involved in the research. “This has the potential to be the platform for many revolutionary applications in the future.”
This work was funded by the Defense Advanced Research Projects Agency [DARPA], the National Science Foundation, Semiconductor Research Corporation, STARnet SONIC, and member companies of the Stanford SystemX Alliance.
The notion of bias in artificial intelligence (AI)/algorithms/robots is gaining prominence (links to other posts featuring algorithms and bias are at the end of this post). The latest research concerns machine learning where an artificial intelligence system trains itself with ordinary human language from the internet. From an April 13, 2017 American Association for the Advancement of Science (AAAS) news release on EurekAlert,
As artificial intelligence systems “learn” language from existing texts, they exhibit the same biases that humans do, a new study reveals. The results not only provide a tool for studying prejudicial attitudes and behavior in humans, but also emphasize how language is intimately intertwined with historical biases and cultural stereotypes.
A common way to measure biases in humans is the Implicit Association Test (IAT), where subjects are asked to pair two concepts they find similar, in contrast to two concepts they find different; their response times can vary greatly, indicating how well they associated one word with another (for example, people are more likely to associate “flowers” with “pleasant,” and “insects” with “unpleasant”). Here, Aylin Caliskan and colleagues developed a similar way to measure biases in AI systems that acquire language from human texts; rather than measuring lag time, however, they used the statistical number of associations between words, analyzing roughly 2.2 million words in total.
Their results demonstrate that AI systems retain biases seen in humans. For example, studies of human behavior show that the exact same resume is 50% more likely to result in an opportunity for an interview if the candidate’s name is European American rather than African-American. Indeed, the AI system was more likely to associate European American names with “pleasant” stimuli (e.g. “gift,” or “happy”). In terms of gender, the AI system also reflected human biases, where female words (e.g., “woman” and “girl”) were more associated than male words with the arts, compared to mathematics. In a related Perspective, Anthony G. Greenwald discusses these findings and how they could be used to further analyze biases in the real world.
In debates over the future of artificial intelligence, many experts think of the new systems as coldly logical and objectively rational. But in a new study, researchers have demonstrated how machines can be reflections of us, their creators, in potentially problematic ways. Common machine learning programs, when trained with ordinary human language available online, can acquire cultural biases embedded in the patterns of wording, the researchers found. These biases range from the morally neutral, like a preference for flowers over insects, to objectionable views of race and gender.
Identifying and addressing possible bias in machine learning will be critically important as we increasingly turn to computers for processing the natural language humans use to communicate, for instance in doing online text searches, image categorization and automated translations.
“Questions about fairness and bias in machine learning are tremendously important for our society,” said researcher Arvind Narayanan, an assistant professor of computer science and an affiliated faculty member at the Center for Information Technology Policy (CITP) at Princeton University, as well as an affiliate scholar at Stanford Law School’s Center for Internet and Society. “We have a situation where these artificial intelligence systems may be perpetuating historical patterns of bias that we might find socially unacceptable and which we might be trying to move away from.”
The paper, “Semantics derived automatically from language corpora contain human-like biases,” was published April 14 in Science. Its lead author is Aylin Caliskan, a postdoctoral research associate and a CITP fellow at Princeton; Joanna Bryson, a reader at the University of Bath and a CITP affiliate, is a coauthor.
As a touchstone for documented human biases, the study turned to the Implicit Association Test, used in numerous social psychology studies since its development at the University of Washington in the late 1990s. The test measures response times (in milliseconds) by human subjects asked to pair word concepts displayed on a computer screen. Response times are far shorter, the Implicit Association Test has repeatedly shown, when subjects are asked to pair two concepts they find similar, versus two concepts they find dissimilar.
Take flower types, like “rose” and “daisy,” and insects like “ant” and “moth.” These words can be paired with pleasant concepts, like “caress” and “love,” or unpleasant notions, like “filth” and “ugly.” People more quickly associate the flower words with pleasant concepts, and the insect terms with unpleasant ideas.
The Princeton team devised an experiment using a program that essentially functions as a machine learning version of the Implicit Association Test. Called GloVe, and developed by Stanford University researchers, the popular, open-source program is of the sort that a startup machine learning company might use at the heart of its product. The GloVe algorithm can represent the co-occurrence statistics of words in, say, a 10-word window of text. Words that often appear near one another have a stronger association than those that seldom do.
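The windowed co-occurrence counting that GloVe builds on can be sketched in a few lines of Python. This is an illustrative toy only: the sample text, window size, and raw-count "association" are invented for this post, and GloVe itself fits dense vectors to statistics like these over billions of words rather than using raw counts directly.

```python
# Toy sketch of windowed co-occurrence counting, the raw statistic
# underlying GloVe-style word vectors. Everything here is invented
# for illustration, not taken from the study itself.
from collections import Counter

def cooccurrence_counts(tokens, window=2):
    """Count how often each unordered word pair appears within `window` tokens."""
    counts = Counter()
    for i, w in enumerate(tokens):
        for j in range(i + 1, min(i + 1 + window, len(tokens))):
            pair = tuple(sorted((w, tokens[j])))
            counts[pair] += 1
    return counts

text = "the rose is lovely the ant is ugly the rose is lovely".split()
counts = cooccurrence_counts(text, window=2)

# Words that often appear near one another accumulate larger counts,
# which is the signal embedding models compress into vectors.
assert counts[("is", "rose")] > counts[("ant", "rose")]
```

In a real corpus the counts are weighted by distance within the window and fed into a least-squares fit over word vectors; the point here is only that "association" starts life as simple proximity statistics.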
The Princeton researchers turned GloVe loose on a huge trawl of contents from the World Wide Web, containing 840 billion words. Within this large sample of written human culture, Narayanan and colleagues then examined sets of so-called target words, like “programmer, engineer, scientist” and “nurse, teacher, librarian” alongside two sets of attribute words, such as “man, male” and “woman, female,” looking for evidence of the kinds of biases humans can unwittingly possess.
In the results, innocent, inoffensive biases, like for flowers over bugs, showed up, but so did examples along lines of gender and race. As it turned out, the Princeton machine learning experiment managed to replicate the broad substantiations of bias found in select Implicit Association Test studies over the years that have relied on live, human subjects.
For instance, the machine learning program associated female names more with familial attribute words, like “parents” and “wedding,” than male names. In turn, male names had stronger associations with career attributes, like “professional” and “salary.” Of course, results such as these are often just objective reflections of the true, unequal distributions of occupation types with respect to gender, like how 77 percent of computer programmers are male, according to the U.S. Bureau of Labor Statistics.
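The association measure itself is easy to sketch: a target word is scored by how much more cosine-similar its vector is to one attribute set than to another. The tiny 2-D vectors below are invented for illustration (the study used high-dimensional GloVe vectors), so only the shape of the calculation, not the numbers, reflects the actual work.

```python
# Minimal sketch of a differential-association score: mean cosine
# similarity of a target word to attribute set A minus its mean cosine
# similarity to attribute set B. All vectors are made up for this post.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

def association(w, A, B):
    """Positive if w leans toward attribute set A, negative toward B."""
    return (sum(cosine(w, a) for a in A) / len(A)
            - sum(cosine(w, b) for b in B) / len(B))

# Invented toy vectors: first axis loosely encodes "career", second "family".
vec = {
    "engineer": (0.9, 0.1), "nurse": (0.2, 0.8),
    "salary": (1.0, 0.0), "professional": (0.9, 0.2),
    "wedding": (0.1, 0.9), "parents": (0.0, 1.0),
}
career = [vec["salary"], vec["professional"]]
family = [vec["wedding"], vec["parents"]]

# In this toy space, "engineer" leans career-ward and "nurse" family-ward.
assert association(vec["engineer"], career, family) > 0
assert association(vec["nurse"], career, family) < 0
```

The published test aggregates scores like this over whole sets of target words (names, occupations) and reports an effect size plus a permutation-test p-value; this sketch shows only the per-word building block.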
Yet bias that correctly mirrors occupational distributions can still have pernicious, sexist effects. One example: machine learning programs that naively process foreign languages can produce gender-stereotyped sentences. The Turkish language uses a gender-neutral third-person pronoun, “o.” Plugged into the well-known online translation service Google Translate, however, the Turkish sentences “o bir doktor” and “o bir hemşire” with this gender-neutral pronoun are translated into English as “he is a doctor” and “she is a nurse.”
“This paper reiterates the important point that machine learning methods are not ‘objective’ or ‘unbiased’ just because they rely on mathematics and algorithms,” said Hanna Wallach, a senior researcher at Microsoft Research New York City, who was not involved in the study. “Rather, as long as they are trained using data from society and as long as society exhibits biases, these methods will likely reproduce these biases.”
Another objectionable example harkens back to a well-known 2004 paper by Marianne Bertrand of the University of Chicago Booth School of Business and Sendhil Mullainathan of Harvard University. The economists sent out close to 5,000 identical resumes to 1,300 job advertisements, changing only the applicants’ names to be either traditionally European American or African American. The former group was 50 percent more likely to be offered an interview than the latter. In an apparent corroboration of this bias, the new Princeton study demonstrated that a set of African American names had more unpleasantness associations than a European American set.
Computer programmers might hope to prevent cultural stereotype perpetuation through the development of explicit, mathematics-based instructions for the machine learning programs underlying AI systems. Not unlike how parents and mentors try to instill concepts of fairness and equality in children and students, coders could endeavor to make machines reflect the better angels of human nature.
“The biases that we studied in the paper are easy to overlook when designers are creating systems,” said Narayanan. “The biases and stereotypes in our society reflected in our language are complex and longstanding. Rather than trying to sanitize or eliminate them, we should treat biases as part of the language and establish an explicit way in machine learning of determining what we consider acceptable and unacceptable.”
Here’s a link to and a citation for the Princeton paper,
There’s also a book which makes some of the current use of AI programmes and big data quite accessible reading: Cathy O’Neil’s ‘Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy’.
There are two news bits about game-changing electronics, one from the UK and the other from the US.
United Kingdom (UK)
An April 3, 2017 news item on Azonano announces the possibility of a future golden age of electronics courtesy of the University of Exeter,
Engineering experts from the University of Exeter have come up with a breakthrough way to create the smallest, quickest, highest-capacity memories for transparent and flexible applications that could lead to a future golden age of electronics.
Engineering experts from the University of Exeter have developed an innovative new memory using a hybrid of graphene oxide and titanium oxide. Their devices are low cost and eco-friendly to produce, and are also perfectly suited for use in flexible electronic devices such as ‘bendable’ mobile phone, computer and television screens, and even ‘intelligent’ clothing.
Crucially, these devices may also have the potential to offer a cheaper and more adaptable alternative to ‘flash memory’, which is currently used in many common devices such as memory cards, graphics cards and USB computer drives.
The research team insist that these innovative new devices have the potential not only to revolutionise how data is stored, but also to take flexible electronics to a new age in terms of speed, efficiency and power.
Professor David Wright, an Electronic Engineering expert from the University of Exeter and lead author of the paper said: “Using graphene oxide to produce memory devices has been reported before, but they were typically very large, slow, and aimed at the ‘cheap and cheerful’ end of the electronics goods market.
“Our hybrid graphene oxide-titanium oxide memory is, in contrast, just 50 nanometres long and 8 nanometres thick and can be written to and read from in less than five nanoseconds – with one nanometre being one billionth of a metre and one nanosecond a billionth of a second.”
Professor Craciun, a co-author of the work, added: “Being able to improve data storage is the backbone of tomorrow’s knowledge economy, as well as industry on a global scale. Our work offers the opportunity to completely transform graphene-oxide memory technology, and the potential and possibilities it offers.”
As electronics become increasingly pervasive in our lives – from smart phones to wearable sensors – so too does the ever-rising amount of electronic waste they create. A United Nations Environment Program report found that almost 50 million tons of electronic waste were thrown out in 2017, more than 20 percent higher than in 2015.
Troubled by this mounting waste, Stanford engineer Zhenan Bao and her team are rethinking electronics. “In my group, we have been trying to mimic the function of human skin to think about how to develop future electronic devices,” Bao said. She described how skin is stretchable, self-healable and also biodegradable – an attractive list of characteristics for electronics. “We have achieved the first two [flexible and self-healing], so the biodegradability was something we wanted to tackle.”
The team created a flexible electronic device that can easily degrade just by adding a weak acid like vinegar. The results were published in the Proceedings of the National Academy of Sciences (“Biocompatible and totally disintegrable semiconducting polymer for ultrathin and ultralightweight transient electronics”).
“This is the first example of a semiconductive polymer that can decompose,” said lead author Ting Lei, a postdoctoral fellow working with Bao.
In addition to the polymer – essentially a flexible, conductive plastic – the team developed a degradable electronic circuit and a new biodegradable substrate material for mounting the electrical components. This substrate supports the electrical components, flexing and molding to rough and smooth surfaces alike. When the electronic device is no longer needed, the whole thing can biodegrade into nontoxic components.
Bao, a professor of chemical engineering and materials science and engineering, had previously created a stretchable electrode modeled on human skin. That material could bend and twist in a way that could allow it to interface with the skin or brain, but it couldn’t degrade. That limited its application for implantable devices and – important to Bao – contributed to waste.
The flexible semiconductor can adhere to smooth or rough surfaces and biodegrade to nontoxic products. (Image credit: Bao lab)
Bao said that creating a robust material that is both a good electrical conductor and biodegradable was a challenge, considering traditional polymer chemistry. “We have been trying to think how we can achieve both great electronic property but also have the biodegradability,” Bao said.
Eventually, the team found that by tweaking the chemical structure of the flexible material it would break apart under mild stressors. “We came up with an idea of making these molecules using a special type of chemical linkage that can retain the ability for the electron to smoothly transport along the molecule,” Bao said. “But also this chemical bond is sensitive to weak acid – even weaker than pure vinegar.” The result was a material that could carry an electronic signal but break down without requiring extreme measures.
In addition to the biodegradable polymer, the team developed a new type of electrical component and a substrate material that attaches to the entire electronic component. Electronic components are usually made of gold. But for this device, the researchers crafted components from iron. Bao noted that iron is a very environmentally friendly product and is nontoxic to humans.
The researchers created the substrate, which carries the electronic circuit and the polymer, from cellulose. Cellulose is the same substance that makes up paper. But unlike paper, the team altered cellulose fibers so the “paper” is transparent and flexible, while still breaking down easily. The thin film substrate allows the electronics to be worn on the skin or even implanted inside the body.
From implants to plants
The combination of a biodegradable conductive polymer and substrate makes the electronic device useful in a plethora of settings – from wearable electronics to large-scale environmental surveys with sensor dusts.
“We envision these soft patches that are very thin and conformable to the skin that can measure blood pressure, glucose value, sweat content,” Bao said. A person could wear a specifically designed patch for a day or week, then download the data. According to Bao, this short-term use of disposable electronics seems a perfect fit for a degradable, flexible design.
And it’s not just for skin surveys: the biodegradable substrate, polymers and iron electrodes make the entire component compatible with insertion into the human body. The polymer breaks down to product concentrations much lower than the published acceptable levels found in drinking water. Although the polymer was found to be biocompatible, Bao said that more studies would need to be done before implants are a regular occurrence.
Biodegradable electronics have the potential to go far beyond collecting heart disease and glucose data. These components could be used in places where surveys cover large areas in remote locations. Lei described a research scenario where biodegradable electronics are dropped by airplane over a forest to survey the landscape. “It’s a very large area and very hard for people to spread the sensors,” he said. “Also, if you spread the sensors, it’s very hard to gather them back. You don’t want to contaminate the environment so we need something that can be decomposed.” Instead of plastic littering the forest floor, the sensors would biodegrade away.
As the number of electronics increase, biodegradability will become more important. Lei is excited by their advancements and wants to keep improving performance of biodegradable electronics. “We currently have computers and cell phones and we generate millions and billions of cell phones, and it’s hard to decompose,” he said. “We hope we can develop some materials that can be decomposed so there is less waste.”
Other authors on the study include Ming Guan, Jia Liu, Hung-Cheng Lin, Raphael Pfattner, Leo Shaw, Allister McGuire, and Jeffrey Tok of Stanford University; Tsung-Ching Huang of Hewlett Packard Enterprise; and Lei-Lai Shao and Kwang-Ting Cheng of University of California, Santa Barbara.
The research was funded by the Air Force Office for Scientific Research; BASF; Marie Curie Cofund; Beatriu de Pinós fellowship; and the Kodak Graduate Fellowship.
Here’s a link to and a citation for the team’s latest paper,
The mention of cellulose in the second item piqued my interest so I checked to see if they’d used nanocellulose. No, they did not. Microcrystalline cellulose powder was used to constitute a cellulose film but they found a way to render this film at the nanoscale. From the Stanford paper (Note: Links have been removed),
… Moreover, cellulose films have been previously used as biodegradable substrates in electronics (28–30). However, these cellulose films are typically made with thicknesses well over 10 μm and thus cannot be used to fabricate ultrathin electronics with substrate thicknesses below 1–2 μm (7, 18, 19). To the best of our knowledge, there have been no reports on ultrathin (1–2 μm) biodegradable substrates for electronics. Thus, to realize them, we subsequently developed a method described herein to obtain ultrathin (800 nm) cellulose films (Fig. 1B and SI Appendix, Fig. S8). First, microcrystalline cellulose powders were dissolved in LiCl/N,N-dimethylacetamide (DMAc) and reacted with hexamethyldisilazane (HMDS) (31, 32), providing trimethylsilyl-functionalized cellulose (TMSC) (Fig. 1B). To fabricate films or devices, TMSC in chlorobenzene (CB) (70 mg/mL) was spin-coated on a thin dextran sacrificial layer. The TMSC film was measured to be 1.2 μm. After hydrolyzing the film in 95% acetic acid vapor for 2 h, the trimethylsilyl groups were removed, giving a 400-nm-thick cellulose film. The film thickness significantly decreased to one-third of the original film thickness, largely due to the removal of the bulky trimethylsilyl groups. The hydrolyzed cellulose film is insoluble in most organic solvents, for example, toluene, THF, chloroform, CB, and water. Thus, we can sequentially repeat the above steps to obtain an 800-nm-thick film, which is robust enough for further device fabrication and peel-off. By soaking the device in water, the dextran layer is dissolved, starting from the edges of the device to the center. This process ultimately releases the ultrathin substrate and leaves it floating on water surface (Fig. 3A, Inset).
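The thickness arithmetic in that passage can be checked in a couple of lines, taking only the numbers quoted above (a 1.2 μm as-spun TMSC layer shrinking to one-third on hydrolysis, with the coat-and-hydrolyze cycle repeated):

```python
# Back-of-envelope check of the film-thickness arithmetic in the quoted
# passage: each spin-coat/hydrolysis cycle turns a 1.2 um TMSC layer into
# a 400 nm cellulose layer (one-third, after the bulky trimethylsilyl
# groups are removed), and the cycle is repeated to stack layers.
TMSC_THICKNESS_NM = 1200           # as-spun trimethylsilyl cellulose film
SHRINK_FACTOR = 3                  # thickness drops to one-third on hydrolysis
cellulose_per_cycle = TMSC_THICKNESS_NM // SHRINK_FACTOR

cycles = 2                         # the paper repeats the steps once
final_thickness = cycles * cellulose_per_cycle

assert cellulose_per_cycle == 400  # matches the 400-nm single film
assert final_thickness == 800      # matches the 800-nm substrate
```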
Finally, I don’t have any grand thoughts; it’s just interesting to see different approaches to flexible electronics.