I got the notice for this special issue of NanoEthics (After the hype is before the hype) in my email this morning (April 16, 2019). Not being familiar with the journal I did a little searching.
Studies of New and Emerging Technologies
Editor-in-Chief: Christopher Coenen
ISSN: 1871-4757 (print version)
ISSN: 1871-4765 (electronic version)
Journal no. 11569
…
Provides a needed forum for informed discussion of ethical and social concerns related to nanotechnology
Counterbalances fragmented, opinionated public discussion
Discussion is informed by the physical, biological and social sciences and the law
Nanoscale technologies are surrounded by both hype and fear. Optimists suggest they are desperately needed to solve problems of terrorism, global warming, clean water, land degradation and public health. Pessimists fear the loss of privacy and autonomy, “grey goo” and weapons of mass destruction, and unforeseen environmental and health risks. Concern over fair distribution of the costs and benefits of nanotechnology is also rising
Introduced in 2007, [emphasis mine] NanoEthics: Ethics for Technologies that Converge at the Nanoscale provides a needed forum for informed discussion of ethical and social concerns related to nanotechnology, and a counterbalance to fragmented popular discussion.
While the central focus of the journal is on ethical issues, discussion extends to the physical, biological and social sciences and the law. NanoEthics provides a philosophically and scientifically rigorous examination of ethical and societal considerations and policy concerns raised by nanotechnology.
Abstracted/Indexed in
Science Citation Index Expanded (SciSearch), Journal Citation Reports/Science Edition, Social Science Citation Index, Journal Citation Reports/Social Sciences Edition, SCOPUS, INSPEC, Google Scholar, AGRICOLA, Current Contents / Social & Behavioral Sciences, EBSCO Academic Search, EBSCO Book Review Digest Plus (H.W. Wilson) , EBSCO Discovery Service, EBSCO Humanities Full Text (H.W. Wilson), EBSCO Humanities International, EBSCO Humanities Source, EBSCO Nanotechnology Collection: India , EBSCO OmniFile Full Text (H.W. Wilson), EBSCO STM Source, EBSCO TOC Premier, ERIH PLUS, Ethicsweb, Expanded Academic, Gale, Gale Academic OneFile, Humanities Abstracts, Humanities Index, Materials Business File-Steels Alerts, Mechanical and Transportation Engineering Abstracts, OCLC WorldCat Discovery Service, ProQuest ABI/INFORM, ProQuest Advanced Technologies & Aerospace Database, ProQuest Business Premium Collection, ProQuest Central, ProQuest Health & Medical Collection, ProQuest Health Research Premium Collection, ProQuest Materials Science & Engineering Database, ProQuest Philosophy Database, ProQuest Science Database, ProQuest SciTech Premium Collection, ProQuest Technology Collection, ProQuest-ExLibris Primo, ProQuest-ExLibris Summon, Solid State and Superconductivity Abstracts, The Philosopher’s Index
Here’s the text from the April 16, 2019 email announcement,
Dear colleagues!
We invite papers for a special issue in the journal “NanoEthics: Studies of New and Emerging Technologies”.
AFTER THE HYPE IS BEFORE THE HYPE – FROM BIO TO NANO TO AI: WHAT CAN WE LEARN FROM PUBLIC ENGAGEMENT IN NANOSCIENCES AND NANOTECHNOLOGIES?
Since the early 2000s, nanosciences and nanotechnologies (NST) have been massively promoted in many parts of the world. Two things were striking about these policies: first, the hype surrounding NST; second, the prominence of public engagement–citizen dialogue, deliberation and participation–in NST discourse and policy. Nanotechnology became a laboratory for the programmatic and practical development of a range of forms of public engagement such as “upstream” and “midstream engagement”, or policy approaches that prominently integrate public engagement such as “anticipatory governance”, “real-time technology assessment”, or “responsible research and innovation”.
From bio to nano: A major reason for this noticeable rise of public engagement in NST is the food scandals and technology controversies of the late 1990s, in particular the controversy over genetically modified organisms (GMOs). These controversies came to be seen as the result of elites’ reductionist and arrogant approach to the public. To avoid a similar public backlash against NST, authorities and decision-makers in science and politics should open doors for public engagement and humble dialogue. Obviously, the public crisis around GMOs had triggered a learning process.
From nano to AI: Today, the hype surrounding NST has waned and so have concerns that nanotechnology might fall prey to a public backlash. Nothing comparable to the public backlash against GMOs ever happened to Nano. In fact, NST hardly became controversial. Meanwhile, new technology hypes pervade the public discourse. Synthetic biology, genetic editing or Artificial Intelligence (AI) are recent examples. In each case, we observe parallels to the discourses on public engagement in NST. In the case of AI, for example, prominent researchers and think tanks warn against a public backlash if policy makers and funders fail to foster public support through public engagement.
From bio to nano to AI: We suggest that social learning processes intertwined with technology hypes pervade these and other arenas of technology governance. While the GM controversy had a visible (albeit not yet fully understood) effect on the NST field, today, we ask which lessons can be drawn – and have been drawn by science policy actors – from the NST field? Where do we stand today after 20 years of public engagement in nanotechnology and other emerging technologies, and what is there to learn for the “new governance” of most recently hyped technologies such as AI?
POSSIBLE TOPICS INCLUDE:
Societal effects and social learnings of Public Engagement (PE)
– How can we conceptualize the social learning processes which seem to manifest in technology governance over the past twenty years? Have new patterns of interpretation been established regarding the nature of a successful or failed technology governance? If so, how can they be described and distinguished from the “old” patterns of interpretation?
– Does the fact that NST mostly remained uncontroversial mean that the early emphasis on public engagement in the NST field made it more “socially robust”, “democratic” and “reflexive”? Have the right “lessons” been drawn (from the past for the future)?
– Why and how does the trend toward public engagement manifest itself in different national political cultures? How did certain public engagement formats travel across national borders in the NST policy field?
PE between hype and reflexivity
– What happens after the hype? With enthusiastic/dystopian discourse subsiding, do public engagement activities also wane? What happened to the engagement hype and to ambitious policy metaphors such as “upstream engagement”? Have they been forgotten? Will they reappear, or be reinvented, with the next big techno hype?
– For the social sciences nanotechnology has provided an opportunity to step up research and policy intervention. How can the role/agency of the social sciences in public engagement processes be conceptualized? In which way has this role changed in the past 20 years? Which role conflicts or normative dilemmas arise from it?
PE between strategic and transformative uses
– Did public engagement (ever) make a difference in the governance of NST or other emerging technologies? How have public engagement initiatives been integrated (or ignored) in the governance of NST and other emerging technologies?
– Has public engagement had identifiable impacts on policies or institutions related to NST or other fields of technoscientific discourse and policy? Did public engagement have the effect of problematizing, shifting or even reshaping epistemic and political demarcation lines between the public, scientific expertise and policy subsystems? What can we expect for the future?
Several formats are available. We specifically invite original research papers. In addition, contributions can come in the form of shorter discussion notes, communications and responses, letters, art-science interactions, interviews or anecdotes, and book reviews.
Not being familiar with either of the organizers, I also searched for them online.
Franz Seifert has been an independent social scientist since 2000 according to his CV (on academia.edu). At a guess, he’s based in Austria. I found his CV quite interesting; both it and his list of publications are extensive, all of it related to the topic of the special issue.
Camilo Fautz is a member of the scientific staff at the Karlsruhe Institute of Technology (Germany) and a PhD student, if his profile page is up-to-date. He too has a number of papers on ‘relevant to the special issue’ topics listed on his profile page.
While my main interest is the group’s temporary art gallery, I am providing a brief explanatory introduction and a couple of previews for SIGGRAPH 2018.
Introduction
For anyone unfamiliar with the Special Interest Group on Computer GRAPHics and Interactive Techniques (SIGGRAPH) and its conferences, from the SIGGRAPH Wikipedia entry (Note: Links have been removed),
Some highlights of the conference are its Animation Theater and Electronic Theater presentations, where recently created CG films are played. There is a large exhibition floor, where several hundred companies set up elaborate booths and compete for attention and recruits. Most of the companies are in the engineering, graphics, motion picture, or video game industries. There are also many booths for schools which specialize in computer graphics or interactivity.
Dozens of research papers are presented each year, and SIGGRAPH is widely considered the most prestigious forum for the publication of computer graphics research.[1] The recent paper acceptance rate for SIGGRAPH has been less than 26%.[2] The submitted papers are peer-reviewed in a single-blind process.[3] There has been some criticism about the preference of SIGGRAPH paper reviewers for novel results rather than useful incremental progress.[4][5] …
This is the third SIGGRAPH Vancouver has hosted; the others were in 2011 and 2014. The theme for the 2018 iteration is ‘Generations’; here’s more about it from an Aug. 2, 2018 article by Terry Flores for Variety,
While its focus is firmly forward thinking, SIGGRAPH 2018, the computer graphics, animation, virtual reality, games, digital art, mixed reality, and emerging technologies conference, is also tipping its hat to the past thanks to its theme this year: Generations. The conference runs Aug. 12-16 in Vancouver, B.C.
“In the literal people sense, pioneers in the computer graphics industry are standing shoulder to shoulder with researchers, practitioners and the future of the industry — young people — mentoring them, dabbling across multiple disciplines to innovate, relate, and grow,” says SIGGRAPH 2018 conference chair Roy C. Anthony, VP of creative development and operations at software and technology firm Ventuz. “This is really what SIGGRAPH has always been about. Generations really seemed like a very appropriate way of looking back and remembering where we all came from and how far we’ve come.”
…
SIGGRAPH 2018 has a number of treats in store for attendees, including the debut of Disney’s first VR film, the short “Cycles”; production sessions on the making of “Blade Runner 2049,” “Game of Thrones,” “Incredibles 2” and “Avengers: Infinity War”; as well as sneak peeks of Disney’s upcoming “Ralph Breaks the Internet: Wreck-It Ralph 2” and Laika’s “Missing Link.”
That list of ‘treats’ in the last paragraph makes the conference seem more like an iteration of a ‘comic-con’ than a technology conference.
CHICAGO–In the burgeoning world of virtual reality (VR) technology, it remains a challenge to provide users with a realistic perception of infinite space and natural walking capabilities in the virtual environment. A team of computer scientists has introduced a new approach to address this problem by leveraging a natural human phenomenon: eye blinks.
All humans are functionally blind for about 10 percent of the time under normal circumstances due to eye blinks and saccades, a rapid movement of the eye between two points or objects. Eye blinks are a common and natural cause of so-called “change blindness,” which indicates the inability for humans to notice changes to visual scenes. Zeroing in on eye blinks and change blindness, the team has devised a novel computational system that effectively redirects the user in the virtual environment during these natural instances, all with undetectable camera movements to deliver orientation redirection.
“Previous RDW [redirected walking] techniques apply rotations continuously while the user is walking. But the amount of unnoticeable rotations is limited,” notes Eike Langbehn, lead author of the research and doctoral candidate at the University of Hamburg. “That’s why an orthogonal approach is needed–we add some additional rotations when the user is not focused on the visuals. When we learned that humans are functionally blind for some time due to blinks, we thought, ‘Why don’t we do the redirection during eye blinks?'”
Human eye blinks occur approximately 10 to 20 times per minute, about every 4 to 19 seconds. Leveraging this window of opportunity–where humans are unable to detect major motion changes while in a virtual environment–the researchers devised an approach to synchronize a computer graphics rendering system with this visual process, and introduce any useful motion changes in virtual scenes to enhance users’ overall VR experience.
The researchers’ experiments revealed that imperceptible camera rotations of 2 to 5 degrees and translations of 4 to 9 cm of the user’s viewpoint are possible during a blink without users even noticing. They tracked test participants’ eye blinks by an eye tracker in a VR head-mounted display. In a confirmatory study, the team validated that participants could not reliably detect in which of two eye blinks their viewpoint was manipulated while walking a VR curved path. The tests relied on unconscious natural eye blinking, but the researchers say redirection during blinking could be carried out consciously. Since users can consciously blink multiple times a day without much effort, eye blinks provide great potential to be used as an intentional trigger in their approach.
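For readers curious about the mechanics, here’s a minimal sketch in Python of what blink-triggered redirection might look like. The eye-tracker and camera calls (is_blinking, rotate_yaw, translate_forward) are hypothetical stand-ins, not the team’s actual API; only the perceptual thresholds (up to 5 degrees of rotation and 9 cm of translation per blink) come from the findings described above.

```python
BLINK_MAX_ROTATION_DEG = 5.0    # largest rotation reported as unnoticeable during a blink
BLINK_MAX_TRANSLATION_M = 0.09  # largest translation reported as unnoticeable during a blink

def redirect_on_blink(eye_tracker, camera, pending_rotation_deg, pending_translation_m):
    """Hide as much of the pending redirection as a single blink allows.

    Returns the redirection still owed after this frame.
    """
    if not eye_tracker.is_blinking():  # hypothetical eye-tracker call
        return pending_rotation_deg, pending_translation_m
    # Clamp the correction to what users cannot detect mid-blink.
    rot = max(-BLINK_MAX_ROTATION_DEG,
              min(BLINK_MAX_ROTATION_DEG, pending_rotation_deg))
    trans = max(-BLINK_MAX_TRANSLATION_M,
                min(BLINK_MAX_TRANSLATION_M, pending_translation_m))
    camera.rotate_yaw(rot)             # hypothetical camera calls
    camera.translate_forward(trans)
    return pending_rotation_deg - rot, pending_translation_m - trans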
The team will present their work at SIGGRAPH 2018, held 12-16 August in Vancouver, British Columbia. The annual conference and exhibition showcases the world’s leading professionals, academics, and creative minds at the forefront of computer graphics and interactive techniques.
“RDW is a big challenge since current techniques still need too much space to enable unlimited walking in VR,” notes Langbehn. “Our work might contribute to a reduction of space since we found out that unnoticeable rotations of up to five degrees are possible during blinks. This means we can improve the performance of RDW by approximately 50 percent.”
The team’s results could be used in combination with other VR research, such as novel steering algorithms, improved path prediction, and rotations during saccades, to name a few. Down the road, such techniques could some day enable consumer VR users to virtually walk beyond their living room.
Langbehn collaborated on the work with Frank Steinicke of University of Hamburg, Markus Lappe of University of Muenster, Gregory F. Welch of University of Central Florida, and Gerd Bruder, also of University of Central Florida. For the full paper and video, visit the team’s project page.
###
About ACM, ACM SIGGRAPH, and SIGGRAPH 2018
ACM, the Association for Computing Machinery, is the world’s largest educational and scientific computing society, uniting educators, researchers, and professionals to inspire dialogue, share resources, and address the field’s challenges. ACM SIGGRAPH is a special interest group within ACM that serves as an interdisciplinary community for members in research, technology, and applications in computer graphics and interactive techniques. SIGGRAPH is the world’s leading annual interdisciplinary educational experience showcasing the latest in computer graphics and interactive techniques. SIGGRAPH 2018, marking the 45th annual conference hosted by ACM SIGGRAPH, will take place from 12-16 August at the Vancouver Convention Centre in Vancouver, B.C.
They have provided an image illustrating what they mean (I don’t find it especially informative),
Caption: The viewing behavior of a virtual reality user, including fixations (in green) and saccades (in red). A blink fully suppresses visual perception. Credit: Eike Langbehn
Walt Disney Animation Studios will debut its first ever virtual reality short film at SIGGRAPH 2018, and the hope is viewers will walk away feeling as connected to the characters as to the VR technology involved in making the film.
Cycles, an experimental film directed by Jeff Gipson, centers around the true meaning of creating a home and the life it holds inside its walls. The idea for the film is personal, inspired by Gipson’s childhood spending time with his grandparents and creating memories in their home, and later, having to move them to an assisted living residence.
“Every house has a story unique to the people, the characters who live there,” says Gipson. “We wanted to create a story in this single place and be able to have the viewer witness life happening around them. It is an emotionally driven film, expressing the real ups and downs, the happy and sad moments in life.”
For Cycles, Gipson also drew from his past life as an architect, having spent several years designing skate parks, and from his passion for action sports, including freestyle BMX. In Los Angeles, where Gipson lives, it is not unusual to find homes with an empty swimming pool reserved for skating or freestyle biking. Part of the pitch for Cycles came out of Gipson’s experience riding in these empty pools and being curious about the homes attached to them, the families who lived there, and the memories they made.
SIGGRAPH attendees will have the opportunity to experience Cycles at the Immersive Pavilion, a new space for this year’s conference. The Pavilion is devoted exclusively to virtual, augmented, and mixed reality and will contain: the VR Theater, a storytelling extravaganza that is part of the Computer Animation Festival; the Vrcade, a space for VR, AR, and MR games or experiences; and the well-known Village, for showcasing large-scale projects. SIGGRAPH 2018, held 12-16 August in Vancouver, British Columbia, is an annual gathering that showcases the world’s leading professionals, academics, and creative minds at the forefront of computer graphics and interactive techniques.
The production team completed Cycles in four months with about 50 collaborators as part of a professional development program at the studio. A key difference in VR filmmaking includes getting creative with how to translate a story to the VR “screen.” Pre-visualizing the narrative, for one, was a challenge. Rather than traditional storyboarding, Gipson and his team instead used a mix of Quill VR painting techniques and motion capture to “storyboard” Cycles, incorporating painters and artists to generate sculptures or 3D models of characters early on and draw scenes for the VR space. The creators also got innovative with the use of light and color saturation in scenes to help guide the user’s eyes during the film.
“What’s cool for VR is that we are really on the edge of trying to figure out what it is and how to tell stories in this new medium,” says Gipson. “In VR, you can look anywhere and really be transported to a different world, experience it from different angles, and see every detail. We want people watching to feel alive and feel emotion, and give them a true cinematic experience.”
This is Gipson’s VR directorial debut. He joined Walt Disney Animation Studios in 2013, serving as a lighting artist on Disney favorites like Frozen, Zootopia, and Moana. Of getting to direct the studio’s first VR short, he says, “VR is an amazing technology and a lot of times the technology is what is really celebrated. We hope more and more people begin to see the emotional weight of VR films, and with Cycles in particular, we hope they will feel the emotions we aimed to convey with our story.”
Apparently this is a still from the ‘short’,
Caption: Disney Animation Studios will present ‘Cycles’, its first virtual reality (VR) short, at ACM SIGGRAPH 2018. Credit: Disney Animation Studios
Google has unveiled a new virtual reality (VR) immersive experience based on a novel system that captures and renders high-quality, realistic images from the real world using light fields. Created by a team of leading researchers at Google, Welcome to Light Fields is the tech giant’s splash into the nascent arena of light fields VR experiences, an exciting corner of VR video technology gaining traction for its promise to deliver extremely high-quality imagery and experiences in the virtual world.
Google released Welcome to Light Fields earlier this year as a free app on Steam VR for HTC Vive, Oculus Rift, and Windows Mixed Reality headsets. The creators will demonstrate the VR experience at SIGGRAPH 2018, in the Immersive Pavilion, a new space for this year’s conference. The Pavilion is devoted exclusively to virtual, augmented, and mixed reality and will contain: the Vrcade, a space for VR, AR, and MR games or experiences; the VR Theater, a storytelling extravaganza that is part of the Computer Animation Festival; and the well-known Village, for showcasing large-scale projects. SIGGRAPH 2018, held 12-16 August in Vancouver, British Columbia, is an annual gathering that showcases the world’s leading professionals, academics, and creative minds at the forefront of computer graphics and interactive techniques.
Destinations in Welcome to Light Fields include NASA’s Space Shuttle Discovery, delivering to viewers an astronaut’s view inside the flight deck, which has never been open to the public; the pristine teak and mahogany interiors of the Gamble House, an architectural treasure in Pasadena, CA; and the glorious St. Stephen’s Church in Granada Hills, CA, home to a stunning wall of more than 14,000 pieces of glimmering stained glass.
“I love that light fields in VR can teleport you to exotic places in the real world, and truly make you believe you are there,” says Ryan Overbeck, software engineer at Google who co-led the project. “To me, this is magic.”
To bring this experience to life, Overbeck worked with a team that included Paul Debevec, senior staff engineer at Google, who managed the project and led the hardware piece with engineers Xueming Yu, Jay Busch, and Graham Fyffe. With Overbeck, Daniel Erickson and Daniel Evangelakos focused on the software end. The researchers designed a comprehensive system for capturing and rendering high-quality, spherical light field still images from footage captured in the real world. They developed two easy-to-use light field camera rigs, based on the GoPro Hero4 action sports camera, that efficiently capture thousands of images on the surface of a sphere. Those images were then passed through a cloud-based light-field-processing pipeline.
Among other things, explains Overbeck, “The processing pipeline uses computer vision to place the images in 3D and generate depth maps, and we use a modified version of our vp9 video codec to compress the light field data down to a manageable size.” To render a light field dataset, he notes, the team used a rendering algorithm that blends between the thousands of light field images in real-time.
The team relied on Google’s talented pool of engineers in computer vision, graphics, video compression, and machine learning to overcome the unique challenges posed in light fields technology. They also collaborated closely with the WebM team (who make the vp9 video codec) to develop the high-quality light field compression format incorporated into their system, and leaned heavily on the expertise of the Jump VR team to help pose the images and generate depth maps. (Jump is Google’s professional VR system for achieving 3D-360 video production at scale.)
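For the technically curious, here’s a toy sketch of the kind of real-time blending the release describes: take the captured views nearest a query ray direction and average them, weighted by angular similarity. This is a deliberate simplification for illustration, not Google’s actual light field renderer, which also exploits the reconstructed depth maps.

```python
import numpy as np

def blend_light_field_views(view_dirs, view_images, query_dir, k=4):
    """Blend the k captured views closest to a query direction.

    view_dirs:   (N, 3) unit vectors for camera positions on the capture sphere
    view_images: list of N images as float arrays of identical shape
    query_dir:   direction of the novel ray being rendered
    """
    dirs = np.asarray(view_dirs, dtype=float)
    q = np.asarray(query_dir, dtype=float)
    q /= np.linalg.norm(q)
    sims = dirs @ q                      # cosine similarity to each captured view
    nearest = np.argsort(-sims)[:k]      # indices of the k most aligned views
    weights = np.maximum(sims[nearest], 0.0)
    weights /= weights.sum() + 1e-9
    # A weighted average of the nearest views approximates the novel ray's radiance.
    return sum(w * np.asarray(view_images[i], dtype=float)
               for w, i in zip(weights, nearest))
```

A real renderer does something like this per ray at display rates, which is why the compression format and the engineering effort mentioned below matter so much.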
Indeed, with Welcome to Light Fields, the Google team is demonstrating the potential and promise of light field VR technology, showcasing the technology’s ability to provide a truly immersive experience with a level of unmatched realism. Though light fields technology has been researched and explored in computer graphics for more than 30 years, practical systems for actually delivering high-quality light field experiences have not yet been possible.
Part of the team’s motivation behind creating this VR light field experience is to invigorate the nascent field.
“Welcome to Light Fields proves that it is now possible to make a compelling light field VR viewer that runs on consumer-grade hardware, and we hope that this knowledge will encourage others to get involved with building light field technology and media,” says Overbeck. “We understand that in order to eventually make compelling consumer products based on light fields, we need a thriving light field ecosystem. We need open light field codecs, we need artists creating beautiful light field imagery, and we need people using VR in order to engage with light fields.”
I don’t really understand why this image, which looks like something that belongs in advertising material, would be chosen to accompany a news release on a science-based distribution outlet,
Caption: A team of leading researchers at Google will unveil the new immersive virtual reality (VR) experience “Welcome to Light Fields” at ACM SIGGRAPH 2018. Credit: Image courtesy of Google/Overbeck
Advances in computer-generated imagery have brought vivid, realistic animations to life, but the sounds associated with what we see simulated on screen, such as two objects colliding, are often recordings. Now researchers at Stanford University have developed a system that automatically renders accurate sounds for a wide variety of animated phenomena.
“There’s been a Holy Grail in computing of being able to simulate reality for humans. We can animate scenes and render them visually with physics and computer graphics, but, as for sounds, they are usually made up,” said Doug James, professor of computer science at Stanford University. “Currently there exists no way to generate realistic synchronized sounds for complex animated content, such as splashing water or colliding objects, automatically. This fills that void.”
The researchers will present their work on this sound synthesis system as part of ACM SIGGRAPH 2018, the leading conference on computer graphics and interactive techniques. In addition to enlivening movies and virtual reality worlds, this system could also help engineering companies prototype how products would sound before being physically produced, and hopefully encourage designs that are quieter and less irritating, the researchers said.
“I’ve spent years trying to solve partial differential equations – which govern how sound propagates – by hand,” said Jui-Hsien Wang, a graduate student in James’ lab and in the Institute for Computational and Mathematical Engineering (ICME), and lead author of the paper. “This is actually a place where you don’t just solve the equation but you can actually hear it once you’ve done it. That’s really exciting to me and it’s fun.”
Predicting sound
Informed by geometry and physical motion, the system figures out the vibrations of each object and how, like a loudspeaker, those vibrations excite sound waves. It computes the pressure waves cast off by rapidly moving and vibrating surfaces but does not replicate room acoustics. So, although it does not recreate the echoes in a grand cathedral, it can resolve detailed sounds from scenarios like a crashing cymbal, an upside-down bowl spinning to a stop, a glass filling up with water or a virtual character talking into a megaphone.
Most sounds associated with animations rely on pre-recorded clips, which require vast manual effort to synchronize with the action on-screen. These clips are also restricted to noises that exist – they can’t predict anything new. Other systems that produce and predict sounds as accurate as those of James and his team work only in special cases, or assume the geometry doesn’t deform very much. They also require a long pre-computation phase for each separate object.
“Ours is essentially just a render button with minimal pre-processing that treats all objects together in one acoustic wave simulation,” said Ante Qu, a graduate student in James’ lab and co-author of the paper.
The simulated sound that results from this method is highly detailed. It takes into account the sound waves produced by each object in an animation but also predicts how those waves bend, bounce or deaden based on their interactions with other objects and sound waves in the scene.
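The news release stays away from the math, but the partial differential equation Wang mentions is the acoustic wave equation. As an illustration only (the Stanford system is far more sophisticated, coupling object vibrations to the air as sound sources), here is a minimal explicit finite-difference time step for a 2D pressure field:

```python
import numpy as np

def step_wave_2d(p, p_prev, c=343.0, dx=0.01, dt=1e-5):
    """Advance a 2D pressure field one time step of the acoustic wave
    equation, d2p/dt2 = c^2 * laplacian(p), via explicit finite differences."""
    lap = (np.roll(p, 1, axis=0) + np.roll(p, -1, axis=0) +
           np.roll(p, 1, axis=1) + np.roll(p, -1, axis=1) - 4.0 * p) / dx**2
    p_next = 2.0 * p - p_prev + (c * dt) ** 2 * lap
    return p_next, p  # new field, plus the current one for the next step

# Vibrating surfaces would enter as boundary conditions that force the field;
# the chosen dt and dx satisfy the stability condition c*dt/dx < 1/sqrt(2).
```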
Challenges ahead
In its current form, the group’s process takes a while to create the finished product. But, now that they have proven this technique’s potential, they can focus on performance optimizations, such as implementing their method on parallel GPU hardware, that should make it drastically faster.
And, even in its current state, the results are worth the wait.
“The first water sounds we generated with the system were among the best ones we had simulated – and water is a huge challenge in computer-generated sound,” said James. “We thought we might get a little improvement, but it is dramatically better than previous approaches even right out of the box. It was really striking.”
Although the group’s work has faithfully rendered sounds of various objects spinning, falling and banging into each other, more complex objects and interactions – like the reverberating tones of a Stradivarius violin – remain difficult to model realistically. That, the group said, will have to wait for a future solution.
Timothy Langlois of Adobe Research is a co-author of this paper. This research was funded by the National Science Foundation and Adobe Research. James is also a professor, by courtesy, of music and a member of Stanford Bio-X.
Researchers Timothy Langlois, Doug L. James, Ante Qu and Jui-Hsien Wang have created a video featuring highlights of animations with sounds synthesized using the Stanford researchers’ new system.
The researchers have also provided this image,
By computing pressure waves cast off by rapidly moving and vibrating surfaces – such as a cymbal – a new sound synthesis system developed by Stanford researchers can automatically render realistic sound for computer animations. (Image credit: Timothy Langlois, Doug L. James, Ante Qu and Jui-Hsien Wang)
It does seem like we’re synthesizing the world around us, eh?
SIGGRAPH 2018, the world’s leading showcase of digital art created using computer graphics and interactive techniques, will present a special Art Gallery, entitled “Origins,” and historic Art Papers in Vancouver, B.C. The 45th SIGGRAPH conference will take place 12–16 August at the Vancouver Convention Centre. The programs will also honor the generations of creators that have come before through a special, 50th anniversary edition of the Leonardo journal. To register for the conference, visit S2018.SIGGRAPH.ORG.
The SIGGRAPH 2018 ART GALLERY is a curated exhibition, conceived as a dialogical space that enables the viewer to reflect on man’s diverse cultural values and rituals through contemporary creative practices. Building upon an exciting and eclectic selection of creative practices mediated through technologies that represent the sophistication of our times, the SIGGRAPH 2018 Art Gallery will embrace the narratives of the indigenous communities based near Vancouver and throughout Canada as a source of inspiration. The exhibition will feature contemporary media artworks, art pieces by indigenous communities, and other traces of technologically mediated ludic practices.
Andrés Burbano, SIGGRAPH 2018 Art Gallery chair and professor at Universidad de los Andes, said, “The Art Gallery aims to articulate myth and technology, science and art, the deep past and the computational present, and will coalesce around a theme of ‘Origins.’ Media and technological creative expressions will explore principles such as the origins of the cosmos, the origins of life, the origins of human presence, the origins of the occupation of territories in the Americas, and the origins of people living in the vast territories of the Arctic.”
He continued, “The venue [in Vancouver] hopes to rekindle the original spark that ignited the collaborative spirit of the SIGGRAPH community of engineers, scientists, and artists, who came together to create the very first conference in the early 1970s.”
Highlights from the 2018 Art Gallery include:
Transformation Mask (Canada) [Technology Based]
Shawn Hunt, independent; and Microsoft Garage: Andy Klein, Robert Butterworth, Jonathan Cobb, Jeremy Kersey, Stacey Mulcahy, Brendan O’Rourke, Brent Silk, and Julia Taylor-Hell, Microsoft Vancouver
TRANSFORMATION MASK is an interactive installation that features the Microsoft HoloLens. It utilizes electronics and mechanical engineering to express a physical and digital transformation. Participants are immersed in spatial sounds and holographic visuals.
Somnium (U.S.) [Science Based]
Marko Peljhan, Danny Bazo, and Karl Yerkes, University of California, Santa Barbara
Somnium is a cybernetic installation that provides visitors with the ability to sensorily, cognitively, and emotionally contemplate and experience exoplanetary discoveries, their macro and micro dimensions, and the potential for life in our Galaxy. Some might call it a “space telescope.”
Ernest Edmonds Retrospective – Art Systems 1968-2018 (United Kingdom) [History Based]
Ernest Edmonds, De Montfort University
Celebrating one of the pioneers of computer graphics-based art since the early 1970s, this Ernest Edmonds career retrospective will showcase snapshots of Edmonds’ work as it developed over the years. With one piece from each decade, the retrospective will also demonstrate how vital the Leonardo journal has been throughout the 50-year journey.
In addition to the works above, the Art Gallery will feature pieces from notable female artists Ozge Samanci, Ruth West, and Nicole L’Hullier. For more information about the Edmonds retrospective, read THIS POST ON THE ACM SIGGRAPH BLOG.
The SIGGRAPH 2018 ART PAPERS program is designed to feature research from artists, scientists, theorists, technologists, historians, and more in one of four categories: project description, theory/criticism, methods, or history. The chosen work was selected by an international jury of scholars, artists, and immersive technology developers.
To celebrate the 50th anniversary of LEONARDO (MIT Press), and 10 years of its annual SIGGRAPH issue, SIGGRAPH 2018 is pleased to announce a special anniversary edition of the journal, which will feature the 2018 art papers. For 50 years, Leonardo has been the definitive publication for artist-academics. To learn more about the relationship between SIGGRAPH and the journal, listen to THIS EPISODE OF THE SIGGRAPH SPOTLIGHT PODCAST.
“In order to encourage a wider range of topics, we introduced a new submission type, short papers. This enabled us to accept more content than in previous years. Additionally, for the first time, we will introduce sessions that integrate the Art Gallery artist talks with Art Papers talks, promoting richer connections between these two creative communities,” said Angus Forbes, SIGGRAPH 2018 Art Papers chair and professor at University of California, Santa Cruz.
Art Papers highlights include:
Alienating the Familiar with CGI: A Recipe for Making a Full CGI Art House Animated Feature [Long]
Alex Counsell and Paul Charisse, University of Portsmouth
This paper explores the process of making and funding an art house feature film using full CGI in a marketplace where this has never been attempted. It explores cutting-edge technology and production approaches, as well as routes to successful fundraising.
Augmented Fauna and Glass Mutations: A Dialogue Between Material and Technique in Glassblowing and 3D Printing [Long]
Tobias Klein, City University of Hong Kong
The two presented artworks, “Augmented Fauna” and “Glass Mutations,” were created during an artist residency at the PILCHUCK GLASS SCHOOL. They are examples of the qualities and methods established through a synthesis between digital workflows and traditional craft processes and thus formulate the notion of digital craftsmanship.
Inhabitat: An Imaginary Ecosystem in a Children’s Science Museum [Short]
Graham Wakefield, York University, and Haru Hyunkyung Ji, OCAD University
“Inhabitat” is a mixed reality artwork in which participants become part of an imaginary ecology through three simultaneous perspectives of scale and agency; three distinct ways to see with other eyes. This imaginary world was exhibited at a children’s science museum for five months, using an interactive projection-augmented sculpture, a large screen and speaker array, and a virtual reality head-mounted display.
What’s the what?
My father used to say that and I always assumed it meant summarize the high points, if you need to, and get to the point—fast. In that spirit, I am both fascinated and mildly appalled. The virtual, mixed, and augmented reality technologies, as well as the others being featured at SIGGRAPH 2018, are wondrous in many ways, but it seems we are coming ever closer to a world where we no longer interact with nature or other humans directly. (See my August 10, 2018 posting about the ‘extinction of experience’ for research that encourages more direct interaction with nature.) I realize that SIGGRAPH is intended as a primarily technical experience, but I think a little more content questioning these technologies and their applications (social implications) might be in order. That’s often the artist’s role, but I can’t see anything in the art gallery descriptions that hints at any sort of fundamental critique.
Part 1 featured my commentary on both Calestous Juma’s 2016 book, “Innovation and Its Enemies; Why People Resist New Technologies,” and Melanie Keene’s 2015 book, “Science in Wonderland; The Scientific Fairy Tales of Victorian Britain.” Now for an emerging technology, genetically modified fish (AquAdvantage salmon), and my final comments on the books and the contrasting ways the adoption of new technologies and science is presented.
Fish
AquAdvantage salmon features as one of Calestous Juma’s contemporary emerging technologies. I mentioned the fish here in a May 20, 2016 posting when the fish was approved for consumption in Canada; this followed an earlier mention in a Dec. 4, 2015 posting when the US Food and Drug Administration (FDA) approved the salmon for consumption in the US (from the 2015 posting),
…
For the final excerpt from the December 2015 issue, there’s this about genetically engineered salmon,
Genetically Modified Salmon: Coming to a River Near You?
After nearly 20 years of effort, the Food and Drug Administration has approved genetically engineered salmon produced by AquaBounty Technologies as fit for consumption, and it will not have to be labeled as genetically engineered. This salmon is capable of growing twice as fast as a non-engineered farmed salmon, in as little as half of the time; however, it’s still likely to be at least two years before these salmon reach supermarkets. Some groups are concerned about the environmental implications should these salmon accidentally get released, or escape, into the wild, even though AquaBounty says its salmon will be all female and sterile.
AquaBounty’s salmon (background) has been genetically modified to grow bigger and faster than a conventional Atlantic salmon of the same age (foreground.) Courtesy of AquaBounty Technologies, Inc. [downloaded from http://www.npr.org/sections/thesalt/2015/06/24/413755699/genetically-modified-salmon-coming-to-a-river-near-you]
The link from the newsletter points to a June 24, 2015 article by Jessie Rack for US National Public Radio’s Salt on the Table program (Note: Links have been removed),
One concern repeatedly raised by critics who don’t want the FDA to give the transgenic fish the green light: What would happen if these fish got out of the land-based facilities where they’re grown and escaped into the wild? Would genetically modified salmon push out their wild counterparts or permanently alter habitat? In a review paper published this month in the journal BioScience, scientists tackle that very question.
Robert H. Devlin, a scientist at Fisheries and Oceans Canada, led a team that reviewed more than 80 studies analyzing growth, behavior and other trait differences between genetically modified and unaltered fish. The scientists used this to predict what might happen if fish with modified traits were unleashed in nature.
Genetically modified salmon contain the growth hormone gene from one fish, combined with the promoter of an antifreeze gene from another. This combination both increases and speeds up growth, so the salmon grow faster.
Altering a fish’s genes also changes other traits, the review found. Genetically modified salmon eat more food, spend more time near the surface of the water, and don’t tend to associate in groups. They develop at a dramatically faster rate, and their immune function is reduced.
But would these altered traits help genetically modified salmon outcompete wild salmon, while at the same time making them less likely to thrive in nature? It’s unclear, says Fredrik Sundström, one of the study authors and an ecologist at Uppsala University in Sweden.
You may note that the lead researcher for the literature review, a Canadian scientist, was not quoted. This is likely due to the muzzle the Conservative government (still in power in June 2015) had applied to government scientists.
One last thing about AquAdvantage salmon, there is a very good Dec. 3, 2015 posting by Meredith Hamel focusing on their Canadian connections on her BiologyBizarre blog/magazine (Note: A link has been removed),
“For the first time anywhere in the world, a genetically engineered animal has been approved for human consumption” announced Peter Mansbridge on CBC [Canadian Broadcasting Corporation] news on November 20 [2015]. Members of society do not agree on how genetically modified fruits and vegetables should be labelled, if at all, but we are already moving on to genetically modified animals for human consumption. The AquAdvantage salmon by the US company AquaBounty can grow quicker and go to market twice as fast as regular farmed salmon using less feed. This genetically engineered salmon, whose fertilized eggs are produced at an inland facility in P.E.I [Prince Edward Island], Canada [emphasis mine] and raised at a facility in Panama, has been approved by the FDA after a long 20 year wait. AquAdvantage salmon could be the first genetically engineered meat we eat but opposition to approving it in Canada shows this salmon is not yet finished swimming against the current.
She goes on to describe in detail how these salmon are created (not excerpted here) and pinpoints another Canadian connection and political ramifications (Note: Links have been removed),
Head of Ocean Sciences Department at Memorial University [province of Newfoundland and Labrador], Garth Fletcher told The Star he was happy to see his creation get approved as he didn’t think approval would happen in his lifetime. Fletcher is no longer involved with AquaBounty but began working on this growth improved transgenic fish with other scientists back in 1982. On CBC news he said “the risk is as minimal as you could ever expect to get with any product.”
While the salmon is not approved in Canada for human consumption, some grocery store chains have already boycotted AquAdvantage salmon. The first step, the production of eggs in P.E.I has been approved by the federal government. Now there is a court battle with British Columbia’s Living Oceans Society and Nova Scotia’s Ecology Action Centre together challenging the federal government’s approval. They are concerned AquAdvantage salmon would be toxic to the environment as an invasive species if they were to escape and that this was not adequately assessed. Secondly they argue that Environment Canada had a duty to inform the public but failed to do so.
Natalie Huneault at Environment Canada told the National Observer, “there were no concerns identified to the environment or to the indirect health of Canadians due to the contained production of these GM fish eggs for export.”
Anastasia Bodnar over on Biology Fortified does an excellent job of going through the risks and mitigation of AquAdvantage salmon (here and here) both with respect to safety of eating this meat product as well as in preventing escapee transgenic fish from contaminating wild salmon populations. The Fisheries and Oceans Canada document containing assessment of risks to the environment and health are found here. Due to the containment facility and procedures there is extremely low likelihood that any fertile genetically modified salmon would escape to an area where it could survive and reproduce.
The failure of Environment Canada to properly inform and have a discussion with the public before approving the P.E.I. fertilized egg production facility will certainly have increased public mistrust and fear of this genetically engineered salmon. I think that if the public feel that this step has already taken place behind their back, future discussion about approving genetically engineered salmon as safe to eat is only going to be met with suspicion.
…
Since the 2016 approval, 4.5 tonnes of AquAdvantage salmon have been sold in Canada, according to an Aug. 8, 2017 article by Sima Shakeri for Huffington Post (Note: Links have been removed),
After decades of trying to get approval in North America, genetically modified Atlantic salmon has been sold to consumers in Canada.
AquaBounty Technologies, an American company that produces the Atlantic salmon, confirmed it had sold 4.5 tonnes of the modified fish on August 4 [2017], the Scientific American reported.
The fish have been engineered with a growth hormone gene from Chinook salmon to grow faster than regular salmon and require less food. They take about 18 months to reach market size, which is much quicker than the 30 months or so for conventional salmon.
The Washington Post wrote that AquaBounty’s salmon also contains a gene from the ocean pout that makes the salmon produce the growth hormone all year round.
The company produces the eggs in a facility in P.E.I., which is currently being expanded, and then they’re shipped to Panama where the fish are raised.
Health Canada assessed the AquAdvantage salmon and concluded it “did not pose a greater risk to human health than salmon currently available on the Canadian market,” and that it would have no impact on allergies nor a difference in nutritional value compared to other farmed salmon.
Because of that, the AquAdvantage product is not required to be specially labelled as genetically modified, and is up to the discretion of retailers.
Scientific American has reproduced a piece by Emily Waltz, originally published in Nature on August 4, 2017, the date Canadian consumers discovered the fish was being sold. From the Aug. 7, 2017 Scientific American republication (Note: A link has been removed),
AquaBounty’s gruelling path from scientific discovery to market terrified others working in animal biotechnology, and almost put the company out of business on several occasions. Scientists first demonstrated the fast-growing fish in 1989. They gave it a growth-hormone gene from Chinook salmon (Oncorhynchus tshawytscha), along with genetic regulatory elements from a third species, the ocean pout (Zoarces americanus). The genetic modifications enable the salmon to produce a continuous low level of growth hormone.
AquaBounty formed around the technology in the early 1990s and approached regulators in the United States soon after. It then spent almost 25 years in regulatory limbo. The US Food and Drug Administration (FDA) approved the salmon for consumption in November 2015, and Canadian authorities came to the same decision six months later. Neither country requires the salmon to be labelled as genetically engineered.
But unlike in Canada, political battles in the United States have stalled the salmon’s entry into the marketplace. …
Activists in both the United States and Canada have demanded that regulators reconsider their decisions, and some have filed lawsuits. …
Waltz includes this quote from an interested party,
The sale of the fish follows a long, hard-fought battle to navigate regulatory systems and win consumer acceptance. “Somebody’s got to be first and I’m glad it was them and not me,” says James West, a geneticist at Vanderbilt University in Nashville, Tennessee, who co-founded AgGenetics, a start-up company in Nashville that is engineering cattle for the dairy and beef industries. “If they had failed, it might have killed the engineered livestock industry for a generation,” he says.
Canadians don’t necessarily respond in the same way that Americans do. The stem cell controversies to the south of us never reached the same fury and pitch here, although there were some significant impacts felt by the research community. Similarly, the GMO (genetically modified organisms) controversies were felt here but to nowhere near the same degree as in Europe. That doesn’t mean there won’t be problems this time, but trying to determine how Canadians are likely to respond can be tricky, especially when most of us don’t know much about GMO foods, as Maham Abedi notes in her August 9, 2017 article for Global TV news (Note: Links have been removed),
On Wednesday [Aug. 9, 2017], an Angus Reid survey revealed that most Canadians admit they don’t know much about genetically modified organisms, but still want more transparency.
Of the 1,512 respondents, 24 per cent said they had “never heard of them” or only heard the term, 60 per cent said they “know a little bit about” GMO food, while only 16 per cent were “very familiar” with what it entails.
However, 83 per cent of Canadians surveyed said at least some GMO food labelling should [be] mandatory in grocery stores.
The report echoes 2016 Health Canada findings that Canadians’ opinions on the products were defined by “confusion, misinformation, and generally low awareness/understanding.”
…
The Angus Reid survey was conducted between June 8-13, 2017 [emphasis mine], by 1,512 Canadian adults. It is considered accurate +/- 2.5 percentage points, 19 times out of 20.
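As a quick check, the pollster’s stated margin is what the standard formula gives for a sample of 1,512 at a 95 per cent confidence level (“19 times out of 20”):

```python
import math

n = 1512
z = 1.96                               # 95% confidence ("19 times out of 20")
moe = z * math.sqrt(0.5 * 0.5 / n)     # worst case at an observed proportion of 0.5
print(f"+/- {100 * moe:.1f} percentage points")  # -> +/- 2.5
```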
It’s hard to know how “confusion, misinformation, and generally low awareness/understanding,” is going to play out but it doesn’t seem a good idea to just sneak GMO salmon into the Canadian marketplace. Notably, Juma argues for more public education in his book and while it might not smooth the path as much as he and other innovation enthusiasts might prefer, it certainly couldn’t hurt.
It might also be useful to consider the idea that not all resistance is bad and to be avoided. Tess Doezema in her April 26, 2017 article (Skepticism About Biotechnology Isn’t Anti-Science) presents a persuasive argument suggesting that public concerns don’t deserve to be dismissed (Note: Links have been removed),
…
To many in bioscience and biotechnology circles, this [AquAdvantage salmon] is a case of politics contaminating science. In an open letter to President Obama in 2014, a group of “concerned international scientists and global technology company executives” argue this point:
The American people, and indeed all people everywhere, are best served by a trusted objective regulatory process truly based on sound science, a system which can be counted upon to evaluate and act on the applications it receives without fear of political interference.
These scientists and others offer a picture of a Manichean world divided into those who are for scientific and technological progress and those who are against it—a representation of the world that we have been seeing more and more of lately in reports of a “war on science.” But drawing this line is dangerous. The real problem here is the regulatory process itself, which forces dissent to take the narrow form of challenges to scientific data and methodology and ignores other questions about what’s at stake.
The FDA approval process for the AquAdvantage salmon took longer and included more opportunities for public comment than most products the FDA reviews. This unique openness to public input was balanced by a careful parsing of what counts as scientifically and contextually relevant and what does not. The agency received 38,000 comments in response to its draft assessment alone, but it determined that just 90 were worth considering [emphases mine]. The remaining comments were discounted as irrelevant because they did not directly address the details of the regulation process, or they raised issues beyond the mandate of the agency. These disregarded comments focused on a wide range of concerns, including patenting and ownership regimes of seed and crops; how deploying genetically modified corn and soy would affect the United States’ image around the world; continuing failures of existing market configurations to address inequality and food distribution; and the long history of multinational corporations central to the commercialization of biotechnologies, such as Monsanto, intentionally obscuring the negative impacts of their chemical products and byproducts while undermining human health.
…
Some might read the vast public preoccupation with a broad set of social, political, and economic issues as the contamination of science with politics. But I would suggest that this is actually a case of the reverse problem: seemingly endless conflict around the AquAdvantage salmon reflects the limitation of using narrow scientific terms to address questions of broad social, political, and economic significance. As things stand, the only legitimate way to engage in debates about the entry of the AquAdvantage salmon and other genetically modified organisms into our environments, meals, intellectual property regimes, and beyond is to contest its approval at the level of regulatory science. When the system asks the public to limit objections to narrow technical concerns, it undermines regulatory legitimacy and stultifies democratic debate—and perhaps most importantly, it contributes to the problematic discourse around science itself. When our modes of public deliberation strictly define what counts as a legitimate view on these issues, we end up portraying a good portion of the population as “against science,” when that in fact could not be further from the truth.
…
To position science on one side of these debates is not only patently false but detrimental to public discourse.
… Synthetic biology is billed as having the potential to transform the world in a way that will disrupt prevailing economic and geopolitical paradigms and “reshape the very fabric of life.” The one thing both sides of the fishy debate seem to agree on is that the AquAdvantage salmon is a “pioneer” technology, and what happens to this fish could set the stage for the role that biotechnology will play in our food system in the century to come. As one commentator opined for the New York Times:
We should all be rooting for the agency to do the right thing and approve the AquAdvantage salmon. It’s a healthy and relatively cheap food source that, as global demand for fish increases, can take some pressure off our wild fish stocks. But most important, a rejection will have a chilling effect on biotechnological innovation in this country. …
This framing suggests that biotechnological innovation is a necessary and unmitigated good. But for many, the prospect of a world radically altered by biotechnology conjures past experiences in which scientific “progress” didn’t go as planned—like the devastation and political instability ushered in by nuclear weapons. Similarly, to some, a dam looks like progress, development, and economic prosperity. But to others, it looks like the violent end of a way of life, heralded by the destruction of ecosystems and entire species.
…
Characterizing legitimate concerns about what kinds of technologies enter and help shape our world as “anti-science” is more likely to alienate than inspire “everyday Americans to identify with this vision of what science can do, and to believe in it.”
… perhaps we can make it productive in one way. Understanding the limitations of the process can help us think critically about how decision-making about synthetic biology going forward might be more open to a broader set of concerns and voices much earlier in the innovation process. The way forward is not drawing battle lines between those who are “for” or “against” science and closing down regulatory processes to all but the narrowest risk-based considerations. Rather, we should be forming and expanding spaces for a wide range of participants in creatively considering how to solve society’s biggest challenges. We need new ways of thinking and talking about technological promise and possibility in the world that we live in. [emphasis mine]
While Doersma is appealing to a US audience, her argument applies just as well internationally.
Final comments
Juma’s “Innovation and Its Enemies” and Keene’s “Science in Wonderland” are both worthwhile reads, but it should be noted that Juma’s is the more ambitious. Keene looks back, expanding the perspective on a previously mined area of children’s literature in a way that hints at possible implications for our own time.
For example, I think contemporary audiences might want to consider how much science, technology, and mathematics finds its way into our ‘fairy tales’ or superhero, space adventure, cartoon, and other popular stories of today. Iron Man and his colleagues in one of the Avengers’ movies faced off with a robot/artificial intelligence entity, Ultron, suggesting potential existential risk; Star Trek’s impact on today’s technologies is widely acknowledged; and The Simpsons, a US animated programme, regularly embeds mathematics in its stories.
Juma examines history while attempting to extrapolate lessons for the future. It’s a courageous and worthwhile effort. While I’m not entirely comfortable with his top-down approach, he knits together a comprehensive programme for policy makers and makes two points that I believe are too often overlooked: more agility is needed, and these are global issues.
There’s more than one way to approach the introduction of emerging technologies and sciences to ‘the public’. Calestous Juma, in his 2016 book “Innovation and Its Enemies: Why People Resist New Technologies”, takes a direct approach, as can be seen from the title, while Melanie Keene’s 2015 book, “Science in Wonderland: The Scientific Fairy Tales of Victorian Britain”, presents a more fantastical one. The fish in the headline tie together, thematically and tenuously, both books with a real-life situation.
Innovation and Its Enemies
Calestous Juma, the author of “Innovation and Its Enemies”, has impressive credentials,
Professor of the Practice of International Development,
Director of the Science, Technology, and Globalization Project at Harvard Kennedy School’s Belfer Center for Science and International Affairs,
Founding Director of the African Centre for Technology Studies in Nairobi (Kenya),
Fellow of the Royal Society of London, and
Foreign Associate of the US National Academy of Sciences.
Even better, Juma is an excellent storyteller, perhaps too much so for a book which presents a series of science and technology adoption case histories. (Given the range of historical time periods, geography, and the innovations themselves, he always has to stop short.) The breadth is breathtaking and Juma manages it with aplomb. For example, the innovations covered include coffee, electricity, mechanical refrigeration, margarine, recorded sound, farm mechanization, and the printing press. He also covers two recently emerging technologies/innovations: transgenic crops and the AquAdvantage salmon (more about the salmon later).
Juma provides an analysis of the various ways in which the public and institutions panic over innovation and goes on to offer solutions. He also injects a subtle note of humour from time to time. Here’s how Juma describes various countries’ responses to risks and benefits,
In the United States products are safe until proven risky.
In France products are risky until proven safe.
In the United Kingdom products are risky even when proven safe.
In India products are safe when proven risky.
In Canada products are neither safe nor risky.
In Japan products are either safe or risky.
In Brazil products are both safe and risky.
In sub-Saharan Africa products are risky even if they do not exist. (pp. 4-5)
To Calestous Juma, thank you for mentioning Canada and for so aptly describing the quintessentially Canadian approach to not just products and innovation but to life itself: ‘we just don’t know; it could be this or it could be that or it could be something entirely different; we just don’t know and probably will never know.’
One of the aspects that I most appreciated in this book was the broadening of the geographical perspective on innovation and emerging technologies to include the Middle East, China, and other regions/countries. As I’ve noted in past postings, much of the discussion here in Canada is Eurocentric and/or US-centric. For example, the Council of Canadian Academies, which conducts assessments of various science questions at the request of Canadian federal and regional governments, routinely fills the ‘international’ slot(s) on its expert panels with academics from Europe (mostly Great Britain) and/or the US (or sometimes from Australia and/or New Zealand).
A good example of Juma’s expanded perspective on emerging technology is offered in Art Carden’s July 7, 2017 book review for Forbes.com (Note: A link has been removed),
In the chapter on coffee, Juma discusses how Middle Eastern and European societies resisted the beverage and, in particular, worked to shut down coffeehouses. Islamic jurists debated whether the kick from coffee is the same as intoxication and therefore something to be prohibited. Appealing to “the principle of original permissibility — al-ibaha, al-asliya — under which products were considered acceptable until expressly outlawed,” the fifteenth-century jurist Muhamad al-Dhabani issued several fatwas in support of keeping coffee legal.
This wasn’t the last word on coffee, which was banned and permitted and banned and permitted and banned and permitted in various places over time. Some rulers were skeptical of coffee because it was brewed and consumed in public coffeehouses — places where people could indulge in vices like gambling and tobacco use or perhaps exchange unorthodox ideas that were a threat to their power. It seems absurd in retrospect, but political control of all things coffee is no laughing matter.
The bans extended to Europe, where coffee threatened beverages like tea, wine, and beer. Predictably, and all in the name of public safety (of course!), European governments with the counsel of experts like brewers, vintners, and the British East India Tea Company regulated coffee importation and consumption. The list of affected interest groups is long, as is the list of meddlesome governments. Charles II of England would issue A Proclamation for the Suppression of Coffee Houses in 1675. Sweden prohibited coffee imports on five separate occasions between 1756 and 1817. In the late seventeenth century, France required that all coffee be imported through Marseilles so that it could be more easily monopolized and taxed.
Carden, who teaches economics at Samford University (Alabama, US), focuses on issues of individual liberty and the rule of law with regard to innovation. I can appreciate the need to focus tightly when you have a limited word count, but Carden could have spared a few words to do more justice to Juma’s comprehensive and focused work.
At the risk of being accused of the fault I’ve attributed to Carden, I must mention the printing press chapter. While it was good to see a history of the printing press and its attendant social upheavals, noting its impact and development in regions other than Europe, it was shocking to someone educated in Canada to find Marshall McLuhan entirely ignored. Even now, I believe it’s virtually impossible to discuss the printing press as a technology, in Canada anyway, without mentioning our ‘communications god’ Marshall McLuhan and his 1962 book, The Gutenberg Galaxy.
Getting back to Juma’s book, his breadth and depth of knowledge, history, and geography is packaged in a relatively succinct 316 pp. As a writer, I admire his ability to distill the salient points and to devote chapters to two emerging technologies. It’s notoriously difficult to write about a currently emerging technology, and Juma even managed to include a reference published only months (in early 2016) before “Innovation and Its Enemies” was published in July 2016.
Irrespective of Marshall McLuhan, I feel there are a few flaws. The book is intended for policy makers and industry (lobbyists, anyone?), and it reaffirms a tendency (in academia, industry, and government) toward a top-down approach to eliminating resistance. From Juma’s perspective, there needs to be better science education because no one who is properly informed should have any objections to an emerging/new technology. Juma never considers the possibility that resistance to a new technology might be a reasonable response. As well, while there was some mention of corporate resistance to new technologies which might threaten profits and revenue, Juma offered no comment on how corporate sovereignty and/or intellectual property regimes are used, quite successfully, to stifle innovation.
My concerns aside, testimony to the book’s worth is Carden’s review almost a year after publication. As well, Sir Peter Gluckman, Chief Science Advisor to the Prime Minister of New Zealand, mentions Juma’s book in his January 16, 2017 talk, Science Advice in a Troubled World, for the Canadian Science Policy Centre.
Science in Wonderland
Melanie Keene’s 2015 book, “Science in Wonderland: The Scientific Fairy Tales of Victorian Britain”, provides an overview of the fashion for writing and reading scientific and mathematical fairy tales and, inadvertently, of a public education programme,
A fairy queen (Victoria) sat on the throne of Victorian Britain, and she presided over a fairy tale age. The nineteenth century witnessed an unprecedented interest in fairies and in their tales, as they were used as an enchanted mirror in which to reflect on, question, and distort contemporary society. … Fairies could be found disporting themselves throughout the century on stage and page, in picture and print, from local haunts to global transports. There were myriad ways in which authors, painters, illustrators, advertisers, pantomime performers, singers, and more, captured this contemporary enthusiasm and engaged with fairyland and folklore; books, exhibitions, and images for children were one of the most significant. (p. 13)
…
… Anthropologists even made fairies the subject of scientific analysis, as ‘fairyology’ determined whether fairies should be part of natural history or part of supernatural lore; just one aspect of the revival of interest in folklore. Was there a tribe of fairy creatures somewhere out there waiting to be discovered, across the globe or in the fossil record? Were fairies some kind of folk memory of an extinct race? (p. 14)
…
Scientific engagement with fairyland was widespread, and not just as an attractive means of packaging new facts for Victorian children. … The fairy tales of science had an important role to play in conceiving of new scientific disciplines; in celebrating new discoveries; in criticizing lofty ambitions; in inculcating habits of mind and body; in inspiring wonder; in positing future directions; and in the consideration of what the sciences were, and should be. A close reading of these tales provides a more sophisticated understanding of the content and status of the Victorian sciences; they give insights into what these new scientific disciplines were trying to do; how they were trying to cement a certain place in the world; and how they hoped to recruit and train new participants. (p. 18)
Segue: Should you be inclined to believe that society has moved on from fairies, it is possible to become a certified fairyologist (check out the fairyologist.com website).
“Science in Wonderland,” the title being a reference to Lewis Carroll’s Alice, was marketed quite differently from “Innovation and Its Enemies”. There is no description of the author, as is the protocol in academic tomes, so here’s more from her webpage on the University of Cambridge (Homerton College) website,
Role:
Fellow, Graduate Tutor, Director of Studies for History and Philosophy of Science
Getting back to Keene’s book, she makes the point that the fairy tales were based on science and integrated scientific terminology in imaginative ways, although some books did so with more success than others. Topics ranged from paleontology, botany, and astronomy to microscopy and more.
This book provides a contrast to Juma’s direct focus on policy makers with its overview of the fairy narratives. Keene is primarily interested in children but her book casts a wider net “… they give insights into what these new scientific disciplines were trying to do; how they were trying to cement a certain place in the world; and how they hoped to recruit and train new participants.”
In a sense both authors are describing how technologies are introduced and integrated into society. Keene provides a view that must seem almost halcyon to many contemporary innovation enthusiasts. As her topic area is children’s literature, any resistance she notes is primarily literary, invoking a debate about whether or not science was killing imagination and whimsy.
It would probably help if you’d taken a course in children’s literature of the 19th century before reading Keene’s book. Even if you haven’t, the book is still quite accessible, although I was left wondering about ‘Alice in Wonderland’ and its relationship to mathematics (see Melanie Bayley’s December 16, 2009 story for the New Scientist for a detailed rundown).
As an added bonus, fairy tale illustrations are included throughout the book along with a section of higher quality reproductions.
One of the unexpected delights of Keene’s book was the section on L. Frank Baum and his electricity fairy tale, “The Master Key.” She stretches to include “The Wizard of Oz,” which doesn’t really fit, but I can’t see how she could avoid mentioning Baum’s most famous creation. There’s also a surprising (to me) focus on water, which, when paired with the interest in microscopy, makes sense. Keene isn’t the only one who has to stretch to make things fit into her narrative, and so from water I move on to fish, bringing me back to one of Juma’s emerging technologies.
I have three news bits about legal issues that are arising as a consequence of emerging technologies.
Deep neural networks, art, and copyright
Caption: The rise of automated art opens new creative avenues, coupled with new problems for copyright protection. Credit: Alexander Mordvintsev, Christopher Olah and Mike Tyka
Presumably this artwork is a demonstration of automated art, although they never really explain how in the news item/news release. An April 26, 2017 news item on ScienceDaily announces research into copyright and the latest in using neural networks to create art,
In 1968, sociologist Jean Baudrillard wrote on automatism that “contained within it is the dream of a dominated world […] that serves an inert and dreamy humanity.”
With the growing popularity of Deep Neural Networks (DNN’s), this dream is fast becoming a reality.
Dr. Jean-Marc Deltorn, researcher at the Centre d’études internationales de la propriété intellectuelle in Strasbourg, argues that we must remain a responsive and responsible force in this process of automation — not inert dominators. As he demonstrates in a recent Frontiers in Digital Humanities paper, the dream of automation demands a careful study of the legal problems linked to copyright.
For more than half a century, artists have looked to computational processes as a way of expanding their vision. DNN’s are the culmination of this cross-pollination: by learning to identify a complex number of patterns, they can generate new creations.
These systems are made up of complex algorithms modeled on the transmission of signals between neurons in the brain.
DNN creations rely in equal measure on human inputs and the non-human algorithmic networks that process them.
Inputs are fed into the system, which is layered. Each layer provides an opportunity for a more refined knowledge of the inputs (shape, color, lines). Neural networks compare actual outputs to expected ones, and correct the predictive error through repetition and optimization. They train their own pattern recognition, thereby optimizing their learning curve and producing increasingly accurate outputs.
The deeper the layers are, the higher the level of abstraction. The highest layers are able to identify the contents of a given input with reasonable accuracy, after extended periods of training.
Creation thus becomes increasingly automated through what Deltorn calls “the arcane traceries of deep architecture”. The results are sufficiently abstracted from their sources to produce original creations that have been exhibited in galleries, sold at auction and performed at concerts.
The originality of DNN’s is a combined product of technological automation on one hand, human inputs and decisions on the other.
DNN’s are gaining popularity. Various platforms (such as DeepDream) now allow internet users to generate their very own new creations. This popularization of the automation process calls for a comprehensive legal framework that ensures a creator’s economic and moral rights with regards to his work – copyright protection.
Form, originality and attribution are the three requirements for copyright. And while DNN creations satisfy the first of these three, the claim to originality and attribution will depend largely on a given country’s legislation and on the traceability of the human creator.
Legislation usually sets a low threshold to originality. As DNN creations could in theory be able to create an endless number of riffs on source materials, the uncurbed creation of original works could inflate the existing number of copyright protections.
Additionally, a small number of national copyright laws confers attribution to what UK legislation defines loosely as “the person by whom the arrangements necessary for the creation of the work are undertaken.” In the case of DNN’s, this could mean anybody from the programmer to the user of a DNN interface.
Combined with an overly supple take on originality, this view on attribution would further increase the number of copyrightable works.
The risk, in both cases, is that artists will be less willing to publish their own works, for fear of infringement of DNN copyright protections.
In order to promote creativity – one seminal aim of copyright protection – the issue must be limited to creations that manifest a personal voice “and not just the electric glint of a computational engine,” to quote Deltorn. A delicate act of discernment.
DNN’s promise new avenues of creative expression for artists – with potential caveats. Copyright protection – a “catalyst to creativity” – must be contained. Many of us gently bask in the glow of an increasingly automated form of technology. But if we want to safeguard the ineffable quality that defines much art, it might be a good idea to hone in more closely on the differences between the electric and the creative spark.
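The release describes the training loop only in metaphor: inputs are fed forward through layers, actual outputs are compared to expected ones, and the predictive error is corrected through repetition and optimization. For readers who’d like to see that loop concretely, here’s a minimal sketch in Python with NumPy. To be clear, the two-layer network, the XOR data, the learning rate, and every name in it are my own illustration, not anything drawn from Deltorn’s paper or the news release.

```python
import numpy as np

# A tiny two-layer network learning XOR. This is the loop the news
# release describes: feed inputs forward through layers, compare the
# actual outputs to the expected ones, then correct the predictive
# error through repetition (epochs) and optimization (gradient descent).

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # expected outputs

W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))  # lower layer: raw features
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))  # deeper layer: more abstract

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(10000):
    # forward pass: each layer produces a more refined view of the input
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # compare actual outputs to expected ones
    error = out - y

    # backward pass: adjust weights and biases to shrink the predictive error
    grad_out = error * out * (1 - out)
    grad_h = (grad_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * (h.T @ grad_out)
    b2 -= 0.5 * grad_out.sum(axis=0, keepdims=True)
    W1 -= 0.5 * (X.T @ grad_h)
    b1 -= 0.5 * grad_h.sum(axis=0, keepdims=True)

print(np.round(out, 2))  # should approach [[0], [1], [1], [0]]
```

Scaled up from four hidden units to millions of weights and many more layers, the same compare-and-correct cycle is what produces the increasingly abstract representations, and the gallery-ready outputs, that the release describes.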
The Fifth Annual Conference on Governance of Emerging Technologies: Law, Policy and Ethics, held at the new Beus Center for Law & Society in Phoenix, AZ, May 17-19, 2017
Call for Abstracts – Now Closed
The conference will consist of plenary and session presentations and discussions on regulatory, governance, legal, policy, social and ethical aspects of emerging technologies, including (but not limited to) nanotechnology, synthetic biology, gene editing, biotechnology, genomics, personalized medicine, human enhancement technologies, telecommunications, information technologies, surveillance technologies, geoengineering, neuroscience, artificial intelligence, and robotics. The conference is premised on the belief that there is much to be learned and shared from and across the governance experience and proposals for these various emerging technologies.
Keynote Speakers:
Gillian Hadfield, Richard L. and Antoinette Schamoi Kirtland Professor of Law and Professor of Economics USC [University of Southern California] Gould School of Law
Shobita Parthasarathy, Associate Professor of Public Policy and Women’s Studies, Director, Science, Technology, and Public Policy Program University of Michigan
Stuart Russell, Professor at [University of California] Berkeley, is a computer scientist known for his contributions to artificial intelligence
Craig Shank, Vice President for Corporate Standards Group in Microsoft’s Corporate, External and Legal Affairs (CELA)
Plenary Panels:
Innovation – Responsible and/or Permissionless
Ellen-Marie Forsberg, Senior Researcher/Research Manager at Oslo and Akershus University College of Applied Sciences
Adam Thierer, Senior Research Fellow with the Technology Policy Program at the Mercatus Center at George Mason University
Andrew Maynard, Senior Sustainability Scholar, Julie Ann Wrigley Global Institute of Sustainability; Director, Risk Innovation Lab; Professor, School for the Future of Innovation in Society, Arizona State University
Gary Marchant, Regents’ Professor of Law; Faculty Director and Faculty Fellow, Center for Law, Science & Innovation, Arizona State University
Anupam Chander, Martin Luther King, Jr. Professor of Law and Director, California International Law Center, UC Davis School of Law
Pilar Ossorio, Professor of Law and Bioethics, University of Wisconsin, School of Law and School of Medicine and Public Health; Morgridge Institute for Research, Ethics Scholar-in-Residence
George Poste, Chief Scientist, Complex Adaptive Systems Initiative (CASI) (http://www.casi.asu.edu/), Regents’ Professor and Del E. Webb Chair in Health Innovation, Arizona State University
Emily Shuckburgh, climate scientist and deputy head of the Polar Oceans Team at the British Antarctic Survey, University of Cambridge
Responsible Development of AI
Spring Berman, Ira A. Fulton Schools of Engineering, Arizona State University
John Havens, The IEEE [Institute of Electrical and Electronics Engineers] Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems
Subbarao Kambhampati, Senior Sustainability Scientist, Julie Ann Wrigley Global Institute of Sustainability; Professor, School of Computing, Informatics and Decision Systems Engineering, Ira A. Fulton Schools of Engineering, Arizona State University
Wendell Wallach, Consultant, Ethicist, and Scholar at Yale University’s Interdisciplinary Center for Bioethics
*Current Student / ASU Law Alumni Registration: $50.00
^Cybersecurity sessions only (May 19): $100 CLE / $50 General / Free for students (registration info coming soon)
There you have it.
Neuro-techno future laws
I’m pretty sure this isn’t the first exploration of potential legal issues arising from research into neuroscience, although it’s the first one I’ve stumbled across. From an April 25, 2017 news item on phys.org,
New human rights laws to prepare for advances in neurotechnology that put the ‘freedom of the mind’ at risk have been proposed today in the open access journal Life Sciences, Society and Policy.
The authors of the study suggest four new human rights laws could emerge in the near future to protect against exploitation and loss of privacy. The four laws are: the right to cognitive liberty, the right to mental privacy, the right to mental integrity and the right to psychological continuity.
Marcello Ienca, lead author and PhD student at the Institute for Biomedical Ethics at the University of Basel, said: “The mind is considered to be the last refuge of personal freedom and self-determination, but advances in neural engineering, brain imaging and neurotechnology put the freedom of the mind at risk. Our proposed laws would give people the right to refuse coercive and invasive neurotechnology, protect the privacy of data collected by neurotechnology, and protect the physical and psychological aspects of the mind from damage by the misuse of neurotechnology.”
Advances in neurotechnology, such as sophisticated brain imaging and the development of brain-computer interfaces, have led to these technologies moving away from a clinical setting and into the consumer domain. While these advances may be beneficial for individuals and society, there is a risk that the technology could be misused and create unprecedented threats to personal freedom.
Professor Roberto Andorno, co-author of the research, explained: “Brain imaging technology has already reached a point where there is discussion over its legitimacy in criminal court, for example as a tool for assessing criminal responsibility or even the risk of reoffending. Consumer companies are using brain imaging for ‘neuromarketing’, to understand consumer behaviour and elicit desired responses from customers. There are also tools such as ‘brain decoders’ which can turn brain imaging data into images, text or sound. All of these could pose a threat to personal freedom which we sought to address with the development of four new human rights laws.”
The authors explain that as neurotechnology improves and becomes commonplace, there is a risk that the technology could be hacked, allowing a third-party to ‘eavesdrop’ on someone’s mind. In the future, a brain-computer interface used to control consumer technology could put the user at risk of physical and psychological damage caused by a third-party attack on the technology. There are also ethical and legal concerns over the protection of data generated by these devices that need to be considered.
International human rights laws make no specific mention of neuroscience, although advances in biomedicine have become intertwined with laws, such as those concerning human genetic data. Similar to the historical trajectory of the genetic revolution, the authors state that the ongoing neurorevolution will force a reconceptualization of human rights laws and even the creation of new ones.
Marcello Ienca added: “Science-fiction can teach us a lot about the potential threat of technology. Neurotechnology featured in famous stories has in some cases already become a reality, while others are inching ever closer, or exist as military and commercial prototypes. We need to be prepared to deal with the impact these technologies will have on our personal freedom.”
Finally, there’s an answer to the question: What (!!!) is the fourth industrial revolution? (I took a guess [wrongish] in my Nov. 20, 2015 post about a special presentation at the 2016 World Economic Forum’s IdeasLab.)
Andrew Maynard in a Dec. 3, 2015 think piece (also called a ‘thesis’) for Nature Nanotechnology answers the question,
… an approach that focuses on combining technologies such as additive manufacturing, automation, digital services and the Internet of Things, and … is part of a growing movement towards exploiting the convergence between emerging technologies. This technological convergence is increasingly being referred to as the ‘fourth industrial revolution’, and like its predecessors, it promises to transform the ways we live and the environments we live in. (While there is no universal agreement on what constitutes an ‘industrial revolution’, proponents of the fourth industrial revolution suggest that the first involved harnessing steam power to mechanize production; the second, the use of electricity in mass production; and the third, the use of electronics and information technology to automate production.)
In anticipation of the 2016 World Economic Forum (WEF), which has the fourth industrial revolution as its theme, Andrew explains how he sees the situation we are sliding into (from his think piece),
As more people get closer to gaining access to increasingly powerful converging technologies, a complex risk landscape is emerging that lies dangerously far beyond the ken of current regulations and governance frameworks. As a result, we are in danger of creating a global ‘wild west’ of technology innovation, where our good intentions may be among the first casualties.
…
There are many other examples where converging technologies are increasing the gap between what we can do and our understanding of how to do it responsibly. The convergence between robotics, nanotechnology and cognitive augmentation, for instance, and that between artificial intelligence, gene editing and maker communities both push us into uncertain territory. Yet despite the vulnerabilities inherent with fast-evolving technological capabilities that are tightly coupled, complex and poorly regulated, we lack even the beginnings of national or international conceptual frameworks to think about responsible decision-making and responsive governance.
He also lists some recommendations,
Fostering effective multi-stakeholder dialogues.
…
Encouraging actionable empathy.
…
Providing educational opportunities for current and future stakeholders.
… The good news is that, in fields such as nanotechnology and synthetic biology, we have already begun to develop the skills to do this — albeit in a small way. We now need to learn how to scale up our efforts, so that our convergence in working together to build a better future mirrors the convergence of the technologies that will help achieve this.
It’s always a pleasure to read Andrew’s work as it’s thoughtful. I was surprised (since Andrew is a physicist by training) and happy to see the recommendation for “actionable empathy.”
Although I don’t always agree with him, on this occasion I don’t have any particular disagreements. I do think, though, that it would be a good idea to include a recommendation or two covering the certainty that we will get something wrong and have to work quickly to right things. I’m thinking primarily of governments, which are notoriously slow to respond with legislation for new developments and equally slow to change that legislation when the situation changes.
The technological environment Andrew is describing is dynamic, that is, fast-moving and changing at a pace we have yet to properly conceptualize. Governments will need to change so they can respond in an agile fashion. My suggestion is:
Develop policy task forces that can be convened in hours and given the authority to respond to an immediate situation, with oversight after the fact.
Getting back to Andrew Maynard, you can find his think piece in its entirety via this link and citation,
It’s refreshing to be invited to a stakeholder or public engagement exercise being held by the Canadian government, which usually conducts these exercises as clandestine operations.
In this case, Canada’s National Research Council (NRC) is inviting participation in something they’ve called a Game-Changing Technologies Initiative. Here’s more from a Jan. 26, 2015 e-mail announcement,
NRC is undertaking a Game-Changing Technologies initiative that aims to identify technology areas that have the potential for revolutionary impacts on Canadian prosperity and the lives of Canadian citizens over the next 20 to 30 years.
As Canada’s Research and Technology Organization (RTO), NRC works with clients and partners to provide innovation support, strategic research, and scientific and technical services to develop and deploy solutions to meet Canada’s current and future industrial and societal needs.
Through a process that started six months ago, NRC has identified challenges and opportunities critical to Canada’s future, in which technology can play a defining role. The next step is to refine our understanding of these challenges and opportunities through a web-based exercise, in order to select opportunities that will be further developed.
This on-line exercise will seek insights from a diverse range of thought-leaders from industry, academia, government, innovation agencies, social groups and non-profit organizations across Canada. [emphasis mine] Outcomes of this initiative will help shape NRC’s investment strategy in emerging technologies, and identify key players with whom strategic partnerships are critical for success.
I also invite you to forward this invitation to members of your organization or your expert network who might want to contribute. External feedback is critical to help NRC identify game-changing technologies with the potential to improve Canada’s future competitiveness, productivity and quality of life.
Should you or a member of your team have any questions about this initiative, please do not hesitate to contact Dr. Carl Caron at: Carl.Caron@nrc-cnrc.gc.ca or 613-990-7381. We look forward to your participation.
The announcement was also circulated in French; here it is in English translation,
NRC is launching a Game-Changing Technologies Initiative whose purpose is to identify technologies with the potential to revolutionize many sectors of the Canadian economy and the lives of Canadians over the next 20 to 30 years.
As Canada’s research and technology organization (RTO), NRC collaborates with its clients and partners to support innovation and strategic research, and offers scientific and technical services to foster the development and implementation of solutions capable of meeting Canada’s current and future industrial and social needs.
Through a process begun six months ago, NRC has identified several opportunities and challenges crucial to Canada’s future in which technology could play a defining role. The next step is to improve our understanding of these challenges and opportunities through an online exercise that will allow us to select the opportunities to be developed further.
This online exercise aims to gather the views of a wide range of leaders from industry, academia, government, innovation agencies, social groups and non-profit organizations across Canada. The results of the initiative will inform NRC’s investment strategy in emerging technologies and identify the key players with whom strategic partnerships are essential for success.
I would also be grateful if you would forward this invitation to members of your organization or your expert network who might wish to contribute. Feedback from parties outside NRC is crucial to helping our organization determine which game-changing technologies have the potential to increase Canada’s competitiveness, productivity and quality of life.
If you or a member of your team have any questions about this initiative, please do not hesitate to contact Mr. Carl Caron by email at Carl.Caron@nrc-cnrc.gc.ca or by telephone at 613-990-7381.
Hoping that you will agree to take part in this exercise, please accept, Madam, my best regards.
I was unable to find out more about this initiative from sources outside the NRC despite several searches. However, the initiative’s website does provide some information, although you have to scroll to the bottom of the page to find options you can click on. A backgrounder is provided (click on About), which offers some additional detail,
Over the last six months, NRC has worked closely with internal and external stakeholders to identify key opportunities and challenges facing Canada over the next two decades, and game-changing technologies that offer potential solutions. Through this process, NRC has already identified and is now working with stakeholders to build programs that address technology opportunities for advanced manufacturing (the Factory of the Future) and the development of Canada’s Arctic resources and communities (NRC Arctic Program). Through this process, NRC has also identified seven additional game-changing opportunities:
Next generation health care systems
Maintaining quality of life for an aging population
A safe, sustainable and profitable food industry
Protecting Canadian security and privacy
Transforming the classroom for continuous and adaptive learning
The cities of the future
Prosperous and sustainable rural and remote communities
This list is not comprehensive but is intended to stimulate discussion with our employees and a diverse range of thought-leaders by means of a web-based platform.
There are instructions for eager beavers who want to prepare ahead of time. I was particularly struck by this passage,
We ask participants to:
Participate by volunteering your ideas, sharing your knowledge and engaging in discussions with other participants.
Speak freely and respect each other’s input.
Keep your responses and discussions polite and respectful. Any responses containing language or opinions that the moderators deem to be offensive will be deleted and the participant will be informed of the deletion.
Keep jargon to a minimum
I see two instances of the word ‘respect’ (in one form or another) and one instance of the word ‘polite’, which makes the organizers seem a little nervous. Fair enough. This appears to be the NRC’s first foray into a general, online, participatory exercise, and tales of bad behaviour online are legion, so a little apprehension is understandable.
They’ve also included a methodology for which I offer profound thanks as it helps to place this initiative into perspective,
Step 1: Meta-scan – Secondary Research
To identify global technological opportunities and challenges across all sectors of the economy and facets of society, NRC conducted a comprehensive Meta-scan (or scan of scans), surveying over 150 recent publications that predict future economic, social, and political developments from around the world. This process enabled NRC to acquire insights on a large number of megatrends or major shifts in the long-term outlook.
Step 2: Canadian Game-Changing Technologies Foresight Workshop
In May 2014, leveraging this initial research, NRC hosted a facilitated workshop with select internal and external stakeholders to further explore global challenges and trends based on their potential for impact on Canada, within a 20 – 30 year timeframe. This activity identified a number of themes, including: health care; environmental change; education; energy demand and security; and water usage, quality and security.
Step 3: Interviews with Thought Leaders
NRC also conducted interviews with selected thought leaders and experts across Canada to seek perspectives on global trends and issues and identify areas critical to Canada’s future in which game-changers can play a defining role. Interviews were conducted with representatives from diverse horizons and regions, including industry, other Federal and provincial government departments, academia and social groups. This led to the identification of dozens of potential opportunities in many areas, as well as pervasive technology platforms.
Step 4: Analysis and Distillation of Opportunities and Challenges
NRC conducted an analysis that organized and synthesized all data gathered to date; and identified seven opportunities for further assessment. The first stage was a facilitated process that drew together internal and external participants to analyse the Meta-scan data. During this process the opportunities were classified into 70 potential opportunities which were then bundled and refined for a second filtering process which looked at commonalities among opportunities. Those opportunities were bundled into a series of themes, resulting in 13 cross-cutting opportunities and challenges. Further consolidation and the removal of opportunities/challenges that NRC already identified and is working with stakeholders to build programs, resulted in the current list of seven.
Step 5: Development of Opportunity Descriptions
The seven opportunities and challenges were crafted into high-level, one-page descriptions that include a short “Scenario Vignette” focussing on key issues that are meant to be reflective of particular social, economic or industrial aspects of a possible Canadian future state, set approximately 20 to 30 years in the future.
Step 6: Stakeholder Engagement
Drawing on the seven opportunities emerging from the analysis, NRC is now inviting stakeholders to participate in a facilitated online engagement to share ideas and refine its understanding of these opportunities.
Step 7: Selection of Opportunities
Following the stakeholder engagement, NRC will analyse the participant feedback and select a number of opportunities to be further developed in collaboration with potential partners.
It seems that despite the contents of my invitation, I’m not a ‘thought leader’ but an ‘afterthought leader’, arriving on the scene at stage six of a seven-stage process. (One comment to the organizers: your willingness to include a broad swath of individuals/stakeholders is much appreciated, although you might want to take a little more care with your messaging, especially with regard to the term ‘thought leader’.) But I’m not bitter (points to anyone who recognized the nod to an old, oft-repeated Royal Canadian Air Farce bit about former Member of Parliament John Nunziata); I’m just very happy they’re trying a more open approach. This approach is more in keeping with what I’ve seen practised in other jurisdictions.
In any event, I expect to participate in this initiative, which runs from Monday, Feb. 9, 2015 to Friday, Feb. 20, 2015. Make special note that full access to the material on the platform is available on Feb. 9, 2015 only. This initiative is open to both Canadians and citizens of other countries.
I looked up two names associated with this Game-Changing Technologies Initiative. Carl Caron, the contact mentioned in the notice, can be found here on LinkedIn. Briefly, his job title is listed as Director General, Strategy and Development Branch, National Research Council, a job he has held since Aug. 2011. He is a political scientist by training and I believe he has a PhD. The second person, Danial D.M. Wayner, is Vice-President, Emerging Technologies. His biography page can be found here on the Canada National Research Council website. Dr. Wayner has held his current position since Jan. 2010. He is a chemist by training.
According to an Oct. 30, 2013 news release from the Taylor & Francis Group, there’s a new journal being launched, good news for anyone looking to get research or creative work (that retains scholarly integrity) published in a journal focused on emerging technologies and innovation,
Journal of Responsible Innovation will focus on intersections of ethics, societal outcomes, and new technologies: New to Routledge for 2014 [Note: Routledge is a Taylor & Francis Group brand]
Scholars and practitioners in the emerging interdisciplinary field known as “responsible innovation” now have a new place to publish their work. The Journal of Responsible Innovation (JRI) will offer an opportunity to articulate, strengthen, and critique perspectives about the role of responsibility in the research and development process. JRI will also provide a forum for discussions of ethical, social and governance issues that arise in a society that places a great emphasis on innovation.
Professor David Guston, director of the Center for Nanotechnology in Society at Arizona State University and co-director of the Consortium for Science, Policy and Outcomes, is the journal’s founding editor-in-chief. [emphasis mine] The Journal will publish three issues each year, beginning in early 2014.
“Responsible innovation isn’t necessarily a new concept, but a research community is forming and we’re starting to get real traction in the policy world,” says Guston. “It is our hope that the journal will help solidify what responsible innovation can mean in both academic and industrial laboratories as well as in governments.”
“Taylor & Francis have been working with the scholarly community for over two centuries and over the past 20 years, we have launched more new journals than any other publisher, all offering peer-reviewed, cutting-edge research,” adds Editorial Director Richard Steele. “We are proud to be working with David Guston and colleagues to create a lively forum in which to publish and debate research on responsible technological innovation.”
An emerging and interdisciplinary field
The term “responsible innovation” is often associated with emerging technologies—for example, nanotechnology, synthetic biology, geoengineering, and artificial intelligence—due to their uncertain but potentially revolutionary influence on society. [emphasis mine] Responsible innovation represents an attempt to think through the ethical and social complexities of these technologies before they become mainstream. And due to the broad impacts these technologies may have, responsible innovation often involves people working in a variety of roles in the innovation process.
Bearing this interdisciplinarity in mind, the Journal of Responsible Innovation (JRI) will publish not only traditional journal articles and research reports, but also reviews and perspectives on current political, technical, and cultural events. JRI will publish authors from the social sciences and the natural sciences, from ethics and engineering, and from law, design, business, and other fields. It especially hopes to see collaborations across these fields, as well.
“We want JRI to help organize a research network focused around complex societal questions,” Guston says. “Work in this area has tended to be scattered across many journals and disciplines. We’d like to bring those perspectives together and start sharing our research more effectively.”
Now accepting manuscripts
JRI is now soliciting submissions from scholars and practitioners interested in research questions and public issues related to responsible innovation. [emphasis mine] The journal seeks traditional research articles; perspectives or reviews containing opinion or critique of timely issues; and pedagogical approaches to teaching and learning responsible innovation. More information about the journal and the submission process can be found at www.tandfonline.com/tjri.
…
About The Center for Nanotechnology in Society at ASU
The Center for Nanotechnology in Society at ASU (CNS-ASU) is the world’s largest center on the societal aspects of nanotechnology. CNS-ASU develops programs that integrate academic and societal concerns in order to better understand how to govern new technologies, from their birth in the laboratory to their entrance into the mainstream.
…
About Taylor & Francis Group
Taylor & Francis Group partners with researchers, scholarly societies, universities and libraries worldwide to bring knowledge to life. As one of the world’s leading publishers of scholarly journals, books, ebooks and reference works our content spans all areas of Humanities, Social Sciences, Behavioural Sciences, Science, and Technology and Medicine.
From our network of offices in Oxford, New York, Philadelphia, Boca Raton, Boston, Melbourne, Singapore, Beijing, Tokyo, Stockholm, New Delhi and Johannesburg, Taylor & Francis staff provide local expertise and support to our editors, societies and authors and tailored, efficient customer service to our library colleagues.
You can find out more about the Journal of Responsible Innovation here, including information for would-be contributors,
JRI invites three kinds of written contributions: research articles of 6,000 to 10,000 words in length, inclusive of notes and references, that communicate original theoretical or empirical investigations; perspectives of approximately 2,000 words in length that communicate opinions, summaries, or reviews of timely issues, publications, cultural or social events, or other activities; and pedagogy, communicating in appropriate length experience in or studies of teaching, training, and learning related to responsible innovation in formal (e.g., classroom) and informal (e.g., museum) environments.
JRI is open to alternative styles or genres of writing beyond the traditional research paper or report, including creative or narrative nonfiction, dialogue, and first-person accounts, provided that scholarly completeness and integrity are retained. [emphases mine] As the journal’s online environment evolves, JRI intends to invite other kinds of contributions that could include photo-essays, videos, etc. [emphasis mine]
I like to check out the editorial board for these things (from the JRI’s Editorial board webpage; Note: Links have been removed),
Editor-in-Chief
David H. Guston, Arizona State University, USA
Associate Editors
Erik Fisher, Arizona State University, USA
Armin Grunwald, ITAS, Karlsruhe Institute of Technology, Germany
Richard Owen, University of Exeter, UK
Tsjalling Swierstra, Maastricht University, the Netherlands
Simone van der Burg, University of Twente, the Netherlands
Editorial Board
Wiebe Bijker, University of Maastricht, the Netherlands
Francesca Cavallaro, Fundacion Tecnalia Research & Innovation, Spain
Heather Douglas, University of Waterloo, Canada
Weiwen Duan, Chinese Academy of Social Sciences, China
Ulrike Felt, University of Vienna, Austria
Philippe Goujon, University of Namur, Belgium
Jonathan Hankins, Bassetti Foundation, Italy
Aharon Hauptman, University of Tel Aviv, Israel
Rachelle Hollander, National Academy of Engineering, USA
Maja Horst, University of Copenhagen, Denmark
Noela Invernizzi, Federal University of Parana, Brazil
Julian Kinderlerer, University of Cape Town, South Africa
Ralf Lindner, Fraunhofer Institute, Germany
Philip Macnaghten, Durham University, UK
Andrew Maynard, University of Michigan, USA
Carl Mitcham, Colorado School of Mines, USA
Sachin Chaturvedi, Research and Information System for Developing Countries, India
René von Schomberg, European Commission, Belgium
Doris Schroeder, University of Central Lancashire, UK
Kevin Urama, African Technology Policy Studies Network, Kenya
Frank Vanclay, University of Groningen, the Netherlands
Jeroen van den Hoven, Technical University, Delft, the Netherlands
Fern Wickson, GenØk Centre for Biosafety, Norway
Go Yoshizawa, Osaka University, Japan
Good luck to the publishers and to those of you who will be making submissions. As for anyone who may be as curious as I was about the connection between Routledge and Taylor & Francis, go here and scroll down about 75% of the page (briefly, Routledge is a brand).
Tim Harper, Hailing Yu, and Martin Jordonov of Cientifica (a global consulting company on nano and other emerging technologies) have released a new report, Using Emerging Technologies to Address Global Risks. At a compact 28 pp., the report provides good context for understanding some of the difficult issues facing us, overpopulation and environmental degradation among them. It’s also a well-reasoned and thoughtful position paper on further developing emerging technologies with the aim of solving environmental problems. It is oriented to the business end of nanotechnology, as becomes clear at about page 18.
I did raise my eyebrows when, in an argument against ‘too many’ regulations for emerging technologies, the authors claimed that economic chaos has occurred despite the banking industry being “one of the most regulated and supervised sectors in the world of commerce” (1st para., p. 23).
This difference of opinion may lie in geography. From my perspective here in Canada, one of the major problems besetting the US economy, which affects Canadians greatly, was the financial chaos eventually caused by the lifting of many banking regulations in the early 2000s. Personally, I think there was an imbalance: no regulation and a lack of oversight in some areas, and far too much regulation and red tape in others. (I came across the US Sarbanes-Oxley requirements in a couple of articles I wrote on content management. I don’t remember much other than that the requirements for tagging, managing, and tracking data were crushing, and they were specific to financial services.)
However, I do agree with the authors that government agencies and policymakers tend to view regulations as a solution to many of life’s problems, especially when something goes wrong and the attitude seems to be: the more regulation, the better. Getting back to my original comment about regulatory balance, I wouldn’t assume, despite the authors’ claims, that because a few companies are good citizens (the authors list an example) the majority will follow suit. Consequently, I think some regulations and oversight need to be in place.
As nanotechnology and life sciences are poised to be as influential as oil and chemicals were to the early 20th century, and the global population becomes interconnected in a way undreamt of by even the best science fiction writers, our relationship with technology will change at a rapid pace. The difficulty that both policy makers and the general public have with technology [stems] from a lack of knowledge and a lack of control. (p. 24)
I quite agree with the authors here, but in light of their earlier assertions regarding regulations, I don’t understand what they mean by control; they never really define it.
What I particularly appreciate in this report is the way the authors weave together some of the great issues facing us environmentally and economically while suggesting that it’s possible to remedy these situations.
(I wish I could quote one or two more passages from the report; unfortunately, the copy feature is locked, which means more typing or keyboarding.)
ETA Oct. 5, 2011: I want to commend the authors for their inclusion of the internet and social media and their impact on emerging technologies, business, and global risks in their discussion.
I find there’s a general tendency either to view social media and the internet purely as a business opportunity, a means of fomenting social revolution, a danger to our brains, etc., or to ignore them entirely while discussions rage about environmental degradation, the risks of emerging technologies, and so on. I’m glad to see the authors have put the internet and social media (which are emerging technologies themselves) into the context of the discussion about other emerging technologies (nanotechnology, robots, synthetic biology, etc.) and global risks.
Later this week (Feb. 3 & 4, 2011), an imaginative discussion about society, emerging technologies, and the role of government, Here Be Dragons: Governing a Technologically Uncertain Future, will take place at Google’s Washington, DC, headquarters. The event (one of a series dubbed ‘Future Tense’) is the result of a partnership between Arizona State University, the New America Foundation, and Slate magazine. Not surprisingly, Slate has an article about the event, but it’s written by Robert J. Sawyer, a Canadian science fiction novelist, and it’s not about the event per se. From the Slate article, The Purpose of Science Fiction: How it teaches governments—and citizens—how to understand the future of technology,
… science-fiction writers explore these issues in ways that working scientists simply can’t. Some years ago, for a documentary for Discovery Channel Canada, I interviewed neurobiologist Joe Tsien, who had created superintelligent mice in his lab at Princeton—something he freely spoke about when the cameras were off. But as soon as we started rolling, and I asked him about the creation of smarter mice, he made a “cut” gesture. “We can talk about the mice having better memories but not about them being smarter. The public will be all over me if they think we’re making animals more intelligent.”
But science-fiction writers do get to talk about the real meaning of research. We’re not beholden to skittish funding bodies and so are free to speculate about the full range of impacts that new technologies might have—not just the upsides but the downsides, too. And we always look at the human impact rather than couching research in vague, nonthreatening terms.
That bit about ‘smarter mice’ is related to the issue I was discussing in regard to PBS’s Nova series Making Stuff and its approach to transgenic goats (my Jan. 21, 2011 posting). Many people are distressed by this notion of crossing boundaries and ‘playing God’, to the point where discussion is rendered difficult if not impossible. The ‘smarter mice’ issue points to a related problem: people find some boundaries more acceptable to cross than others.
Sawyer’s point about science fiction being a means of holding the discussion is well taken. He will be presenting at this week’s ‘Dragons’ event. Here’s more about it,
Maps in the old days often included depictions of sea dragons or lions to connote unknown or dangerous terrain. Unfortunately, when it comes to a future that will be altered in unimaginable ways by emerging technologies, society and government cannot simply lay down a “Here Be Dragons” marker with a fanciful illustration to signal that most of us have no clue.
How does a democratic society both nurture and regulate — and find the right balance between those two imperatives — fast-evolving technologies poised to radically alter life?
Synthetic biology, with its potential to engineer and manipulate living organisms, and the Internet, which continues to alter how we live and relate to each other, offer two compelling cases in point.
Future Tense is convening at Google DC a number of leading scientists, Internet thinkers, governance experts and science fiction writers to grapple with the challenge of governing an uncharted future.
Related but tangential: The Canadian Army has shown an interest in science fiction as they have commissioned at least two novels by Karl Schroeder as I noted in my Feb. 16, 2009 posting.
One last thought: I am curious about the fact that the ‘Dragons’ event is being held at a Google headquarters, yet Google is not a sponsor, a host, or a partner.