Tag Archives: games

SIGGRAPH (Special Interest Group on Computer GRAPHics and Interactive Techniques) and their art gallery from Aug. 12 – 16, 2018 (making the world synthetic) in Vancouver (Canada)

While my main interest is the group’s temporary art gallery, I am providing a brief explanatory introduction and a couple of previews for SIGGRAPH 2018.

Introduction

For anyone unfamiliar with the Special Interest Group on Computer GRAPHics and Interactive Techniques (SIGGRAPH) and its conferences, from the SIGGRAPH Wikipedia entry (Note: Links have been removed),

Some highlights of the conference are its Animation Theater and Electronic Theater presentations, where recently created CG films are played. There is a large exhibition floor, where several hundred companies set up elaborate booths and compete for attention and recruits. Most of the companies are in the engineering, graphics, motion picture, or video game industries. There are also many booths for schools which specialize in computer graphics or interactivity.

Dozens of research papers are presented each year, and SIGGRAPH is widely considered the most prestigious forum for the publication of computer graphics research.[1] The recent paper acceptance rate for SIGGRAPH has been less than 26%.[2] The submitted papers are peer-reviewed in a single-blind process.[3] There has been some criticism about the preference of SIGGRAPH paper reviewers for novel results rather than useful incremental progress.[4][5] …

This is the third SIGGRAPH Vancouver has hosted; the others were in 2011 and 2014.  The theme for the 2018 iteration is ‘Generations’; here’s more about it from an Aug. 2, 2018 article by Terry Flores for Variety,

While its focus is firmly forward thinking, SIGGRAPH 2018, the computer graphics, animation, virtual reality, games, digital art, mixed reality, and emerging technologies conference, is also tipping its hat to the past thanks to its theme this year: Generations. The conference runs Aug. 12-16 in Vancouver, B.C.

“In the literal people sense, pioneers in the computer graphics industry are standing shoulder to shoulder with researchers, practitioners and the future of the industry — young people — mentoring them, dabbling across multiple disciplines to innovate, relate, and grow,” says SIGGRAPH 2018 conference chair Roy C. Anthony, VP of creative development and operations at software and technology firm Ventuz. “This is really what SIGGRAPH has always been about. Generations really seemed like a very appropriate way of looking back and remembering where we all came from and how far we’ve come.”

SIGGRAPH 2018 has a number of treats in store for attendees, including the debut of Disney’s first VR film, the short “Cycles”; production sessions on the making of “Blade Runner 2049,” “Game of Thrones,” “Incredibles 2” and “Avengers: Infinity War”; as well as sneak peeks of Disney’s upcoming “Ralph Breaks the Internet: Wreck-It Ralph 2” and Laika’s “Missing Link.”

That list of ‘treats’ in the last paragraph makes the conference seem more like an iteration of a ‘comic-con’ than a technology conference.

Previews

I have four items about work that will be presented at SIGGRAPH 2018. First up, something about ‘redirected walking’ from a June 18, 2018 Association for Computing Machinery news release on EurekAlert,

CHICAGO–In the burgeoning world of virtual reality (VR) technology, it remains a challenge to provide users with a realistic perception of infinite space and natural walking capabilities in the virtual environment. A team of computer scientists has introduced a new approach to address this problem by leveraging a natural human phenomenon: eye blinks.

All humans are functionally blind for about 10 percent of the time under normal circumstances due to eye blinks and saccades, a rapid movement of the eye between two points or objects. Eye blinks are a common and natural cause of so-called “change blindness,” which refers to the inability of humans to notice changes to visual scenes. Zeroing in on eye blinks and change blindness, the team has devised a novel computational system that effectively redirects the user in the virtual environment during these natural instances, all with undetectable camera movements to deliver orientation redirection.

“Previous RDW [redirected walking] techniques apply rotations continuously while the user is walking. But the amount of unnoticeable rotations is limited,” notes Eike Langbehn, lead author of the research and doctoral candidate at the University of Hamburg. “That’s why an orthogonal approach is needed–we add some additional rotations when the user is not focused on the visuals. When we learned that humans are functionally blind for some time due to blinks, we thought, ‘Why don’t we do the redirection during eye blinks?'”

Human eye blinks occur approximately 10 to 20 times per minute, about every 4 to 19 seconds. Leveraging this window of opportunity–where humans are unable to detect major motion changes while in a virtual environment–the researchers devised an approach to synchronize a computer graphics rendering system with this visual process, and introduce any useful motion changes in virtual scenes to enhance users’ overall VR experience.

The researchers’ experiments revealed that imperceptible camera rotations of 2 to 5 degrees and translations of 4 to 9 cm of the user’s viewpoint are possible during a blink without users even noticing. They tracked test participants’ eye blinks with an eye tracker in a VR head-mounted display. In a confirmatory study, the team validated that participants could not reliably detect in which of two eye blinks their viewpoint was manipulated while walking a VR curved path. The tests relied on unconscious natural eye blinking, but the researchers say redirection during blinking could also be carried out consciously. Since users can consciously blink multiple times a day without much effort, eye blinks offer great potential as an intentional trigger in their approach.
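For readers who want a more concrete sense of how blink-triggered redirection might work, here is a minimal sketch (mine, not the researchers’ code; the class and function names are invented for illustration). It clamps each adjustment to the imperceptibility thresholds reported above, roughly 5 degrees of rotation and 9 cm of translation per blink,

```python
from dataclasses import dataclass

# Thresholds from the reported study: camera rotations of 2-5 degrees and
# translations of 4-9 cm applied during a blink went unnoticed by participants.
MAX_ROT_DEG = 5.0    # imperceptible yaw change per blink
MAX_TRANS_M = 0.09   # imperceptible viewpoint shift per blink

@dataclass
class VirtualCamera:
    yaw_deg: float = 0.0  # virtual heading, in degrees
    x_m: float = 0.0      # lateral viewpoint offset, in metres

def redirect_on_blink(camera: VirtualCamera, blink_detected: bool,
                      target_yaw_deg: float, target_x_m: float) -> None:
    """Apply one clamped redirection step while the user's eyes are closed."""
    if not blink_detected:
        return  # conventional continuous RDW gains would apply here instead
    yaw_error = target_yaw_deg - camera.yaw_deg
    camera.yaw_deg += max(-MAX_ROT_DEG, min(MAX_ROT_DEG, yaw_error))
    x_error = target_x_m - camera.x_m
    camera.x_m += max(-MAX_TRANS_M, min(MAX_TRANS_M, x_error))
```

Called once per frame with the eye tracker’s blink state, redirection like this accumulates a few unnoticed degrees per blink, which is the extra headroom the researchers add on top of conventional continuous RDW.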

The team will present their work at SIGGRAPH 2018, held 12-16 August in Vancouver, British Columbia. The annual conference and exhibition showcases the world’s leading professionals, academics, and creative minds at the forefront of computer graphics and interactive techniques.

“RDW is a big challenge since current techniques still need too much space to enable unlimited walking in VR,” notes Langbehn. “Our work might contribute to a reduction of space since we found out that unnoticeable rotations of up to five degrees are possible during blinks. This means we can improve the performance of RDW by approximately 50 percent.”

The team’s results could be used in combination with other VR research, such as novel steering algorithms, improved path prediction, and rotations during saccades, to name a few. Down the road, such techniques could some day enable consumer VR users to virtually walk beyond their living room.

Langbehn collaborated on the work with Frank Steinicke of University of Hamburg, Markus Lappe of University of Muenster, Gregory F. Welch of University of Central Florida, and Gerd Bruder, also of University of Central Florida. For the full paper and video, visit the team’s project page.

###

About ACM, ACM SIGGRAPH, and SIGGRAPH 2018

ACM, the Association for Computing Machinery, is the world’s largest educational and scientific computing society, uniting educators, researchers, and professionals to inspire dialogue, share resources, and address the field’s challenges. ACM SIGGRAPH is a special interest group within ACM that serves as an interdisciplinary community for members in research, technology, and applications in computer graphics and interactive techniques. SIGGRAPH is the world’s leading annual interdisciplinary educational experience showcasing the latest in computer graphics and interactive techniques. SIGGRAPH 2018, marking the 45th annual conference hosted by ACM SIGGRAPH, will take place from 12-16 August at the Vancouver Convention Centre in Vancouver, B.C.

They have provided an image illustrating what they mean (I don’t find it especially informative),

Caption: The viewing behavior of a virtual reality user, including fixations (in green) and saccades (in red). A blink fully suppresses visual perception. Credit: Eike Langbehn

Next up (2), there’s Disney Corporation’s first virtual reality (VR) short, from a July 19, 2018  Association for Computing Machinery news release on EurekAlert,

Walt Disney Animation Studios will debut its first ever virtual reality short film at SIGGRAPH 2018, and the hope is viewers will walk away feeling as connected to the characters as they are to the VR technology involved in making the film.

Cycles, an experimental film directed by Jeff Gipson, centers around the true meaning of creating a home and the life it holds inside its walls. The idea for the film is personal, inspired by Gipson’s childhood spending time with his grandparents and creating memories in their home, and later, having to move them to an assisted living residence.

“Every house has a story unique to the people, the characters who live there,” says Gipson. “We wanted to create a story in this single place and be able to have the viewer witness life happening around them. It is an emotionally driven film, expressing the real ups and downs, the happy and sad moments in life.”

For Cycles, Gipson also drew from his past life as an architect, having spent several years designing skate parks, and from his passion for action sports, including freestyle BMX. In Los Angeles, where Gipson lives, it is not unusual to find homes with an empty swimming pool reserved for skating or freestyle biking. Part of the pitch for Cycles came out of Gipson’s experience riding in these empty pools and being curious about the homes attached to them, the families who lived there, and the memories they made.

SIGGRAPH attendees will have the opportunity to experience Cycles at the Immersive Pavilion, a new space for this year’s conference. The Pavilion is devoted exclusively to virtual, augmented, and mixed reality and will contain: the VR Theater, a storytelling extravaganza that is part of the Computer Animation Festival; the Vrcade, a space for VR, AR, and MR games or experiences; and the well-known Village, for showcasing large-scale projects. SIGGRAPH 2018, held 12-16 August in Vancouver, British Columbia, is an annual gathering that showcases the world’s leading professionals, academics, and creative minds at the forefront of computer graphics and interactive techniques.

The production team completed Cycles in four months with about 50 collaborators as part of a professional development program at the studio. A key difference in VR filmmaking includes getting creative with how to translate a story to the VR “screen.” Pre-visualizing the narrative, for one, was a challenge. Rather than traditional storyboarding, Gipson and his team instead used a mix of Quill VR painting techniques and motion capture to “storyboard” Cycles, incorporating painters and artists to generate sculptures or 3D models of characters early on and draw scenes for the VR space. The creators also got innovative with the use of light and color saturation in scenes to help guide the user’s eyes during the film.

“What’s cool for VR is that we are really on the edge of trying to figure out what it is and how to tell stories in this new medium,” says Gipson. “In VR, you can look anywhere and really be transported to a different world, experience it from different angles, and see every detail. We want people watching to feel alive and feel emotion, and give them a true cinematic experience.”

This is Gipson’s VR directorial debut. He joined Walt Disney Animation Studios in 2013, serving as a lighting artist on Disney favorites like Frozen, Zootopia, and Moana. Of getting to direct the studio’s first VR short, he says, “VR is an amazing technology and a lot of times the technology is what is really celebrated. We hope more and more people begin to see the emotional weight of VR films, and with Cycles in particular, we hope they will feel the emotions we aimed to convey with our story.”

Apparently this is a still from the ‘short’,

Caption: Disney Animation Studios will present ‘Cycles’, its first virtual reality (VR) short, at ACM SIGGRAPH 2018. Credit: Disney Animation Studios

There’s also something (3) from Google as described in a July 26, 2018 Association for Computing Machinery news release on EurekAlert,

Google has unveiled a new virtual reality (VR) immersive experience based on a novel system that captures and renders high-quality, realistic images from the real world using light fields. Created by a team of leading researchers at Google, Welcome to Light Fields is the tech giant’s splash into the nascent arena of light fields VR experiences, an exciting corner of VR video technology gaining traction for its promise to deliver extremely high-quality imagery and experiences in the virtual world.

Google released Welcome to Light Fields earlier this year as a free app on Steam VR for HTC Vive, Oculus Rift, and Windows Mixed Reality headsets. The creators will demonstrate the VR experience at SIGGRAPH 2018, in the Immersive Pavilion, a new space for this year’s conference. The Pavilion is devoted exclusively to virtual, augmented, and mixed reality and will contain: the Vrcade, a space for VR, AR, and MR games or experiences; the VR Theater, a storytelling extravaganza that is part of the Computer Animation Festival; and the well-known Village, for showcasing large-scale projects. SIGGRAPH 2018, held 12-16 August in Vancouver, British Columbia, is an annual gathering that showcases the world’s leading professionals, academics, and creative minds at the forefront of computer graphics and interactive techniques.

Destinations in Welcome to Light Fields include NASA’s Space Shuttle Discovery, delivering to viewers an astronaut’s view inside the flight deck, which has never been open to the public; the pristine teak and mahogany interiors of the Gamble House, an architectural treasure in Pasadena, CA; and the glorious St. Stephen’s Church in Granada Hills, CA, home to a stunning wall of more than 14,000 pieces of glimmering stained glass.

“I love that light fields in VR can teleport you to exotic places in the real world, and truly make you believe you are there,” says Ryan Overbeck, software engineer at Google who co-led the project. “To me, this is magic.”

To bring this experience to life, Overbeck worked with a team that included Paul Debevec, senior staff engineer at Google, who managed the project and led the hardware piece with engineers Xueming Yu, Jay Busch, and Graham Fyffe. With Overbeck, Daniel Erickson and Daniel Evangelakos focused on the software end. The researchers designed a comprehensive system for capturing and rendering high-quality, spherical light field still images from footage captured in the real world. They developed two easy-to-use light field camera rigs, based on the GoPro Hero4 action sports camera, that efficiently capture thousands of images on the surface of a sphere. Those images were then passed through a cloud-based light-field-processing pipeline.

Among other things, explains Overbeck, “The processing pipeline uses computer vision to place the images in 3D and generate depth maps, and we use a modified version of our vp9 video codec to compress the light field data down to a manageable size.” To render a light field dataset, he notes, the team used a rendering algorithm that blends between the thousands of light field images in real-time.
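The core rendering idea, blending between the captured views nearest to the viewer’s eye, can be sketched in a few lines. What follows is my own illustration, not Google’s renderer; in particular, the real pipeline also reprojects pixels using the depth maps Overbeck mentions, where this sketch only blends,

```python
import numpy as np

def render_novel_view(eye_position, capture_positions, capture_images, k=4):
    """Blend the k captured viewpoints nearest to the requested eye position.

    eye_position:      (3,) array giving the viewer's current eye location.
    capture_positions: (N, 3) array of camera centres on the capture sphere.
    capture_images:    (N, H, W, 3) array of images aligned in a common frame.
    """
    # Distance from the eye to every captured viewpoint on the sphere.
    dists = np.linalg.norm(capture_positions - eye_position, axis=1)
    nearest = np.argsort(dists)[:k]
    # Inverse-distance weights: closer captures dominate the blend.
    weights = 1.0 / (dists[nearest] + 1e-6)
    weights /= weights.sum()
    # Weighted sum of the nearest images approximates the novel view.
    return np.tensordot(weights, capture_images[nearest].astype(np.float64), axes=1)
```

A production light field renderer warps each image toward the novel viewpoint using its depth map before blending, which is what keeps the picture sharp as the viewer moves their head.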

The team relied on Google’s talented pool of engineers in computer vision, graphics, video compression, and machine learning to overcome the unique challenges posed in light fields technology. They also collaborated closely with the WebM team (who make the vp9 video codec) to develop the high-quality light field compression format incorporated into their system, and leaned heavily on the expertise of the Jump VR team to help pose the images and generate depth maps. (Jump is Google’s professional VR system for achieving 3D-360 video production at scale.)

Indeed, with Welcome to Light Fields, the Google team is demonstrating the potential and promise of light field VR technology, showcasing the technology’s ability to provide a truly immersive experience with an unmatched level of realism. Though light fields technology has been researched and explored in computer graphics for more than 30 years, practical systems for actually delivering high-quality light field experiences have not been possible until now.

Part of the team’s motivation behind creating this VR light field experience is to invigorate the nascent field.

“Welcome to Light Fields proves that it is now possible to make a compelling light field VR viewer that runs on consumer-grade hardware, and we hope that this knowledge will encourage others to get involved with building light field technology and media,” says Overbeck. “We understand that in order to eventually make compelling consumer products based on light fields, we need a thriving light field ecosystem. We need open light field codecs, we need artists creating beautiful light field imagery, and we need people using VR in order to engage with light fields.”

I don’t really understand why this image, which looks like something that belongs in advertising material, would be chosen to accompany a news release on a science-based distribution outlet,

Caption: A team of leading researchers at Google will unveil the new immersive virtual reality (VR) experience “Welcome to Light Fields” at ACM SIGGRAPH 2018. Credit: Image courtesy of Google/Overbeck

Finally (4), ‘synthesizing realistic sounds’ is announced in an Aug. 6, 2018 Stanford University (US) news release (also on EurekAlert) by Taylor Kubota,

Advances in computer-generated imagery have brought vivid, realistic animations to life, but the sounds associated with what we see simulated on screen, such as two objects colliding, are often recordings. Now researchers at Stanford University have developed a system that automatically renders accurate sounds for a wide variety of animated phenomena.

“There’s been a Holy Grail in computing of being able to simulate reality for humans. We can animate scenes and render them visually with physics and computer graphics, but, as for sounds, they are usually made up,” said Doug James, professor of computer science at Stanford University. “Currently there exists no way to generate realistic synchronized sounds for complex animated content, such as splashing water or colliding objects, automatically. This fills that void.”

The researchers will present their work on this sound synthesis system as part of ACM SIGGRAPH 2018, the leading conference on computer graphics and interactive techniques. In addition to enlivening movies and virtual reality worlds, this system could also help engineering companies prototype how products would sound before being physically produced, and hopefully encourage designs that are quieter and less irritating, the researchers said.

“I’ve spent years trying to solve partial differential equations – which govern how sound propagates – by hand,” said Jui-Hsien Wang, a graduate student in James’ lab and in the Institute for Computational and Mathematical Engineering (ICME), and lead author of the paper. “This is actually a place where you don’t just solve the equation but you can actually hear it once you’ve done it. That’s really exciting to me and it’s fun.”

Predicting sound

Informed by geometry and physical motion, the system figures out the vibrations of each object and how, like a loudspeaker, those vibrations excite sound waves. It computes the pressure waves cast off by rapidly moving and vibrating surfaces but does not replicate room acoustics. So, although it does not recreate the echoes in a grand cathedral, it can resolve detailed sounds from scenarios like a crashing cymbal, an upside-down bowl spinning to a stop, a glass filling up with water or a virtual character talking into a megaphone.
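For the technically inclined, the “pressure waves” being computed here are governed by the classical linear acoustic wave equation; this is standard background, not the paper’s exact formulation, which adds boundary treatment and numerical machinery:

$$ \frac{\partial^2 p(\mathbf{x},t)}{\partial t^2} = c^2\,\nabla^2 p(\mathbf{x},t) $$

where $p$ is the acoustic pressure and $c$ is the speed of sound in air. A vibrating surface enters as a boundary condition: along the surface normal, $\partial p/\partial n = -\rho\, a_n$, with $\rho$ the air density and $a_n$ the surface’s normal acceleration. Solving this over an entire animated scene at once is, roughly, the “one acoustic wave simulation” Qu describes below.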

Most sounds associated with animations rely on pre-recorded clips, which require vast manual effort to synchronize with the action on-screen. These clips are also restricted to noises that exist – they can’t predict anything new. Other systems that produce and predict sounds as accurate as those of James and his team work only in special cases, or assume the geometry doesn’t deform very much. They also require a long pre-computation phase for each separate object.

“Ours is essentially just a render button with minimal pre-processing that treats all objects together in one acoustic wave simulation,” said Ante Qu, a graduate student in James’ lab and co-author of the paper.

The simulated sound that results from this method is highly detailed. It takes into account the sound waves produced by each object in an animation but also predicts how those waves bend, bounce or deaden based on their interactions with other objects and sound waves in the scene.

Challenges ahead

In its current form, the group’s process takes a while to create the finished product. But, now that they have proven this technique’s potential, they can focus on performance optimizations, such as implementing their method on parallel GPU hardware, that should make it drastically faster.

And, even in its current state, the results are worth the wait.

“The first water sounds we generated with the system were among the best ones we had simulated – and water is a huge challenge in computer-generated sound,” said James. “We thought we might get a little improvement, but it is dramatically better than previous approaches even right out of the box. It was really striking.”

Although the group’s work has faithfully rendered sounds of various objects spinning, falling and banging into each other, more complex objects and interactions – like the reverberating tones of a Stradivarius violin – remain difficult to model realistically. That, the group said, will have to wait for a future solution.

Timothy Langlois of Adobe Research is a co-author of this paper. This research was funded by the National Science Foundation and Adobe Research. James is also a professor, by courtesy, of music and a member of Stanford Bio-X.

Researchers Timothy Langlois, Doug L. James, Ante Qu and Jui-Hsien Wang have created a video featuring highlights of animations with sounds synthesized using the Stanford researchers’ new system.

The researchers have also provided this image,

By computing pressure waves cast off by rapidly moving and vibrating surfaces – such as a cymbal – a new sound synthesis system developed by Stanford researchers can automatically render realistic sound for computer animations. (Image credit: Timothy Langlois, Doug L. James, Ante Qu and Jui-Hsien Wang)

It does seem like we’re synthesizing the world around us, eh?

The SIGGRAPH 2018 art gallery

Here’s what SIGGRAPH had to say about its 2018 art gallery in Vancouver and the themes for the conference and the gallery (from a May 18, 2018 Association for Computing Machinery news release on globalnewswire.com, also on this 2018 SIGGRAPH webpage),

SIGGRAPH 2018, the world’s leading showcase of digital art created using computer graphics and interactive techniques, will present a special Art Gallery, entitled “Origins,” and historic Art Papers in Vancouver, B.C. The 45th SIGGRAPH conference will take place 12–16 August at the Vancouver Convention Centre. The programs will also honor the generations of creators that have come before through a special, 50th anniversary edition of the Leonardo journal. To register for the conference, visit S2018.SIGGRAPH.ORG.

The SIGGRAPH 2018 ART GALLERY is a curated exhibition, conceived as a dialogical space that enables the viewer to reflect on man’s diverse cultural values and rituals through contemporary creative practices. Building upon an exciting and eclectic selection of creative practices mediated through technologies that represent the sophistication of our times, the SIGGRAPH 2018 Art Gallery will embrace the narratives of the indigenous communities based near Vancouver and throughout Canada as a source of inspiration. The exhibition will feature contemporary media artworks, art pieces by indigenous communities, and other traces of technologically mediated ludic practices.

Andrés Burbano, SIGGRAPH 2018 Art Gallery chair and professor at Universidad de los Andes, said, “The Art Gallery aims to articulate myth and technology, science and art, the deep past and the computational present, and will coalesce around a theme of ‘Origins.’ Media and technological creative expressions will explore principles such as the origins of the cosmos, the origins of life, the origins of human presence, the origins of the occupation of territories in the Americas, and the origins of people living in the vast territories of the Arctic.”

He continued, “The venue [in Vancouver] hopes to rekindle the original spark that ignited the collaborative spirit of the SIGGRAPH community of engineers, scientists, and artists, who came together to create the very first conference in the early 1970s.”

Highlights from the 2018 Art Gallery include:

Transformation Mask (Canada) [Technology Based]
Shawn Hunt, independent; and Microsoft Garage: Andy Klein, Robert Butterworth, Jonathan Cobb, Jeremy Kersey, Stacey Mulcahy, Brendan O’Rourke, Brent Silk, and Julia Taylor-Hell, Microsoft Vancouver

TRANSFORMATION MASK is an interactive installation that features the Microsoft HoloLens. It utilizes electronics and mechanical engineering to express a physical and digital transformation. Participants are immersed in spatial sounds and holographic visuals.

Somnium (U.S.) [Science Based]
Marko Peljhan, Danny Bazo, and Karl Yerkes, University of California, Santa Barbara

Somnium is a cybernetic installation that provides visitors with the ability to sensorily, cognitively, and emotionally contemplate and experience exoplanetary discoveries, their macro and micro dimensions, and the potential for life in our Galaxy. Some might call it a “space telescope.”

Ernest Edmonds Retrospective – Art Systems 1968-2018 (United Kingdom) [History Based]
Ernest Edmonds, De Montfort University

Celebrating one of the pioneers of computer graphics-based art since the early 1970s, this Ernest Edmonds career retrospective will showcase snapshots of Edmonds’ work as it developed over the years. With one piece from each decade, the retrospective will also demonstrate how vital the Leonardo journal has been throughout the 50-year journey.

In addition to the works above, the Art Gallery will feature pieces from notable female artists Ozge Samanci, Ruth West, and Nicole L’Hullier. For more information about the Edmonds retrospective, read THIS POST ON THE ACM SIGGRAPH BLOG.

The SIGGRAPH 2018 ART PAPERS program is designed to feature research from artists, scientists, theorists, technologists, historians, and more in one of four categories: project description, theory/criticism, methods, or history. The chosen work was selected by an international jury of scholars, artists, and immersive technology developers.

To celebrate the 50th anniversary of LEONARDO (MIT Press), and 10 years of its annual SIGGRAPH issue, SIGGRAPH 2018 is pleased to announce a special anniversary edition of the journal, which will feature the 2018 art papers. For 50 years, Leonardo has been the definitive publication for artist-academics. To learn more about the relationship between SIGGRAPH and the journal, listen to THIS EPISODE OF THE SIGGRAPH SPOTLIGHT PODCAST.

“In order to encourage a wider range of topics, we introduced a new submission type, short papers. This enabled us to accept more content than in previous years. Additionally, for the first time, we will introduce sessions that integrate the Art Gallery artist talks with Art Papers talks, promoting richer connections between these two creative communities,” said Angus Forbes, SIGGRAPH 2018 Art Papers chair and professor at University of California, Santa Cruz.

Art Papers highlights include:

Alienating the Familiar with CGI: A Recipe for Making a Full CGI Art House Animated Feature [Long]
Alex Counsell and Paul Charisse, University of Portsmouth

This paper explores the process of making and funding an art house feature film using full CGI in a marketplace where this has never been attempted. It explores cutting-edge technology and production approaches, as well as routes to successful fundraising.

Augmented Fauna and Glass Mutations: A Dialogue Between Material and Technique in Glassblowing and 3D Printing [Long]
Tobias Klein, City University of Hong Kong

The two presented artworks, “Augmented Fauna” and “Glass Mutations,” were created during an artist residence at the PILCHUCK GLASS SCHOOL. They are examples of the qualities and methods established through a synthesis between digital workflows and traditional craft processes and thus formulate the notion of digital craftsmanship.

Inhabitat: An Imaginary Ecosystem in a Children’s Science Museum [Short]
Graham Wakefield, York University, and Haru Hyunkyung Ji, OCAD University

“Inhabitat” is a mixed reality artwork in which participants become part of an imaginary ecology through three simultaneous perspectives of scale and agency; three distinct ways to see with other eyes. This imaginary world was exhibited at a children’s science museum for five months, using an interactive projection-augmented sculpture, a large screen and speaker array, and a virtual reality head-mounted display.

What’s the what?

My father used to say that and I always assumed it meant summarize the high points, if you need to, and get to the point—fast. In that spirit, I am both fascinated and mildly appalled. The virtual, mixed, and augmented reality technologies, as well as the others being featured at SIGGRAPH 2018, are wondrous in many ways but it seems we are coming ever closer to a world where we no longer interact with nature or other humans directly (see my August 10, 2018 posting about the ‘extinction of experience’ for research that encourages more direct interaction with nature). I realize that SIGGRAPH is intended as a primarily technical experience but I think a little more content questioning these technologies and their applications (social implications) might be in order. That’s often the artist’s role but I can’t see anything in the art gallery descriptions that hints at any sort of fundamental critique.

Game design for scientific participation

Thanks to David Bruggeman for his Feb. 13, 2014 post (on the Pasco Phronesis blog) about a US National Science Foundation (NSF) webinar on designing scientific games, in which he embeds a video of a mobile game from Cancer Research UK. (His blog is well worth checking out for the information on science entertainment, as well as his main topic, science policy.)

The upcoming NSF webinar is titled ‘From World of Warcraft to Fold.it and Beyond; The Opportunities & Challenges to Designing Games for Scientific Participation’ and will be held on Friday, Feb. 21, 2014 (1 hr.),

February 21, 2014 12:00 PM  to  February 21, 2014 1:00 PM
NSF Room 110

Designing Disruptive Learning Technologies Webinar Series

Kurt Squire – University of Wisconsin-Madison

Abstract:

Digital games like World of Warcraft and Fold.it are compelling examples of how technology can engage thousands of learners in solving complex problems — even in making scientific discoveries. But what does it take to foster learning in the midst of such enthusiastic engagement? In this presentation, I will draw from a decade of research in how people learn and interact in online gaming environments and present findings from our work designing online environments for science learning. I will present pedagogical models for integrating gaming technologies into classrooms and research exploring how these games work for learning. Both the potential of games for science learning and challenges for leveraging gaming technologies at scale will be presented, as well as implications for further research on how people learn.

Bio:

Kurt Squire is a Romnes Professor in Digital Media in Curriculum and Instruction at the University of Wisconsin-Madison and Director of the Games+Learning+Society Theme at the Wisconsin Institute for Discovery. Squire is also a co-founder and Vice President of Research for the Learning Games Network, a non-profit network expanding the role of games and learning. Squire is an internationally recognized leader in digital media in technology and has delivered dozens of invited addresses across Europe, Asia, and North America and written over 75 scholarly articles on digital media and education. Squire’s research investigates the potential of digital game-based technologies for learning, and has resulted in several software projects including ARIS, Virulent, Citizen Science, among others. Squire is the recipient of an NSF CAREER grant, and grants from the NSF, Gates Foundation, MacArthur Foundation, AMD Foundation, Microsoft, Data Recognition Corporation and others. Squire was also a co-founder of Joystick101.org, and for several years wrote a column with Henry Jenkins for Computer Games magazine.

Webinar

The Webinar will be held from 12:00pm to 1:00pm Eastern Time on Friday, February 21, 2014.

Please register at https://nsf.webex.com/nsf/j.php?ED=239652927&RG=1&UID=0&RT=MiMxMQ%3D%3D  by 11:59pm Eastern Time on Thursday, February 20, 2014.

After your registration is accepted, you will receive an email with a URL to join the meeting. Please be sure to join a few minutes before the start of the webinar. This system does not establish a voice connection on your computer; instead, your acceptance message will have a toll-free phone number that you will be prompted to call after joining. In the event the number of requests exceeds the capacity, some requests may have to be denied.

This event is part of Webinars/Webcasts.

Meeting Type
Webcast

Contacts
Natalie Harr, (703) 292-8930, nharr@nsf.gov

Good luck with your registration.  This webinar does seem to be open internationally although I imagine priority will be given to registrants located in the US.

2013 International Science & Engineering Visualization Challenge Winners

Thanks to a RT from @coreyspowell I stumbled across a Feb. 7, 2014 article in Science (magazine) describing the 2013 International Science & Engineering Visualization Challenge Winners. I am highlighting a few of the entries here but there are more images in the article and a slideshow.

First Place: Illustration

Cortex in Metallic Pastels. Credit: Greg Dunn and Brian Edwards, Greg Dunn Design, Philadelphia, Pennsylvania; Marty Saggese, Society for Neuroscience, Washington, D.C.; Tracy Bale, University of Pennsylvania, Philadelphia; Rick Huganir, Johns Hopkins University, Baltimore, Maryland

From the article, a description of Greg Dunn and his work,

With a Ph.D. in neuroscience and a love of Asian art, it may have been inevitable that Greg Dunn would combine them to create sparse, striking illustrations of the brain. “It was a perfect synthesis of my interests,” Dunn says.

Cortex in Metallic Pastels represents a stylized section of the cerebral cortex, in which axons, dendrites, and other features create a scene reminiscent of a copse of silver birch at twilight. An accurate depiction of a slice of cerebral cortex would be a confusing mess, Dunn says, so he thins out the forest of cells, revealing the delicate branching structure of each neuron.

Dunn blows pigments across the canvas to create the neurons and highlights some of them in gold leaf and palladium, a technique he is keen to develop further.

“My eventual goal is to start an art-science lab,” he says. It would bring students of art and science together to develop new artistic techniques. He is already using lithography to give each neuron in his paintings a different angle of reflectance. “As you walk around, different neurons appear and disappear, so you can pack it with information,” he says.

People’s Choice:  Games & Apps

Meta!Blast: The Leaf. Credit: Eve Syrkin Wurtele, William Schneller, Paul Klippel, Greg Hanes, Andrew Navratil, and Diane Bassham, Iowa State University, Ames

More from the article,

“Most people don’t expect a whole ecosystem right on the leaf surface,” says Eve Syrkin Wurtele, a plant biologist at Iowa State University. Meta!Blast: The Leaf, the game that Wurtele and her team created, lets high school students pilot a miniature bioship across this strange landscape, which features nematodes and a lumbering tardigrade. They can dive into individual cells and zoom around a chloroplast, activating photosynthesis with their ship’s search lamp. Pilots can also scan each organelle they encounter to bring up more information about it from the ship’s BioLog—a neat way to put plant biology at the heart of an interactive gaming environment.

This is a second recognition for Meta!Blast, which won an Honorable Mention in the 2011 visualization challenge for a version limited to the inside of a plant cell.

The Meta!Blast website homepage describes the game,

The last remaining plant cell in existence is dying. An expert team of plant scientists have inexplicably disappeared. Can you rescue the lost team, discover what is killing the plant, and save the world?

Meta!Blast is a real-time 3D action-adventure game that puts you in the pilot’s seat. Shrink down to microscopic size and explore the vivid, dynamic world of a soybean plant cell spinning out of control. Interact with numerous characters, fight off plant pathogens, and discover how important plants are to the survival of the human race.

Enjoy!

A chance to game the future Sept. 25 and 26, 2013 starting 9 am PDT

Thanks to David Bruggeman at his Pasco Phronesis blog (his Sept. 20, 2013 posting) for featuring a 36-hour conversation/game (which is recruiting players/participants). It is called Innovate 2038 and you do have to pre-register for this game. For anyone who likes a little more information before jumping in to join, here’s what the Innovate 2038 About page has to offer,

About Innovate2038

The traditional paths to research and technology innovation will no longer work for the critical challenges and new opportunities of 2038.

Increasingly constrained resources, the rise of mega-cities, and rapidly shifting developments in business processes, regulations, and consumer sentiment will present epic challenges to business as usual.

At the same time, opportunities will abound. Emerging fields like 3D-printing and additive manufacturing, synthetic biology, and data modeling will catalyze the next generation of products, services, and markets—if research and innovation can lead the way.

But managing all of these research and innovation efforts will require new tools and technologies, new skills in project and talent management, new players and collaborations, and ultimately a collective re-imagining of the value proposition of research to society.

Innovate2038 is a 36-hour global conversation based on IRI’s extensive IRI2038 research project to uncover new ideas and new strategies that can reinvent the very concept of R&D and technology innovation management for the 21st century.

On Sept 25 & 26, 2013, Innovate2038 will take place in corporations, labs, classrooms, but also hacker-spaces, online innovation communities, and networks of researchers and makers, in countries around the world.

Innovate2038 will bring together current leading voices with emerging and below-the-radar new players that will be increasingly important to the practice of research and innovation.

The platform to support the conversation is itself a signal of the future—a cutting-edge crowdsourced game called Foresight Engine, developed and facilitated by the Institute for the Future. It’s designed to spark new ideas and inspire collaborations among hundreds of people around the world. [emphasis mine]

In as little as five minutes, you can log on to share rapid-fire micro-contributions that will help shape the future of research and innovation heading out to 2038.
For a day and half, you can compete to win points, achieve awards, and gain the recognition of the leading thinkers in technology management today.

Pre-register right now as a ‘game insider’ to be the first to know about the game leading up to the Sept 25 launch.

David notes that this ’36-hour conversation/game is part of a larger project, from his Sept. 20, 2013 Pasco Phronesis posting (Note: Links have been removed),

… This is part of the Industrial Research Institute’s project on 2038 Future, which focuses on the art and science of research and development management.  That project has involved possible future scenarios, retrospective examinations of research management, and scanning the current environment.  The game engine was developed by the Institute for the Future, and is called the Foresight Engine.  The basics of the engine encourage participants to contribute short ideas, with points going to those ideas that get approved and/or built on by other participants.

Here’s more about the  Industrial Research Institute (IRI) from their History webpage,

Fourteen companies comprised the original membership of the Institute when it was formed in 1938, under the auspices of the [US] National Research Council (NRC). Four of these companies retain membership today: Colgate-Palmolive Company, Procter & Gamble Company, Hercules Powder Company (now Ashland, Inc.), and UOP, LLC, formerly known as Universal Oil Products (acquired by Honeywell). Four of the first five presidents were from the six charter-member-company category.

Maurice Holland, then Director of the NRC Division of Engineering, was largely responsible for bringing together about 50 representatives from industry, government, and universities to an initial organizational meeting in February 1938 in New York City. The Institute was an integral part of the National Research Council until 1945, when it separated to become a non-profit membership corporation in the State of New York. However, association with the Council continues unbroken.

At the founding meeting, several speakers stressed the need for an association of research directors–something different from the usual technical society–and that the benefits to be derived would depend on the extent of cooperation by its members. The greatest advantage, they said, would come through personal contacts with members of the group–still a major characteristic of IRI.

In more recent years, the activities of the Association have broadened considerably. IRI now offers services to the full range of R&D and innovation professionals in the United States and abroad.

I went exploring and found this about the game developer, the Institute for the Future (IFTF), on the website’s Who We Are page (Note: Links have been removed),

IFTF is an independent, non-profit research organization with a 45-year track record of helping all kinds of organizations make the futures they want. Our core research staff and creative design studio work together to provide practical foresight for a world undergoing rapid change.

….

Here’s more about the Foresight Engine , the “cutting-edge crowdsourced game,” from the IFTF website’s Collaborative Forecasting Games webpage,

Collaborative Forecasting Games: a crowd’s view of the future

Collaborative forecasting games engage a large and diverse group of people—potentially from around the world—to imagine futures that might go unnoticed by a team of experts. These crowds may include the general public, a targeted sector of the public, or the entire staff of a private organization. And the games themselves can range from futures brainstorming to virtual innovation gameboards and even rich narrative platforms for telling important stories about the future.

Foresight Engine

IFTF has a collaborative forecasting platform called Foresight Engine that makes it easy to set up games without a lot of investment in game design. In the tradition of brainstorming, the platform invites people to play positive or critical ideas about the future and then to build on these ideas to form chains of discussion—complete with points, awards, and achievements for winning ideas. While the focus of the platform is on Twitter-length ideas of 140 characters or less, a Foresight Engine game does much more than harvest innovative ideas. It builds a literacy among players about the future issues addressed by the game, and it also provides a window on the crowd’s level of understanding of complex futures—laying the foundation for future literacy building. It shows who inspires the greatest following and often surfaces potential thought leaders.
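To make the mechanics concrete, here is a toy sketch of the idea-chain structure that description implies. The Foresight Engine’s actual code is not public, so the class and field names below are purely hypothetical,

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class IdeaCard:
    """One Twitter-length contribution, per the platform description above."""
    author: str
    text: str                            # 140 characters or less
    kind: str                            # a "positive" or "critical" idea
    parent: Optional["IdeaCard"] = None  # the card this one builds on, if any
    children: List["IdeaCard"] = field(default_factory=list)
    points: int = 0

    def build_on(self, author: str, text: str, kind: str) -> "IdeaCard":
        """Extend the discussion chain; the originating card earns a point."""
        if len(text) > 140:
            raise ValueError("cards are limited to 140 characters")
        child = IdeaCard(author=author, text=text, kind=kind, parent=self)
        self.children.append(child)
        self.points += 1
        return child
```

Chains of such cards, plus the running points tally, would give facilitators the window on the crowd’s level of understanding that the IFTF describes.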

It’s always interesting to dig into an organization’s history (from the IFTF’s History page),

The Institute for the Future has 45 years of forecasts on which to reflect. We’re based in California’s Silicon Valley—a community at the crossroads of technological innovation, social experimentation, and global interchange. Founded in 1968 by a group of former RAND Corporation researchers with a grant from the Ford Foundation to take leading-edge research methodologies into the public and business sectors, IFTF is committed to building the future by understanding it deeply.

I wonder if the Innovate 2038 game/conversation will take place in any language other than English? In any event, I just tried to register and couldn’t. I hope this is a problem on my end rather than the organizers’ as I know how devastating it can be to have your project encounter this kind of roadblock just before launching.

Internship at Science and Technology Innovation Program in Washington, DC

The Woodrow Wilson International Center for Scholars is advertising for a media-focused intern for Spring 2013. From the Dec. 12, 2012 notice,

The Science and Technology Innovation Program (STIP) at the Woodrow Wilson International Center for Scholars is currently seeking a media-focused intern for Spring 2013. The mission of STIP is to explore the scientific and technological frontier, stimulating discovery and bringing new tools to bear on public policy challenges that emerge as science advances.

Specific project areas include: nanotechnology, synthetic biology, Do-It-Yourself biology, the use of social media in disaster response, serious games, geoengineering, and additive manufacturing. Interns will work closely with a small, interdisciplinary team.

  • Applicants should be a graduate or undergraduate student with a background or strong interest in journalism, science/technology policy, public policy and/or policy analysis.
  • Solid reporting, writing and computer skills are a must. Experience with video/audio editing and new media is strongly desired.
  • Responsibilities include assisting with the website/social media, writing and editing, helping produce and edit short-form videos, staffing events and other duties as assigned.
  • Applicants should be creative, ready to engage in a wide variety of tasks and able to work independently and with a team in a fast-paced environment.
  • The internship is expected to last for 3-5 months at 15-20 hours per week. Scheduling is flexible.
  • Please include 2-3 writing samples/clips and links to any video/documentary work.
  • Compensation may be available.

To apply, please submit a cover letter, resume, and brief writing sample to stipintern@wilsoncenter.org with SPRING 2013 INTERN in the subject line.

There doesn’t seem to be any additional information about the internship on the Wilson Center website but you can check for yourself here. Good luck!

RNA (ribonucleic acid) video game

I am a great fan of Foldit, a protein-folding game I have mentioned several times here (my first posting about Foldit was Aug. 6, 2010) and now, via the Foresight Institute’s July 16, 2012 blog posting, I have discovered an RNA video game (Note: I have removed links),

As we pointed out a few months ago, the greater complexity of folding rules for RNA compared to its chemical cousin DNA gives RNA a greater variety of compact, three-dimensional shapes and a different set of potential functions than is the case with DNA, and this gives RNA nanotechnology a different set of advantages compared to DNA nanotechnology … Proteins have even more complex folding rules and an even greater variety of structures and functions. We also noted here that online gamers playing Foldit topped scientists in redesigning a protein to achieve a novel enzymatic activity that might be especially useful in developing molecular building blocks for molecular manufacturing. Now KurzweilAI.net brings news of an online game that allows players to design RNA molecules …

Here’s more from the KurzweilAI.net June 26, 2012 posting about the new RNA game EteRNA,

EteRNA, an online game with more than 38,000 registered users, allows players to design molecules of ribonucleic acid — RNA — that have the power to build proteins or regulate genes.

EteRNA players manipulate nucleotides, the fundamental building blocks of RNA, to coax molecules into shapes specified by the game.

Those shapes represent how RNA appears in nature while it goes about its work as one of life’s most essential ingredients.

EteRNA was developed by scientists at Stanford and Carnegie Mellon universities, who use the designs created by players to decipher how real RNA works. The game is a direct descendant of Foldit — another science crowdsourcing tool disguised as entertainment — which gets players to help figure out the folding structures of proteins.

Here’s how the EteRNA folks describe this game (from the About EteRNA page),

By playing EteRNA, you will participate in creating the first large-scale library of synthetic RNA designs. Your efforts will help reveal new principles for designing RNA-based switches and nanomachines — new systems for seeking and eventually controlling living cells and disease-causing viruses. By interacting with thousands of players and learning from real experimental feedback, you will be pioneering a completely new way to do science. Join the global laboratory!

The About EteRNA webpage also offers a discussion about RNA,

RNA is often called the “Dark Matter of Biology.” While originally thought to be an unstable cousin of DNA, recent discoveries have shown that RNA can do amazing things. RNAs play key roles in the fundamental processes of life and disease, from protein synthesis and HIV replication, to cellular control. However, the full biological and medical implications of these discoveries are still being worked out.

RNA is made of four nucleotides (A, C, G, and U, which stand for adenine, cytosine, guanine, and uracil). Chemically, each of these building blocks is made of atoms of carbon, oxygen, nitrogen, phosphorus, and hydrogen. When you design RNAs with EteRNA, you’re really creating a chain of these nucleotides.

RNA Nucleotides (from the About EteRNA webpage)
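As a toy illustration of creating ‘a chain of these nucleotides’ (my own code, not EteRNA’s), an RNA design really is just a string over the four-letter alphabet, and folding is driven by which bases can pair,

```python
# Watson-Crick pairs (A-U, G-C) plus the G-U "wobble" pair seen in real RNA.
RNA_ALPHABET = set("ACGU")
CAN_PAIR = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"),
            ("G", "U"), ("U", "G")}

def is_valid_rna(seq: str) -> bool:
    """A design is simply a string over the four-letter RNA alphabet."""
    return set(seq.upper()) <= RNA_ALPHABET

def can_pair(base1: str, base2: str) -> bool:
    """Base pairing is what pulls an RNA chain into its folded shape."""
    return (base1.upper(), base2.upper()) in CAN_PAIR

print(is_valid_rna("GGGAAACCC"))  # True: a tiny sequence that can fold into a hairpin
print(can_pair("G", "C"))         # True
```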

Scientists do not yet understand all of RNA’s roles, but we already know about a large collection of RNAs that are critical for life (see the Thermus Thermophilus image illustrating the following points):

  1. mRNAs are short copies of a cell’s DNA genome that get cut up, pasted, spliced, and otherwise remixed before getting translated into proteins.
  2. rRNA forms the core machinery of an ancient machine, the ribosome. This machine synthesizes the proteins of your cells and all living cells, and is the target of most antibiotics.
  3. miRNAs (microRNAs) are short molecules (about 22 letters) that are used by all complex cells as commands for silencing genes and appear to have roles in cancer, heart disease, and other medical problems.
  4. Riboswitches are ubiquitous in bacteria. They sense all sorts of small molecules that could be food or signals from other bacteria, and turn on or off genes by changing their shapes. These are interesting targets for new antibiotics.
  5. Ribozymes are RNAs that can act as enzymes. They catalyze chemical reactions like protein synthesis and RNA splicing, and provide evidence of RNA’s dominance in a primordial stage of Life’s evolution.
  6. RNA viruses, like Hepatitis C, poliovirus, and HIV, are very large RNAs coated with proteins.
  7. And much much more… shRNA, piRNA, snRNA, and other new classes of important RNAs are being discovered every year.

Thermus Thermophilus – Large Subunit Ribosomal RNA
Source: Center for Molecular Biology (downloaded from the About EteRNA webpage)

I do wonder about the wordplay EteRNA/eternal. Are these scientists trying to tell us something?

God from the machine: Deus ex machina and augmentation

Wherever you go, there it is: ancient Greece. Deus Ex, a game series from Eidos Montréal, is likely referencing ‘deus ex machina’, a term applied to a theatrical device (in both senses of the word) attributed to  playwrights of ancient Greece. (For anyone who’s unfamiliar with the term, at the end of a play, all of the conflicts would be resolved by a god descending from the heavens. The term refers both to the plot device itself and to the mechanical device used to lower the ‘god’.)

The latest game in the series, Deus Ex: Human Revolution, a role-playing shooter, will be released August 23, 2011. From the August 16, 2011 article by Susan Karlin for Fast Company,

The result—Deus Ex: Human Revolution, a role-playing shooter that comes out August 23–extrapolates MicroTransponder, prosthetics, robotics, and other current augmentation technology into a vision of how technologically enhanced people might gain superhuman abilities and at what cost.

… “We built a timeline that traces the history of augmentation, creating new things, and predicting how would it get out into society. We wanted to ground it in today, and make something where everyone could say, ‘I can see the world going that way.'” [Mary DeMarle, Human Revolution’s lead writer]

Human Revolution, although the third in the series, is a prequel to the original Deus Ex which took place 25 years after Human Revolution.

I’m glad to see games that bring up interesting philosophical questions and possible social impacts of emerging technologies along with the action. In a February 3, 2011 interview with Mary DeMarle, Quintin Smith of Rock, Paper, Shotgun posed these questions,

RPS: Finally, with anti-augmentation groups featuring in Human Revolution, I was just wondering what your own opinions on human augmentation and human bioengineering are.

MD: Oh, gosh. Well I have to tell you that the joke on the team is that for the duration of this story I’d be supporting the anti-technology view, because most people on the team wouldn’t be anti-technology, and it’d help me make the game more human, you know? And now that the project’s over I bought my first iPad, and I have to admit I’m suddenly like “You know, if I could get one of those InfoLinks in my head, it’d be really useful.”

But you know, all of this stuff is already out there. We already have people putting cameras in their eyes to improve their vision. [emphasis mine] The technology’s there, we’re just not aware of it. As far as our team’s technology expert is concerned, human augmentation’s been going on for decades. If you look at all the sports controversy regarding drugs, that is augmentation. It’s already happening.

RPS: But you have no qualms with our using technology to make ourselves more than we can be?

MD: From my perspective, I think mankind will always try to be more than he is. That’s part of being human. But I do admit we have to be careful about how we do it.

In my February 2, 2010 posting (scroll down about 1/2 way), I featured a quote that resonates with DeMarle’s comments about humans trying to be more,

“I don’t think I would have said this if it had never happened,” says Bailey, referring to the accident that tore off his pinkie, ring, and middle fingers. “But I told Touch Bionics I’d cut the rest of my hand off if I could make all five of my fingers robotic.”

Bailey went on to say that having machinery incorporated into his body made him feel “above human”.

As for cameras being implanted in eyes to improve vision, I would be delighted to hear from anyone who has information about this. The only project I could find in my search was EyeBorg, a project by a one-eyed Canadian filmmaker who was planning to have a video camera implanted into his eye socket to record images. From the About the Project page,

Take a one eyed film maker, an unemployed engineer, and a vision for something that’s never been done before and you have yourself the EyeBorg Project. Rob Spence and Kosta Grammatis are trying to make history by embedding a video camera and a transmitter in a prosthetic eye. That eye is going in Rob’s eye socket, and will record the world from a perspective that’s never been seen before.

There are more details about the EyeBorg project in a June 11, 2010 posting by Tim Hornyak for the Automaton blog (on the IEEE [Institute of Electrical and Electronics Engineers] website),

When Canadian filmmaker Rob Spence was a kid, he would peer through the bionic eye of his Six Million Dollar action figure. After a shooting accident left him partially blind, he decided to create his own electronic eye. Now he calls himself Eyeborg.

Spence’s bionic eye contains a battery-powered, wireless video camera. Not only can he record everything he sees just by looking around, but soon people will be able to log on to his video feed and view the world through his right eye.

I don’t know how the Eyeborg project is proceeding as there haven’t been any updates on the site since August 25, 2010.

While I wish Quintin Smith had asked for more details about the science information DeMarle was passing on in the February 3, 2011 interview, I think it’s interesting to note that information about science and technology comes to us in many ways: advertisements, popular television programmes, comic books, interviews, and games, as well as formal public science outreach programmes through museums and educational institutions.

ETA August 19, 2011: I found some information about visual prosthetics at the European Commission’s Future and Emerging Technologies (FET) website, We can rebuild you page featuring a TEDxVienna November 2010 talk by electrical engineer Grégoire Cosendai of the Swiss Federal Institute of Technology. He doesn’t mention the prosthetics until approximately 13 minutes, 25 seconds into the talk. The work is being done to help people with retinitis pigmentosa, a condition that is currently incurable, but it may have implications for other conditions. There are 30 people worldwide in a clinical trial testing a retinal implant that requires the person to wear special glasses containing a camera and an antenna. For Star Trek fans, this seems similar to Geordi LaForge‘s special glasses.

ETA Sept. 13, 2011: Better late than never, here’s an excerpt from Dexter Johnson’s Sept. 2, 2011 posting (on his Nanoclast blog at the Institute of Electrical and Electronics Engineers [IEEE] website) about a nano retina project,

The Israel-based company [Nano Retina] is a joint venture between Rainbow Medical and Zyvex Labs, the latter being well known for its work in nanotechnology and its founder Jim Von Ehr, who has been a strong proponent of molecular mechanosynthesis.

It’s well worth contrasting the information in the company video that Dexter provides with the information in the FET video mentioned in the Aug. 19, 2011 update preceding this one. The company makes a vastly more optimistic claim for the vision these implants will provide than one would expect after viewing the FET video’s information about the clinical trials currently taking place for another, similar (to me) system.

Science education for children in Europe, so what’s happening in BC?

I had been informally collecting information about children’s science education for a few months when, yesterday, there was a sudden explosion of articles (well, there were three) on the subject.

First off, the European Commission launched a science game titled Power of Research. From the March 2, 2011 news item on Nanowerk,

A new strategy browser game – the “Power of research” – is officially launched. Supported by the European Commission, “Power of Research” has been developed to inspire young Europeans to pursue scientific careers and disseminate interesting up-to-date scientific information. Players assume the role of scientists working in a virtual research environment that replicates the situations that scientists have to deal with in the real world. The game, which can be played for free under www.powerofresearch.eu, is expected to create a large community of more than 100,000 players who will be able to communicate in real time via a state of the art interface.

They really do mean it when they say they’re replicating real-life situations, but the focus is on medical science research, and I don’t think the game title makes that clear. Yes, there are many similarities between the situations that scientists of any stripe encounter in their labs, but there are also some significant differences. In any event,

In “Power of Research” players can engage in “virtual” health research projects, by performing microscopy, protein isolation and DNA experiments, publishing research results, participating in conferences, managing high tech equipment and staff or request funding – all tasks of real researchers. The decisive game elements are communication, collaboration and competition: players can compete against each other in real time or collaborate to become a successful virtual researcher, win scientific awards or become the leader of a research institute.

The game connects the players to the real world. It is based on up-to-date science content and players work on real world research topics inspired by the FP7 health research programme that will be regularly updated. Popular science events, real research institutes, universities and European health research projects form part of the game. Players also have access to a knowledge platform, where they can search in a virtual library, zoom-into real scientific images and learn more about Nobel Prize laureates. European science institutions and hospitals will have the possibility to contribute to the game and provide details about their research.

I like the immersiveness and the game aspect of this project very much. I do wish they were a little clearer about exactly what kind of research the player will engage in. From the Power of Research About webpage,

Your researcher

* Become a famous researcher in “Power of Research”

* Research different topics through exciting research projects

* Play together with your friends and other players from all over the world

* Earn reputation, win science prizes and more …

* Gain special skills and knowledge in 9 different main research areas (like Brain, Paediatrics, …)

* Become a leader in your institute and lead it to international ranks

* The game is 100% free and needs no prior knowledge

Meanwhile, there are more projects. From the March 2, 2011 news item on physorg.com,

Children who are taught how to think and act like scientists develop a clearer understanding of the subject, a study has shown.

The research project led by The University of Nottingham and The Open University has shown that school children who took the lead in investigating science topics of interest to them gained an understanding of good scientific practice.

The study shows that this method of ‘personal inquiry’ could be used to help children develop the skills needed to weigh up misinformation in the media, understand the impact of science and technology on everyday life and help them to make better personal decisions on issues including diet, health and their own effect on the environment.

The three-year project involved providing pupils aged 11 to 14 at Hadden Park High School in Bilborough, Nottingham, and Oakgrove School in Milton Keynes with a new computer toolkit named nQuire, now available as a free download for teachers and schools.

The pupils were given wide themes for their studies but were asked to decide on more specific topics that were of interest to them, including heart rate and fitness, micro climates, healthy eating, sustainability and the effect of noise pollution on birds.

The flexible nature of the toolkit meant that children could become “science investigators”, starting an inquiry in the classroom then collecting data in the playground, at a local nature reserve, or even at home, then sharing and analysing their findings back in class.

Immersive and engaging, yes? I have gone to the nQuire website and, while I haven’t downloaded the software, I did successfully log in to the demonstration; in other words, the demonstration is not limited to a UK-based audience.

Meanwhile, there’s another project, but it seems to be different. It’s spelled differently, INQUIRE, and the focus is on the teachers. From a March 2, 2011 news item on Science Daily,

Thousands of schoolchildren will soon be asking the questions when inquiry-based learning comes to science classrooms across Europe, turning the traditional model of science teaching on its head. The pan-European INQUIRE programme is an exciting new teacher-training initiative delivered by a seventeen-strong consortium of botanic gardens, natural history museums, universities and NGOs.

Coordinated by Innsbruck University Botanic Garden, with support from London-based Botanic Gardens Conservation International (BGCI), INQUIRE is a practical, one-year, continual professional development (CPD) course targeted at qualified teachers working in eleven European countries. Its focus on inquiry-based science education (IBSE) reflects a consensus in the science education community that IBSE methods are more effective than current teaching practices.

Designed to reflect how students actually learn, IBSE also engages them in the process of scientific inquiry. Increasingly it is seen as key to developing their scientific literacy, enhancing their understanding of scientific concepts and heightening their appreciation of how science works. Whereas traditional teaching methods have failed to engage many students, especially in developed countries, IBSE offers outstanding opportunities for effective and enjoyable teaching and learning.

Biodiversity loss and global climate change, among the major scientific as well as political challenges of our age, are core INQUIRE concerns.

That final sentence is a little puzzling, but I believe they’re describing their scientific focus.

My favourite of these projects is one I came across in December 2010 when children from a school in England had a research paper about bees published in the Royal Society’s Biology Letters. You can still access the paper (according to another blogger, Ed Yong, open access was only supposed to last until the new year in 2011, but they must have changed their minds). The paper is titled Blackawton bees and lists 30 authors.

1. P. S. Blackawton,
2. S. Airzee,
3. A. Allen,
4. S. Baker,
5. A. Berrow,
6. C. Blair,
7. M. Churchill,
8. J. Coles,
9. R. F.-J. Cumming,
10. L. Fraquelli,
11. C. Hackford,
12. A. Hinton Mellor,
13. M. Hutchcroft,
14. B. Ireland,
15. D. Jewsbury,
16. A. Littlejohns,
17. G. M. Littlejohns,
18. M. Lotto,
19. J. McKeown,
20. A. O’Toole,
21. H. Richards,
22. L. Robbins-Davey,
23. S. Roblyn,
24. H. Rodwell-Lynn,
25. D. Schenck,
26. J. Springer,
27. A. Wishy,
28. T. Rodwell-Lynn,
29. D. Strudwick and
30. R. B. Lotto

This is from the introduction to the paper,

(a) Once upon a time …

People think that humans are the smartest of animals, and most people do not think about other animals as being smart, or at least think that they are not as smart as humans. Knowing that other animals are as smart as us means we can appreciate them more, which could also help us to help them.

If you never read another science paper in your life, read this one. For the back story on this project, here’s Ed Yong on his Not Exactly Rocket Science blog (a Discover blog) in a December 21, 2010 posting,

“We also discovered that science is cool and fun because you get to do stuff that no one has ever done before.”

This is the conclusion of a new paper published in Biology Letters, a high-powered journal from the UK’s prestigious Royal Society. If its tone seems unusual, that’s because its authors are children from Blackawton Primary School in Devon, England. Aged between 8 and 10, the 25 children have just become the youngest scientists to ever be published in a Royal Society journal.

Their paper, based on fieldwork carried out in a local churchyard, describes how bumblebees can learn which flowers to forage from with more flexibility than anyone had thought. It’s the culmination of a project called ‘i, scientist’, designed to get students to actually carry out scientific research themselves. The kids received some support from Beau Lotto, a neuroscientist at UCL [University College London], and David Strudwick, Blackawton’s head teacher. But the work is all their own.

Yong’s posting features a video of the i, scientist project mentioned in the posting, images, and, of course, the rest of the back story.

As it turns out, one of my favourite science education/engagement projects, I’m a Scientist, Get me out of Here! (based in the UK), is taking place right now. From their website home page,

I’m a Scientist, Get me out of Here! is an award-winning science enrichment and engagement activity, funded by the Wellcome Trust. It takes place online over a two week period. It’s an X Factor-style competition for scientists, where students are the judges. Scientists and students talk online on this website. They both break down barriers, have fun and learn. But only the students get to vote.

You can view the scientist/student conversations by picking a zone: Argon, Chlorine, Potassium, Forensic, Space, or Stem Cell. The questions the kids ask are fascinating, anything from What’s your favourite colour? to Do you think humans will evolve more? The conversations that ensue can be quite stimulating. This project has been mentioned here before in my June 15, 2010 posting, April 13, 2010 posting (scroll down) and March 26, 2010 posting (scroll down).

ETA Mar. 3, 2011: The scientists get quite involved and can go to some lengths to win. Here’s Tom Hartley’s video from last year’s (2010) event,

I find the contrast between these kinds of science education/engagement projects in the UK and Europe and what seems to be a dearth of them in my home province of British Columbia (Canada) striking. I commented previously on BC’s Year of Science initiative, currently taking place, in a Dec. 30, 2010 posting on the lack of science culture in Canada. Again, I applaud the initiative, but I would urge that a less traditional, top-down approach be taken in future. The Europeans and the British are making science fun by engaging in imaginative and substantive ways. Imagine what getting a paper published in a prestigious science journal does for you (regardless of your age)!

Phylo and crowdsourcing science by Canadian researchers

Alex Kawrykow and Gary Roumanis from McGill University (Montréal, Québec) have launched Phylo, a genetics game that anyone can play and that doubles as genetic research. From the article by Neal Ungerleider at the Fast Company website,

The new project, Phylo, was launched by a team at Montreal’s McGill University on November 29. Players are allowed to recognize and sort human genetic code that’s displayed in a Tetris-like format. Phylo, which runs in Flash, allows users to parse random genetic codes or to tackle DNA patterns related to real diseases. In a random game, a user found himself assigned to DNA portions linked to exudative vitreoretinopathy 4 and vesicoureteral reflux 2.

Players choose from a variety of categories such as digestive system diseases, heart diseases, brain diseases and cancer. All the DNA portions in the game are linked to different diseases. Once completed, they are analyzed and stored in a database; McGill intends to use players’ results in the game to optimize future genetic research.
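
As an aside for readers curious about what the puzzle is actually computing: games like this boil down to scoring sequence alignments. Here is a minimal Python sketch of that idea, with match/mismatch/gap values chosen purely for illustration (Phylo’s real scoring scheme will differ); shifting the ‘Tetris blocks’ changes where the gaps fall and, therefore, the score.

# A minimal sketch of the kind of scoring an alignment puzzle optimizes.
# The match/mismatch/gap values below are illustrative assumptions on my
# part, not Phylo's actual scoring scheme.

MATCH, MISMATCH, GAP = 1, -1, -2

def alignment_score(seq_a, seq_b):
    """Score two aligned sequences of equal length; '-' marks a gap."""
    assert len(seq_a) == len(seq_b), "sequences must be aligned (equal length)"
    score = 0
    for a, b in zip(seq_a, seq_b):
        if a == '-' or b == '-':
            score += GAP        # a gap in either sequence costs points
        elif a == b:
            score += MATCH      # matching bases earn points
        else:
            score += MISMATCH   # mismatched bases lose points
    return score

# Moving the blocks around (as a player does) changes where the gaps
# fall, and therefore the score:
print(alignment_score("ACGT-A", "AC-TTA"))   # prints 0
print(alignment_score("ACGTA-", "AC-TTA"))   # prints -2

The premise, as I understand it, is that human pattern recognition can sometimes improve on the arrangements a computer’s heuristics found.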

Phylo also reminds me of Foldit (mentioned in my Aug. 6, 2010 posting), another multiplayer online biology-type game; that time the focus was protein folding. As Ungerleider notes in his article, gaming is being used in education, advertising, and media. I’ll add this: it’s also being used for military training.

I was interested to note that the McGill game was made possible by these agencies,

* Natural Sciences and Engineering Research Council of Canada
* McGill School of Computer Science
* McGill Centre for Bioinformatics
* McGill Computational Structural Biology Group

On a side note, there’s another biology-type game called Phylo; it’s a trading card game designed by David Ng, a professor at the University of British Columbia. From the Phylo trading card game About page,

What is this phylo thing? (Some interesting but relatively specific FAQs here)

Well, it’s an online initiative aimed at creating a Pokemon card type resource but with real creatures on display in full “artistic” wonder. Not only that – but we plan to have the scientific community weigh in to determine the content on such cards, as well as folks who love gaming to try and design interesting ways to use the cards. Then to top it all off, members of the teacher community will participate to see whether these cards have educational merit. Best of all, the hope is that this will all occur in a non-commercial-open-access-open-source-because-basically-this-is-good-for-you-your-children-and-your-planet sort of way.

The Phylo trading card game is in beta (for those not familiar with the term, it means the game is still being tested, so there may be ‘bugs’).

It’s nice to be able to report on some innovative Canadian crowdsourcing science.

Math, science and the movies; research on the African continent; diabetes and mice in Canada; NANO Magazine and Canada; poetry on Bowen Island, April 17, 2010

About 10 years ago, I got interested in how the arts and sciences can inform each other when I was trying to organize an art/science event which never did get off the ground (although I still harbour hopes for it one day). It all came back to me when I read Dave Bruggeman’s (Pasco Phronesis blog) recent post about a new Creative Science Studio opening at the School of Cinematic Arts at the University of Southern California (USC). From Dave’s post,

It [Creative Science Studio] will start this fall at USC, where its School of Cinematic Arts makes heavy use of its proximity to Hollywood, and builds on its history of other projects that use science, technology and entertainment in other areas of research.

The studio will not only help studios improve the depiction of science in the products of their students, faculty and alumni (much like the Science and Entertainment Exchange), but help scientists create entertaining outreach products. In addition, science and engineering topics will be incorporated into the School’s curriculum and be supported in faculty research.

This announcement reminds me a little bit of an IBM/USC initiative in 2008 (from the news item on Nanowerk),

For decades Hollywood has looked to science for inspiration, now IBM researchers are looking to Hollywood for new ideas too.

The entertainment industry has portrayed possible future worlds through science fiction movies – many created by USC’s famous alumni – and IBM wants to tap into that creativity.

At a kickoff event at the USC School of Cinematic Arts, five of IBM’s top scientists met with students and alumni of the school, along with other invitees from the entertainment industry, to “Imagine the World in 2050.” The event is the first phase of an expected collaboration between IBM and USC to explore how combining creative vision and insight with science and technology trends might fuel novel solutions to the most pressing problems and opportunities of our time.

It’s interesting to note that the inspiration is two-way if the two announcements are taken together. The creative people can have access to the latest science and technology work for their pieces and scientists can explore how an idea or solution to a problem that exists in a story might be made real.

I’ve also noted that the first collaboration mentioned suggests that the Creative Science Studio will be able to “help scientists create entertaining outreach products.” My only caveat is that scientists too often believe that science communication means that they do all the communicating while we members of the public are to receive their knowledge enthusiastically and uncritically.

Moving on to the math I mentioned in the headline, there’s an announcement of a new paper that discusses the use of mathematics in cinematic special effects. (I believe that the word cinematic is starting to include games and other media in addition to movies.) From the news item on physorg.com,

The use of mathematics in cinematic special effects is described in the article “Crashing Waves, Awesome Explosions, Turbulent Smoke, and Beyond: Applied Mathematics and Scientific Computing in the Visual Effects Industry”, which will appear in the May 2010 issue of the NOTICES OF THE AMS [American Mathematical Society]. The article was written by three University of California, Los Angeles, mathematicians who have made significant contributions to research in this area: Aleka McAdams, Stanley Osher, and Joseph Teran.

Mathematics provides the language for expressing physical phenomena and their interactions, often in the form of partial differential equations. These equations are usually too complex to be solved exactly, so mathematicians have developed numerical methods and algorithms that can be implemented on computers to obtain approximate solutions. The kinds of approximations needed to, for example, simulate a firestorm, were in the past computationally intractable. With faster computing equipment and more-efficient architectures, such simulations are feasible today—and they drive many of the most spectacular feats in the visual effects industry.
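
To make the ‘numerical methods’ point concrete, here is a toy sketch of my own (not taken from the article): the one-dimensional heat equation u_t = alpha * u_xx solved with an explicit finite-difference scheme. Effects pipelines solve far more complicated equations, e.g. Navier-Stokes for water and smoke, but the basic pattern is the same: replace derivatives with differences on a grid, then step the solution through time.

# Toy illustration (my own, not from the article): diffuse a spike of
# heat along a 1D rod by repeatedly applying a finite-difference update.

alpha = 0.1          # diffusion coefficient
dx, dt = 0.1, 0.01   # grid spacing and time step; dt is kept small enough
                     # (alpha*dt/dx**2 <= 0.5) for the explicit scheme to be stable
n_points, n_steps = 50, 100

u = [0.0] * n_points
u[n_points // 2] = 1.0   # initial condition: all the heat in the middle

for _ in range(n_steps):
    u_next = u[:]
    for i in range(1, n_points - 1):
        # central difference approximates the second spatial derivative u_xx
        u_next[i] = u[i] + alpha * dt / dx**2 * (u[i-1] - 2*u[i] + u[i+1])
    u = u_next

print(max(u))   # the spike has spread out and flattened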

This news item too brought back memories. There was a Canadian animated film, Ryan, which both won an Academy Award and involved significant collaboration between a mathematician and an animator. From the MITACS (Mathematics of Information Technology and Complex Systems) 2005 newsletter, Student Notes:

Karan Singh is an Associate Professor at the University of Toronto, where he co-directs the graphics and HCI lab, DGP. His research interests are in artist-driven interactive graphics encompassing geometric modeling, character animation and non-photorealistic rendering. As a researcher at Alias (1995-1999), he architected facial and character animation tools for Maya (Technical Oscar 2003). He was involved with conceptual design and reverse engineering software at Paraform (Academy award for technical achievement 2001) and currently serves as Chief Scientist for Geometry Systems Inc. He has worked on numerous film and animation projects and most recently was the R+D Director for the Oscar-winning animation Ryan (2005).

Someone at Student Notes (SN) goes on to interview Dr. Singh (here’s an excerpt),

SN: Some materials discussing the film Ryan mention the term “psychorealism”. What does this term mean? What problems does the transition from realism to psychorealism pose for the animator, or the computer graphics designer?

KS: Psychorealism is a term coined by Chris [Landreth, film animator] to refer to the glorious complexity of the human psyche depicted through the visual medium of art and animation. The transition is not a problem; psychorealism is stylistic, just a facet of the look and feel of an animation. The challenge lies in the choice and execution of the metaphorical imagery that the animator makes.

Both the article and Dr. Singh’s page are well worth checking out, if the links between mathematics and visual imagery interest you.

Research on the African continent

Last week I received a copy of the Thomson Reuters Global Research Report Africa. My hat’s off to the authors, Jonathan Adams, Christopher King, and Daniel Hook, for including the fact that Africa is a continent with many countries, many languages, and many cultures. From the report (you may need to register at the site to gain access to it, but the only contact I ever get is a copy of their newsletter alerting me to a new report and other incidental info), p. 3,

More than 50 nations, hundreds of languages, and a welter of ethnic and cultural diversity. A continent possessed of abundant natural resources but also perennially wracked by a now-familiar litany of post-colonial woes: poverty, want, political instability and corruption, disease, and armed conflicts frequently driven by ethnic and tribal divisions but supplied by more mature economies. OECD’s recent African Economic Outlook sets out in stark detail the challenge, and the extent to which current global economic problems may make this worse …

While they cover the usual challenges, the authors go on to add this somewhat contrasting information.

Yet the continent is also home to a rich history of higher education and knowledge creation. The University of Al-Karaouine, at Fez in Morocco, was founded in CE 859 as a madrasa and is identified by many as the oldest degree-awarding institution in the world. It was followed in 970 by Al-Azhar University in Egypt. While it was some centuries before the curriculum expanded from religious instruction into the sciences, this makes a very early marker for learning. Today, the Association of African Universities lists 225 member institutions in 44 countries and, as Thomson Reuters data demonstrate, African research has a network of ties to the international community.

A problem for Africa as a whole, as it has been for China and India, is the hemorrhage of talent. Many of its best students take their higher degrees at universities in Europe, Asia and North America. Too few return.

I can’t speak to the details included in the report, which appears to be a consolidation of information available in various reports from international organizations. Personally, I find these consolidations very helpful, as I would never have the time to track all of this down. As well, they have created a graphic which illustrates research relationships. I did have to read the analysis in order to better understand the graphic, but I found the idea itself quite engaging, and I can see (pun!) that, as one gets more visually literate with this type of graphic, it could become a very useful tool for grasping complex information quickly.

Diabetes and mice

Last week, I missed this notice about a Canadian nanotechnology effort at the University of Calgary. From the news item on Nanowerk,

Using a sophisticated nanotechnology-based “vaccine,” researchers were able to successfully cure mice with type 1 diabetes and slow the onset of the disease in mice at risk for the disease. The study, co-funded by the Juvenile Diabetes Research Foundation (JDRF), provides new and important insights into understanding how to stop the immune attack that causes type 1 diabetes, and could even have implications for other autoimmune diseases.

The study, conducted at the University of Calgary in Alberta, Canada, was published today [April 8, 2010?] in the online edition of the scientific journal Immunity.

NANO Magazine

In more recent news, NANO Magazine’s new issue (no. 17) features a country focus on Canada. From the news item on Nanowerk,

In a special bumper issue of NANO Magazine we focus on two topics – textiles and nanomedicine. We feature articles about textiles from Nicholas Kotov and Kay Obendorf, and nanomedicine from the London Centre for Nanotechnology and Hans Hofstraat of Philips Healthcare, and an interview with Peter Singer. NANO Magazine Issue 17 is essential reading, www.nanomagazine.co.uk.

The featured country in this issue is Canada [emphasis mine], notable for its well funded facilities and research that is aggressively focused on industrial applications. Although having no unifying national nanotechnology initiative, there are many extremely well-funded organisations with world class facilities that are undertaking important nano-related research.

I hope I get a chance to read this issue.

Poetry on Bowen Island

Heather Haley, a local Vancouver, BC area, poet is hosting a special event this coming Saturday at her home on Bowen Island. From the news release,

VISITING POETS Salon & Reading

Josef & Heather’s Place
Bowen Island, BC
7:30  PM
Saturday, April 17, 2010

PENN KEMP, inimitable sound poet from London, Ontario

The illustrious CATHERINE OWEN from Vancouver, BC

To RSVP and get directions please email hshaley@emspace.com

Free Admission
Snacks & beverages-BYOB

Please come on over to our place on the sunny south slope to welcome these fabulous poets, hear their marvelous work, *see* their voices right here on Bowen Island!

London, ON performer and playwright PENN KEMP has published twenty-five books of poetry and drama, had six plays and ten CDs produced as well as Canada’s first poetry CD-ROM and several videopoems.  She performs in festivals around the world, most recently in Britain, Brazil and India. Penn is the Canada Council Writer-in-Residence at UWO for 2009-10.  She hosts an eclectic literary show, Gathering Voices, on Radio Western, CHRWradio.com/talk/gatheringvoices.  Her own project for the year is a DVD devoted to Ecco Poetry, Luminous Entrance: a Sound Opera for Climate Change Action, which has just been released.

CATHERINE OWEN is a Vancouver writer who will be reading from her latest book Frenzy (Anvil Press 09) which she has just toured across the entirety of Canada. Her work has appeared in international magazines, seen translation into three languages and been nominated for honours such as the BC Book Prize and the CBC Award. She plays bass and sings in a couple of metal bands and runs her own tutoring and editing business.

I have seen one of Penn Kemp’s video poems. It was at least five years ago and it still resonates with me. Guess what? I highly recommend going if you can. If you’re curious about Heather and her work, go here.