Tag Archives: games

Futures exhibition/festival with fish skin fashion and more at the Smithsonian (Washington, DC), Nov. 20, 2021 to July 6, 2022

Fish leather

Before getting to Futures, here’s a brief excerpt from a June 11, 2021 Smithsonian Magazine exhibition preview article by Gia Yetikyel about one of the contributors, Elisa Palomino-Perez (Note: A link has been removed),

Elisa Palomino-Perez sheepishly admits to believing she was a mermaid as a child. Growing up in Cuenca, Spain in the 1970s and ‘80s, she practiced synchronized swimming and was deeply fascinated with fish. Now, the designer’s love for shiny fish scales and majestic oceans has evolved into an empowering mission, to challenge today’s fashion industry to be more sustainable, by using fish skin as a material.

Luxury fashion is no stranger to the artist, who has worked with designers like Christian Dior, John Galliano and Moschino in her 30-year career. For five seasons in the early 2000s, Palomino-Perez had her own fashion brand, inspired by Asian culture and full of color and embroidery. It was while heading a studio for Galliano in 2002 that she first encountered fish leather: a material made when the skin of tuna, cod, carp, catfish, salmon, sturgeon, tilapia or pirarucu gets stretched, dried and tanned.

The history of using fish leather in fashion is a bit murky. The material does not preserve well in the archeological record, and it’s been often overlooked as a “poor person’s” material due to the abundance of fish as a resource. But Indigenous groups living on coasts and rivers from Alaska to Scandinavia to Asia have used fish leather for centuries. Icelandic fishing traditions can even be traced back to the ninth century. While assimilation policies, like banning native fishing rights, forced Indigenous groups to change their lifestyle, the use of fish skin is seeing a resurgence. Its rise in popularity in the world of sustainable fashion has led to an overdue reclamation of tradition for Indigenous peoples.

In 2017, Palomino-Perez embarked on a PhD in Indigenous Arctic fish skin heritage at London College of Fashion, which is a part of the University of the Arts in London (UAL), where she received her Masters of Arts in 1992. She now teaches at Central Saint Martins at UAL, while researching different ways of crafting with fish skin and working with Indigenous communities to carry on the honored tradition.

Yetikyel’s article is fascinating (apparently Nike has used fish leather in one of its sports shoes) and I encourage you to read her June 11, 2021 article, which also covers the history of fish leather use amongst indigenous peoples of the world.

I did some digging and found a few more stories about fish leather. The earliest is a Canadian Broadcasting Corporation (CBC) November 16, 2017 online news article by Jane Adey,

Designer Arndis Johannsdottir holds up a stunning purse, decorated with shiny strips of gold and silver leather at Kirsuberjatred, an art and design store in downtown Reykjavik, Iceland.

The purse is one of many in a colourful window display that’s drawing in buyers.

Johannsdottir says customers’ eyes often widen when they discover the metallic material is fish skin. 

Johannsdottir, a fish-skin designing pioneer, first came across the product 35 years ago.

She was working as a saddle smith when a woman came into her shop with samples of fish skin her husband had tanned after the war. Hundreds of pieces had been lying in a warehouse for 40 years.

“Nobody wanted it because plastic came on the market and everybody was fond of plastic,” she said.

“After 40 years, it was still very, very strong and the colours were beautiful and … I fell in love with it immediately.”

Johannsdottir bought all the skins the woman had to offer, gave up saddle making and concentrated on fashionable fish skin.

Adey’s November 16, 2017 article goes on to mention another Icelandic fish leather business looking to make fish leather a fashion staple.

Chloe Williams’s April 28, 2020 article for Hakai Magazine explores the process of making fish leather and the renewed interest in the craft,

Tracy Williams slaps a plastic cutting board onto the dining room table in her home in North Vancouver, British Columbia. Her friend, Janey Chang, has already laid out the materials we will need: spoons, seashells, a stone, and snack-sized ziplock bags filled with semi-frozen fish. Williams says something in Squamish and then translates for me: “You are ready to make fish skin.”

Chang peels a folded salmon skin from one of the bags and flattens it on the table. “You can really have at her,” she says, demonstrating how to use the edge of the stone to rub away every fiber of flesh. The scales on the other side of the skin will have to go, too. On a sockeye skin, they come off easily if scraped from tail to head, she adds, “like rubbing a cat backwards.” The skin must be clean, otherwise it will rot or fail to absorb tannins that will help transform it into leather.

Williams and Chang are two of a scant but growing number of people who are rediscovering the craft of making fish skin leather, and they’ve agreed to teach me their methods. The two artists have spent the past five or six years learning about the craft and tying it back to their distinct cultural perspectives. Williams, a member of the Squamish Nation—her ancestral name is Sesemiya—is exploring the craft through her Indigenous heritage. Chang, an ancestral skills teacher at a Squamish Nation school, who has also begun teaching fish skin tanning in other BC communities, is linking the craft to her Chinese ancestry.

Before the rise of manufactured fabrics, Indigenous peoples from coastal and riverine regions around the world tanned or dried fish skins and sewed them into clothing. The material is strong and water-resistant, and it was essential to survival. In Japan, the Ainu crafted salmon skin into boots, which they strapped to their feet with rope. Along the Amur River in northeastern China and Siberia, Hezhen and Nivkh peoples turned the material into coats and thread. In northern Canada, the Inuit made clothing, and in Alaska, several peoples including the Alutiiq, Athabascan, and Yup’ik used fish skins to fashion boots, mittens, containers, and parkas. In the winter, Yup’ik men never left home without qasperrluk—loose-fitting, hooded fish skin parkas—which could double as shelter in an emergency. The men would prop up the hood with an ice pick and pin down the edges to make a tent-like structure.

On a Saturday morning, I visit Aurora Skala in Saanich on Vancouver Island, British Columbia, to learn about the step after scraping and tanning: softening. Skala, an anthropologist working in language revitalization, has taken an interest in making fish skin leather in her spare time. When I arrive at her house, a salmon skin that she has tanned in an acorn infusion—a cloudy, brown liquid now resting in a jar—is stretched out on the kitchen counter, ready to be worked.

Skala dips her fingers in a jar of sunflower oil and rubs it on her hands before massaging it into the skin. The skin smells only faintly of fish; the scent reminds me of salt and smoke, though the skin has been neither salted nor smoked. “Once you start this process, you can’t stop,” she says. If the skin isn’t worked consistently, it will stiffen as it dries.

Softening the leather with oil takes about four hours, Skala says. She stretches the skin between clenched hands, pulling it in every direction to loosen the fibers while working in small amounts of oil at a time. She’ll also work her skins across other surfaces for extra softening; later, she’ll take this piece outside and rub it back and forth along a metal cable attached to a telephone pole. Her pace is steady, unhurried, soothing. Back in the day, people likely made fish skin leather alongside other chores related to gathering and processing food or fibers, she says. The skin will be done when it’s soft and no longer absorbs oil.

On to the exhibition.

Futures (November 20, 2021 to July 6, 2022 at the Smithsonian)

A February 24, 2021 Smithsonian Magazine article by Meilan Solly serves as an announcement for the Futures exhibition/festival (Note: Links have been removed),

When the Smithsonian’s Arts and Industries Building (AIB) opened to the public in 1881, observers were quick to dub the venue—then known as the National Museum—America’s “Palace of Wonders.” It was a fitting nickname: Over the next century, the site would go on to showcase such pioneering innovations as the incandescent light bulb, the steam locomotive, Charles Lindbergh’s Spirit of St. Louis and space-age rockets.

“Futures,” an ambitious, immersive experience set to open at AIB this November, will act as a “continuation of what the [space] has been meant to do” from its earliest days, says consulting curator Glenn Adamson. “It’s always been this launchpad for the Smithsonian itself,” he adds, paving the way for later museums as “a nexus between all of the different branches of the [Institution].” …

Part exhibition and part festival, “Futures”—timed to coincide with the Smithsonian’s 175th anniversary—takes its cue from the world’s fairs of the 19th and 20th centuries, which introduced attendees to the latest technological and scientific developments in awe-inspiring celebrations of human ingenuity. Sweeping in scale (the building-wide exploration spans a total of 32,000 square feet) and scope, the show is set to feature historic artifacts loaned from numerous Smithsonian museums and other institutions, large-scale installations, artworks, interactive displays and speculative designs. It will “invite all visitors to discover, debate and delight in the many possibilities for our shared future,” explains AIB director Rachel Goslins in a statement.

“Futures” is split into four thematic halls, each with its own unique approach to the coming centuries. “Futures Past” presents visions of the future imagined by prior generations, as told through objects including Alexander Graham Bell’s experimental telephone, an early android and a full-scale Buckminster Fuller geodesic dome. “In hindsight, sometimes [a prediction is] amazing,” says Adamson, who curated the history-centric section. “Sometimes it’s sort of funny. Sometimes it’s a little dismaying.”

“Futures That Work” continues to explore the theme of technological advancement, but with a focus on problem-solving rather than the lessons of the past. Climate change is at the fore of this section, with highlighted solutions ranging from Capsula Mundi’s biodegradable burial urns to sustainable bricks made out of mushrooms and purely molecular artificial spices that cut down on food waste while preserving natural resources.

“Futures That Inspire,” meanwhile, mimics AIB’s original role as a place of wonder and imagination. “If I were bringing a 7-year-old, this is probably where I would take them first,” says Adamson. “This is where you’re going to be encountering things that maybe look a bit more like science fiction”—for instance, flying cars, self-sustaining floating cities and Afrofuturist artworks.

The final exhibition hall, “Futures That Unite,” emphasizes human relationships, discussing how connections between people can produce a more equitable society. Among others, the list of featured projects includes (Im)possible Baby, a speculative design endeavor that imagines what same-sex couples’ children might look like if they shared both parents’ DNA, and Not The Only One (N’TOO), an A.I.-assisted oral history project. [all emphases mine]

I haven’t done justice to Solly’s February 24, 2021 article, which features embedded images and offers a more hopeful view of the future than is currently the fashion.

Futures asks: Would you like to plan the future?

Nate Berg’s November 22, 2021 article for Fast Company features an interactive urban planning game that’s part of the Futures exhibition/festival,

The Smithsonian Institution wants you to imagine the almost ideal city block of the future. Not the perfect block, not utopia, but the kind of urban place where you get most of what you want, and so does everybody else.

Call it urban design by compromise. With a new interactive multiplayer game, the museum is hoping to show that the urban spaces of the future can achieve mutual goals only by being flexible and open to the needs of other stakeholders.

The game is designed for three players, each in the role of either the city’s mayor, a real estate developer or an ecologist. The roles each have their own primary goals – the mayor wants a well-served populace, the developer wants to build successful projects, and the ecologist wants the urban environment to coexist with the natural environment. Each role takes turns adding to the block, either in discrete projects or by amending what another player has contributed. Options are varied, but include everything from traditional office buildings and parks to community centers and algae farms. The players each try to achieve their own goals on the block, while facing the reality that other players may push the design in unexpected directions. These tradeoffs and their impact on the block are explained by scores on four basic metrics: daylight, carbon footprint, urban density, and access to services. How each player builds onto the block can bring scores up or down.

To create the game, the Smithsonian teamed up with Autodesk, the maker of architectural design tools like AutoCAD, an industry standard. Autodesk developed a tool for AI-based generative design that offers up options for a city block’s design, using computing power to make suggestions on what could go where and how aiming to achieve one goal, like boosting residential density, might detract from or improve another set of goals, like creating open space. “Sometimes you’ll do something that you think is good but it doesn’t really help the overall score,” says Brian Pene, director of emerging technology at Autodesk. “So that’s really showing people to take these tradeoffs and try attributes other than what achieves their own goals.” The tool is meant to show not how AI can generate the perfect design, but how the differing needs of various stakeholders inevitably require some tradeoffs and compromises.
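Out of curiosity, I tried to imagine how such a scoring system might work under the hood. Here is a minimal sketch in Python: the four metrics (daylight, carbon footprint, urban density, access to services) and the three roles come from Berg’s article, but every project type, number, and weight below is my own invention, not Autodesk’s actual model.

```python
# Hypothetical sketch of the game's tradeoff scoring. The metric and role
# names come from the article; all project effects and weights are invented.

# Each project nudges the four metrics up or down.
PROJECT_EFFECTS = {
    "office_tower":     {"daylight": -2, "carbon": -2, "density": 3,  "services": 1},
    "park":             {"daylight": 2,  "carbon": 2,  "density": -1, "services": 0},
    "community_center": {"daylight": 0,  "carbon": -1, "density": 1,  "services": 3},
    "algae_farm":       {"daylight": -1, "carbon": 3,  "density": 0,  "services": 0},
}

# The mayor, developer, and ecologist value the same block differently,
# which is where the compromises come from.
ROLE_WEIGHTS = {
    "mayor":     {"daylight": 1, "carbon": 1, "density": 1,  "services": 3},
    "developer": {"daylight": 0, "carbon": 0, "density": 3,  "services": 1},
    "ecologist": {"daylight": 2, "carbon": 3, "density": -1, "services": 0},
}

def score_block(projects):
    """Sum each project's effect on the four metrics named in the article."""
    totals = {"daylight": 0, "carbon": 0, "density": 0, "services": 0}
    for project in projects:
        for metric, delta in PROJECT_EFFECTS[project].items():
            totals[metric] += delta
    return totals

def role_score(role, totals):
    """One role's view of the block: a weighted sum of the shared metrics."""
    return sum(ROLE_WEIGHTS[role][m] * value for m, value in totals.items())

block = ["office_tower", "park", "algae_farm"]
totals = score_block(block)
for role in ROLE_WEIGHTS:
    print(role, role_score(role, totals))
```

Adding a park boosts the ecologist and the mayor but can actually lower the developer’s score, so every move shifts several scores at once, which seems to be exactly the push and pull the game is after.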

Futures online and in person

Here are links to Futures online and information about visiting in person,

For its 175th anniversary, the Smithsonian is looking forward.

What do you think of when you think of the future? FUTURES is the first building-wide exploration of the future on the National Mall. Designed by the award-winning Rockwell Group, FUTURES spans 32,000 square feet inside the Arts + Industries Building. Now on view until July 6, 2022, FUTURES is your guide to a vast array of interactives, artworks, technologies, and ideas that are glimpses into humanity’s next chapter. You are, after all, only the latest in a long line of future makers.

Smell a molecule. Clean your clothes in a wetland. Meditate with an AI robot. Travel through space and time. Watch water being harvested from air. Become an emoji. The FUTURES is yours to decide, debate, delight. We invite you to dream big, and imagine not just one future, but many possible futures on the horizon—playful, sustainable, inclusive. In moments of great change, we dare to be hopeful. How will you create the future you want to live in?

Happy New Year!

Crowdsourcing brain research at Princeton University to discover 6 new neuron types

Spritely music!

There were already a quarter-million registered players as of May 17, 2018, but I’m sure there’s room for more should you be inspired. A May 17, 2018 Princeton University news release (also on EurekAlert) reveals more about the game and about the neurons,

With the help of a quarter-million video game players, Princeton researchers have created and shared detailed maps of more than 1,000 neurons — and they’re just getting started.

“Working with Eyewirers around the world, we’ve made a digital museum that shows off the intricate beauty of the retina’s neural circuits,” said Sebastian Seung, the Evnin Professor in Neuroscience and a professor of computer science and the Princeton Neuroscience Institute (PNI). The related paper is publishing May 17 [2018] in the journal Cell.

Seung is unveiling the Eyewire Museum, an interactive archive of neurons available to the general public and neuroscientists around the world, including the hundreds of researchers involved in the federal Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative.

“This interactive viewer is a huge asset for these larger collaborations, especially among people who are not physically in the same lab,” said Amy Robinson Sterling, a crowdsourcing specialist with PNI and the executive director of Eyewire, the online gaming platform for the citizen scientists who have created this data set.

“This museum is something like a brain atlas,” said Alexander Bae, a graduate student in electrical engineering and one of four co-first authors on the paper. “Previous brain atlases didn’t have a function where you could visualize by individual cell, or a subset of cells, and interact with them. Another novelty: Not only do we have the morphology of each cell, but we also have the functional data, too.”

The neural maps were developed by Eyewirers, members of an online community of video game players who have devoted hundreds of thousands of hours to painstakingly piecing together these neural cells, using data from a mouse retina gathered in 2009.

Eyewire pairs machine learning with gamers who trace the twisting and branching paths of each neuron. Humans are better at visually identifying the patterns of neurons, so every player’s moves are recorded and checked against each other by advanced players and Eyewire staffers, as well as by software that is improving its own pattern recognition skills.

Since Eyewire’s launch in 2012, more than 265,000 people have signed onto the game, and they’ve collectively colored in more than 10 million 3-D “cubes,” resulting in the mapping of more than 3,000 neural cells, of which about a thousand are displayed in the museum.

Each cube is a tiny subset of a single cell, about 4.5 microns across, so a 10-by-10 block of cubes would be the width of a human hair. Every cell is reviewed by between 5 and 25 gamers before it is accepted into the system as complete.
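A brief aside from me: the review scheme described above, where many players trace the same cube and agreement determines what is accepted, sounds like a form of weighted consensus voting. Here is a toy sketch in Python of that general idea; the thresholds, weights, and data structures are my own inventions for illustration, not Eyewire’s actual aggregation logic.

```python
# Toy consensus tracing: several players each mark the voxels they believe
# belong to the neuron in one cube; voxels supported by enough weighted
# votes survive. All thresholds and weights are invented for illustration.
from collections import defaultdict

def consensus_trace(player_traces, player_weights, threshold=0.7):
    """player_traces: {player: set of (x, y, z) voxels marked in the cube}
    player_weights: {player: skill weight; advanced players count for more}
    Returns the voxels whose weighted vote share meets the threshold."""
    votes = defaultdict(float)
    total_weight = sum(player_weights[p] for p in player_traces)
    for player, voxels in player_traces.items():
        for voxel in voxels:
            votes[voxel] += player_weights[player]
    return {v for v, weight in votes.items() if weight / total_weight >= threshold}

traces = {
    "novice_a": {(0, 0, 0), (0, 0, 1), (9, 9, 9)},  # includes one stray voxel
    "novice_b": {(0, 0, 0), (0, 0, 1)},
    "advanced": {(0, 0, 0), (0, 0, 1), (0, 1, 1)},
}
weights = {"novice_a": 1.0, "novice_b": 1.0, "advanced": 2.0}
print(consensus_trace(traces, weights))  # the stray voxel is voted out
```

The real system layers machine learning, advanced-player checks, and staff review on top of anything this simple, but the principle of redundant human judgment is the same.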

“Back in the early years it took weeks to finish a single cell,” said Sterling. “Now players complete multiple neurons per day.” The Eyewire user experience stays focused on the larger mission — “For science!” is a common refrain — but it also replicates a typical gaming environment, with achievement badges, a chat feature to connect with other players and technical support, and the ability to unlock privileges with increasing skill. “Our top players are online all the time — easily 30 hours a week,” Sterling said.

Dedicated Eyewirers have also contributed in other ways, including donating the swag that gamers win during competitions and writing program extensions “to make game play more efficient and more fun,” said Sterling, including profile histories, maps of player activity, a top 100 leaderboard and ever-increasing levels of customizability.

“The community has really been the driving force behind why Eyewire has been successful,” Sterling said. “You come in, and you’re not alone. Right now, there are 43 people online. Some of them will be admins from Boston or Princeton, but most are just playing — now it’s 46.”

For science!

With 100 billion neurons linked together via trillions of connections, the brain is immeasurably complex, and neuroscientists are still assembling its “parts list,” said Nicholas Turner, a graduate student in computer science and another of the co-first authors. “If you know what parts make up the machine you’re trying to break apart, you’re set to figure out how it all works,” he said.

The researchers have started by tackling Eyewire-mapped ganglion cells from the retina of a mouse. “The retina doesn’t just sense light,” Seung said. “Neural circuits in the retina perform the first steps of visual perception.”

The retina grows from the same embryonic tissue as the brain, and while much simpler than the brain, it is still surprisingly complex, Turner said. “Hammering out these details is a really valuable effort,” he said, “showing the depth and complexity that exists in circuits that we naively believe are simple.”

The researchers’ fundamental question is identifying exactly how the retina works, said Bae. “In our case, we focus on the structural morphology of the retinal ganglion cells.”

“Why the ganglion cells of the eye?” asked Shang Mu, an associate research scholar in PNI and fellow first author. “Because they’re the connection between the retina and the brain. They’re the only cell class that go back into the brain.” Different types of ganglion cells are known to compute different types of visual features, which is one reason the museum has linked shape to functional data.

Using Eyewire-produced maps of 396 ganglion cells, the researchers in Seung’s lab successfully classified these cells more thoroughly than has ever been done before.

“The number of different cell types was a surprise,” said Mu. “Just a few years ago, people thought there were only 15 to 20 ganglion cell types, but we found more than 35 — we estimate between 35 and 50 types.”

Of those, six appear to be novel, in that the researchers could not find any matching descriptions in a literature search.

A brief scroll through the digital museum reveals just how remarkably flat the neurons are — nearly all of the branching takes place along a two-dimensional plane. Seung’s team discovered that different cells grow along different planes, with some reaching high above the nucleus before branching out, while others spread out close to the nucleus. Their resulting diagrams resemble a rainforest, with ground cover, an understory, a canopy and an emergent layer overtopping the rest.

All of these are subdivisions of the inner plexiform layer, one of the five previously recognized layers of the retina. The researchers also identified a “density conservation principle” that they used to distinguish types of neurons.
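To illustrate, very loosely, how that flatness might be quantified, here is a toy Python sketch that characterizes a cell by where its arbor sits along the depth axis of the inner plexiform layer. The data format, bins, and numbers are all invented; the paper’s actual methods, including its density conservation principle, are far more rigorous.

```python
# Toy sketch: characterize a cell by where its dendritic arbor stratifies
# in depth. Nodes are (x, y, depth) points, with depth the axis
# perpendicular to the retinal layers, normalized 0 to 1. All invented.
import statistics

def stratification_profile(nodes, n_bins=10):
    """Histogram of arbor nodes along the normalized depth axis."""
    bins = [0] * n_bins
    for _x, _y, depth in nodes:
        bins[min(int(depth * n_bins), n_bins - 1)] += 1
    return bins

def peak_depth(nodes):
    """Median depth: a crude 'which sublayer does this cell live in' number."""
    return statistics.median(depth for _x, _y, depth in nodes)

# Two fake cells: one branching high in the layer, one low.
cell_a = [(i, i % 7, 0.20 + 0.02 * (i % 3)) for i in range(50)]
cell_b = [(i, i % 5, 0.80 - 0.02 * (i % 3)) for i in range(50)]

print(stratification_profile(cell_a), peak_depth(cell_a))
print(stratification_profile(cell_b), peak_depth(cell_b))
```

Cells whose profiles peak in different sublayers get sorted into different candidate types, which is roughly the intuition behind the ‘rainforest’ picture above.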

One of the biggest surprises of the research project has been the extraordinary richness of the original sample, said Seung. “There’s a little sliver of a mouse retina, and almost 10 years later, we’re still learning things from it.”

Of course, it’s a mouse’s brain that you’ll be examining, and while there are differences between a mouse brain and a human brain, mouse brains still provide valuable data, as they did in the case of some groundbreaking research published in October 2017. James Hamblin wrote about it in an Oct. 7, 2017 article for The Atlantic (Note: Links have been removed),

Scientists Somehow Just Discovered a New System of Vessels in Our Brains

It is unclear what they do—but they likely play a central role in aging and disease.

Caption: A transparent model of the brain with a network of vessels filled in. Credit: Daniel Reich / National Institute of Neurological Disorders and Stroke

You are now among the first people to see the brain’s lymphatic system. The vessels in the photo above transport fluid that is likely crucial to metabolic and inflammatory processes. Until now, no one knew for sure that they existed.

Doctors practicing today have been taught that there are no lymphatic vessels inside the skull. Those deep-purple vessels were seen for the first time in images published this week by researchers at the U.S. National Institute of Neurological Disorders and Stroke.

In the rest of the body, the lymphatic system collects and drains the fluid that bathes our cells, in the process exporting their waste. It also serves as a conduit for immune cells, which go out into the body looking for adversaries and learning how to distinguish self from other, and then travel back to lymph nodes and organs through lymphatic vessels.

So how was it even conceivable that this process wasn’t happening in our brains?

Reich (Daniel Reich, senior investigator) started his search in 2015, after a major study in Nature reported a similar conduit for lymph in mice. The University of Virginia team wrote at the time, “The discovery of the central-nervous-system lymphatic system may call for a reassessment of basic assumptions in neuroimmunology.” The study was regarded as a potential breakthrough in understanding how neurodegenerative disease is associated with the immune system.

Around the same time, researchers discovered fluid in the brains of mice and humans that would become known as the “glymphatic system.” [emphasis mine] It was described by a team at the University of Rochester in 2015 as not just the brain’s “waste-clearance system,” but as potentially helping fuel the brain by transporting glucose, lipids, amino acids, and neurotransmitters. Although since “the central nervous system completely lacks conventional lymphatic vessels,” the researchers wrote at the time, it remained unclear how this fluid communicated with the rest of the body.

There are occasional references to the idea of a lymphatic system in the brain in historic literature. Two centuries ago, the anatomist Paolo Mascagni made full-body models of the lymphatic system that included the brain, though this was dismissed as an error. [emphases mine]  A historical account in The Lancet in 2003 read: “Mascagni was probably so impressed with the lymphatic system that he saw lymph vessels even where they did not exist—in the brain.”

I couldn’t resist the reference to someone whose work had been dismissed summarily being proved right, eventually, and with the help of mouse brains. Do read Hamblin’s article in its entirety if you have time as these excerpts don’t do it justice.

Getting back to Princeton’s research, here’s their research paper,

“Digital museum of retinal ganglion cells with dense anatomy and physiology,” by Alexander Bae, Shang Mu, Jinseop Kim, Nicholas Turner, Ignacio Tartavull, Nico Kemnitz, Chris Jordan, Alex Norton, William Silversmith, Rachel Prentki, Marissa Sorek, Celia David, Devon Jones, Doug Bland, Amy Sterling, Jungman Park, Kevin Briggman, Sebastian Seung and the Eyewirers, was published May 17, 2018 in the journal Cell with DOI 10.1016/j.cell.2018.04.040.

The research was supported by the Gatsby Charitable Foundation, National Institutes of Health-National Institute of Neurological Disorders and Stroke (U01NS090562 and 5R01NS076467), Defense Advanced Research Projects Agency (HR0011-14-2-0004), Army Research Office (W911NF-12-1-0594), Intelligence Advanced Research Projects Activity (D16PC00005), KT Corporation, Amazon Web Services Research Grants, Korea Brain Research Institute (2231-415) and Korea National Research Foundation Brain Research Program (2017M3C7A1048086).

This paper is behind a paywall. For the players amongst us, here’s the Eyewire website. Go forth,  play, and, maybe, discover new neurons!

SIGGRAPH (Special Interest Group on Computer GRAPHics and Interactive Techniques) and their art gallery from Aug. 12 – 16, 2018 (making the world synthetic) in Vancouver (Canada)

While my main interest is the group’s temporary art gallery, I am providing a brief explanatory introduction and a couple of previews for SIGGRAPH 2018.

Introduction

For anyone unfamiliar with the Special Interest Group on Computer GRAPHics and Interactive Techniques (SIGGRAPH) and its conferences, from the SIGGRAPH Wikipedia entry (Note: Links have been removed),

Some highlights of the conference are its Animation Theater and Electronic Theater presentations, where recently created CG films are played. There is a large exhibition floor, where several hundred companies set up elaborate booths and compete for attention and recruits. Most of the companies are in the engineering, graphics, motion picture, or video game industries. There are also many booths for schools which specialize in computer graphics or interactivity.

Dozens of research papers are presented each year, and SIGGRAPH is widely considered the most prestigious forum for the publication of computer graphics research.[1] The recent paper acceptance rate for SIGGRAPH has been less than 26%.[2] The submitted papers are peer-reviewed in a single-blind process.[3] There has been some criticism about the preference of SIGGRAPH paper reviewers for novel results rather than useful incremental progress.[4][5] …

This is the third SIGGRAPH Vancouver has hosted; the others were in 2011 and 2014.  The theme for the 2018 iteration is ‘Generations’; here’s more about it from an Aug. 2, 2018 article by Terry Flores for Variety,

While its focus is firmly forward thinking, SIGGRAPH 2018, the computer graphics, animation, virtual reality, games, digital art, mixed reality, and emerging technologies conference, is also tipping its hat to the past thanks to its theme this year: Generations. The conference runs Aug. 12-16 in Vancouver, B.C.

“In the literal people sense, pioneers in the computer graphics industry are standing shoulder to shoulder with researchers, practitioners and the future of the industry — young people — mentoring them, dabbling across multiple disciplines to innovate, relate, and grow,” says SIGGRAPH 2018 conference chair Roy C. Anthony, VP of creative development and operations at software and technology firm Ventuz. “This is really what SIGGRAPH has always been about. Generations really seemed like a very appropriate way of looking back and remembering where we all came from and how far we’ve come.”

SIGGRAPH 2018 has a number of treats in store for attendees, including the debut of Disney’s first VR film, the short “Cycles”; production sessions on the making of “Blade Runner 2049,” “Game of Thrones,” “Incredibles 2” and “Avengers: Infinity War”; as well as sneak peeks of Disney’s upcoming “Ralph Breaks the Internet: Wreck-It Ralph 2” and Laika’s “Missing Link.”

That list of ‘treats’ in the last paragraph makes the conference seem more like an iteration of a ‘comic-con’ than a technology conference.

Previews

I have four items about work that will be presented at SIGGRAPH 2018. First up, something about ‘redirected walking’ from a June 18, 2018 Association for Computing Machinery news release on EurekAlert,

CHICAGO–In the burgeoning world of virtual reality (VR) technology, it remains a challenge to provide users with a realistic perception of infinite space and natural walking capabilities in the virtual environment. A team of computer scientists has introduced a new approach to address this problem by leveraging a natural human phenomenon: eye blinks.

All humans are functionally blind for about 10 percent of the time under normal circumstances due to eye blinks and saccades, a rapid movement of the eye between two points or objects. Eye blinks are a common and natural cause of so-called “change blindness,” which indicates the inability for humans to notice changes to visual scenes. Zeroing in on eye blinks and change blindness, the team has devised a novel computational system that effectively redirects the user in the virtual environment during these natural instances, all with undetectable camera movements to deliver orientation redirection.

“Previous RDW [redirected walking] techniques apply rotations continuously while the user is walking. But the amount of unnoticeable rotations is limited,” notes Eike Langbehn, lead author of the research and doctoral candidate at the University of Hamburg. “That’s why an orthogonal approach is needed–we add some additional rotations when the user is not focused on the visuals. When we learned that humans are functionally blind for some time due to blinks, we thought, ‘Why don’t we do the redirection during eye blinks?'”

Human eye blinks occur approximately 10 to 20 times per minute, about every 4 to 19 seconds. Leveraging this window of opportunity–where humans are unable to detect major motion changes while in a virtual environment–the researchers devised an approach to synchronize a computer graphics rendering system with this visual process, and introduce any useful motion changes in virtual scenes to enhance users’ overall VR experience.

The researchers’ experiments revealed that imperceptible camera rotations of 2 to 5 degrees and translations of 4 to 9 cm of the user’s viewpoint are possible during a blink without users even noticing. They tracked test participants’ eye blinks by an eye tracker in a VR head-mounted display. In a confirmatory study, the team validated that participants could not reliably detect in which of two eye blinks their viewpoint was manipulated while walking a VR curved path. The tests relied on unconscious natural eye blinking, but the researchers say redirection during blinking could be carried out consciously. Since users can consciously blink multiple times a day without much effort, eye blinks provide great potential to be used as an intentional trigger in their approach.
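The control logic is simple enough to sketch. Below is a hypothetical Python fragment of a blink-triggered redirection step; the eye-tracker interface and camera object are invented stand-ins, and only the thresholds (roughly 2 to 5 degrees of rotation and 4 to 9 cm of translation per blink) come from the reported experiments.

```python
# Hypothetical sketch of blink-triggered redirected walking. Only the
# imperceptibility thresholds come from the study; the camera object and
# frame loop are invented stand-ins for a real VR rig and eye tracker.
import math

MAX_ROTATION_DEG = 5.0    # upper bound on unnoticed yaw change per blink
MAX_TRANSLATION_M = 0.09  # upper bound on unnoticed viewpoint shift per blink

class Camera:
    """Minimal stand-in for the virtual camera of a VR rig."""
    def __init__(self):
        self.yaw_deg, self.x, self.z = 0.0, 0.0, 0.0

def redirect_on_blink(camera, target_heading_deg, target_offset_m, blinking):
    """Apply a capped slice of the desired correction, but only mid-blink."""
    if not blinking:
        return  # eyes open: a sudden jump would be noticed
    # Rotate toward the heading that steers the user back into open space.
    error = target_heading_deg - camera.yaw_deg
    camera.yaw_deg += max(-MAX_ROTATION_DEG, min(MAX_ROTATION_DEG, error))
    # Likewise nudge the viewpoint, capped at the translation threshold.
    dx, dz = target_offset_m
    distance = math.hypot(dx, dz)
    if distance > 0:
        scale = min(1.0, MAX_TRANSLATION_M / distance)
        camera.x += dx * scale
        camera.z += dz * scale

cam = Camera()
redirect_on_blink(cam, target_heading_deg=12.0, target_offset_m=(0.5, 0.0),
                  blinking=True)
print(cam.yaw_deg, cam.x, cam.z)  # 5.0 0.09 0.0: capped at the thresholds
```

Spread over the 10 to 20 blinks a minute we all produce, those small capped corrections add up, which is how the technique squeezes a large virtual space into a small physical one.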

The team will present their work at SIGGRAPH 2018, held 12-16 August in Vancouver, British Columbia. The annual conference and exhibition showcases the world’s leading professionals, academics, and creative minds at the forefront of computer graphics and interactive techniques.

“RDW is a big challenge since current techniques still need too much space to enable unlimited walking in VR,” notes Langbehn. “Our work might contribute to a reduction of space since we found out that unnoticeable rotations of up to five degrees are possible during blinks. This means we can improve the performance of RDW by approximately 50 percent.”

The team’s results could be used in combination with other VR research, such as novel steering algorithms, improved path prediction, and rotations during saccades, to name a few. Down the road, such techniques could some day enable consumer VR users to virtually walk beyond their living room.

Langbehn collaborated on the work with Frank Steinicke of University of Hamburg, Markus Lappe of University of Muenster, Gregory F. Welch of University of Central Florida, and Gerd Bruder, also of University of Central Florida. For the full paper and video, visit the team’s project page.

###

About ACM, ACM SIGGRAPH, and SIGGRAPH 2018

ACM, the Association for Computing Machinery, is the world’s largest educational and scientific computing society, uniting educators, researchers, and professionals to inspire dialogue, share resources, and address the field’s challenges. ACM SIGGRAPH is a special interest group within ACM that serves as an interdisciplinary community for members in research, technology, and applications in computer graphics and interactive techniques. SIGGRAPH is the world’s leading annual interdisciplinary educational experience showcasing the latest in computer graphics and interactive techniques. SIGGRAPH 2018, marking the 45th annual conference hosted by ACM SIGGRAPH, will take place from 12-16 August at the Vancouver Convention Centre in Vancouver, B.C.

They have provided an image illustrating what they mean (I don’t find it especially informative),

Caption: The viewing behavior of a virtual reality user, including fixations (in green) and saccades (in red). A blink fully suppresses visual perception. Credit: Eike Langbehn

Next up (2), there’s Disney Corporation’s first virtual reality (VR) short, from a July 19, 2018  Association for Computing Machinery news release on EurekAlert,

Walt Disney Animation Studios will debut its first ever virtual reality short film at SIGGRAPH 2018, and the hope is viewers will walk away feeling connected to the characters as equally as they will with the VR technology involved in making the film.

Cycles, an experimental film directed by Jeff Gipson, centers around the true meaning of creating a home and the life it holds inside its walls. The idea for the film is personal, inspired by Gipson’s childhood spending time with his grandparents and creating memories in their home, and later, having to move them to an assisted living residence.

“Every house has a story unique to the people, the characters who live there,” says Gipson. “We wanted to create a story in this single place and be able to have the viewer witness life happening around them. It is an emotionally driven film, expressing the real ups and downs, the happy and sad moments in life.”

For Cycles, Gipson also drew from his past life as an architect, having spent several years designing skate parks, and from his passion for action sports, including freestyle BMX. In Los Angeles, where Gipson lives, it is not unusual to find homes with an empty swimming pool reserved for skating or freestyle biking. Part of the pitch for Cycles came out of Gipson’s experience riding in these empty pools and being curious about the homes attached to them, the families who lived there, and the memories they made.

SIGGRAPH attendees will have the opportunity to experience Cycles at the Immersive Pavilion, a new space for this year’s conference. The Pavilion is devoted exclusively to virtual, augmented, and mixed reality and will contain: the VR Theater, a storytelling extravaganza that is part of the Computer Animation Festival; the Vrcade, a space for VR, AR, and MR games or experiences; and the well-known Village, for showcasing large-scale projects. SIGGRAPH 2018, held 12-16 August in Vancouver, British Columbia, is an annual gathering that showcases the world’s leading professionals, academics, and creative minds at the forefront of computer graphics and interactive techniques.

The production team completed Cycles in four months with about 50 collaborators as part of a professional development program at the studio. A key difference in VR filmmaking includes getting creative with how to translate a story to the VR “screen.” Pre-visualizing the narrative, for one, was a challenge. Rather than traditional storyboarding, Gipson and his team instead used a mix of Quill VR painting techniques and motion capture to “storyboard” Cycles, incorporating painters and artists to generate sculptures or 3D models of characters early on and draw scenes for the VR space. The creators also got innovative with the use of light and color saturation in scenes to help guide the user’s eyes during the film.

“What’s cool for VR is that we are really on the edge of trying to figure out what it is and how to tell stories in this new medium,” says Gipson. “In VR, you can look anywhere and really be transported to a different world, experience it from different angles, and see every detail. We want people watching to feel alive and feel emotion, and give them a true cinematic experience.”

This is Gipson’s VR directorial debut. He joined Walt Disney Animation Studios in 2013, serving as a lighting artist on Disney favorites like Frozen, Zootopia, and Moana. Of getting to direct the studio’s first VR short, he says, “VR is an amazing technology and a lot of times the technology is what is really celebrated. We hope more and more people begin to see the emotional weight of VR films, and with Cycles in particular, we hope they will feel the emotions we aimed to convey with our story.”

Apparently this is a still from the ‘short’,

Caption: Disney Animation Studios will present ‘Cycles’ , its first virtual reality (VR) short, at ACM SIGGRAPH 2018. Credit: Disney Animation Studios

There’s also something (3) from Google as described in a July 26, 2018 Association for Computing Machinery news release on EurekAlert,

Google has unveiled a new virtual reality (VR) immersive experience based on a novel system that captures and renders high-quality, realistic images from the real world using light fields. Created by a team of leading researchers at Google, Welcome to Light Fields is the tech giant’s splash into the nascent arena of light fields VR experiences, an exciting corner of VR video technology gaining traction for its promise to deliver extremely high-quality imagery and experiences in the virtual world.

Google released Welcome to Light Fields earlier this year as a free app on Steam VR for HTC Vive, Oculus Rift, and Windows Mixed Reality headsets. The creators will demonstrate the VR experience at SIGGRAPH 2018, in the Immersive Pavilion, a new space for this year’s conference. The Pavilion is devoted exclusively to virtual, augmented, and mixed reality and will contain: the Vrcade, a space for VR, AR, and MR games or experiences; the VR Theater, a storytelling extravaganza that is part of the Computer Animation Festival; and the well-known Village, for showcasing large-scale projects. SIGGRAPH 2018, held 12-16 August in Vancouver, British Columbia, is an annual gathering that showcases the world’s leading professionals, academics, and creative minds at the forefront of computer graphics and interactive techniques.

Destinations in Welcome to Light Fields include NASA’s Space Shuttle Discovery, delivering to viewers an astronaut’s view inside the flight deck, which has never been open to the public; the pristine teak and mahogany interiors of the Gamble House, an architectural treasure in Pasadena, CA; and the glorious St. Stephen’s Church in Granada Hills, CA, home to a stunning wall of more than 14,000 pieces of glimmering stained glass.

“I love that light fields in VR can teleport you to exotic places in the real world, and truly make you believe you are there,” says Ryan Overbeck, software engineer at Google who co-led the project. “To me, this is magic.”

To bring this experience to life, Overbeck worked with a team that included Paul Debevec, senior staff engineer at Google, who managed the project and led the hardware piece with engineers Xueming Yu, Jay Busch, and Graham Fyffe. With Overbeck, Daniel Erickson and Daniel Evangelakos focused on the software end. The researchers designed a comprehensive system for capturing and rendering high-quality, spherical light field still images from footage captured in the real world. They developed two easy-to-use light field camera rigs, based on the GoPro Hero4 action sports camera, that efficiently capture thousands of images on the surface of a sphere. Those images were then passed through a cloud-based light-field-processing pipeline.

Among other things, explains Overbeck, “The processing pipeline uses computer vision to place the images in 3D and generate depth maps, and we use a modified version of our vp9 video codec to compress the light field data down to a manageable size.” To render a light field dataset, he notes, the team used a rendering algorithm that blends between the thousands of light field images in real-time.

The team relied on Google’s talented pool of engineers in computer vision, graphics, video compression, and machine learning to overcome the unique challenges posed in light fields technology. They also collaborated closely with the WebM team (who make the vp9 video codec) to develop the high-quality light field compression format incorporated into their system, and leaned heavily on the expertise of the Jump VR team to help pose the images and generate depth maps. (Jump is Google’s professional VR system for achieving 3D-360 video production at scale.)
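The basic rendering idea, blending between nearby captured views for whatever position the viewer’s eye occupies, can be sketched in a few lines. Here is a heavily simplified toy version in Python/NumPy; the real pipeline adds depth-corrected ray lookups and compressed tiles, and every data structure below is my own invention.

```python
# Toy sketch of light field view blending: weight the captured views
# nearest the viewer's eye by inverse distance and blend their images.
# The real pipeline adds depth maps and per-ray reprojection.
import numpy as np

def blend_views(eye_pos, view_positions, view_images, k=4, eps=1e-6):
    """eye_pos: (3,) eye position inside the capture sphere.
    view_positions: (N, 3) camera positions on the sphere.
    view_images: (N, H, W, 3) images captured from those positions.
    Returns a weighted blend of the k nearest views."""
    distances = np.linalg.norm(view_positions - eye_pos, axis=1)
    nearest = np.argsort(distances)[:k]
    weights = 1.0 / (distances[nearest] + eps)
    weights /= weights.sum()
    # Contract the weights against the k selected images.
    return np.tensordot(weights, view_images[nearest], axes=1)

# Tiny fake dataset: 8 views on a unit sphere, 2x2 pixel "images."
rng = np.random.default_rng(0)
positions = rng.normal(size=(8, 3))
positions /= np.linalg.norm(positions, axis=1, keepdims=True)
images = rng.uniform(size=(8, 2, 2, 3))

frame = blend_views(np.array([0.1, 0.0, 0.0]), positions, images)
print(frame.shape)  # (2, 2, 3): one blended view for this eye position
```

Doing that blend over thousands of full-resolution images, at 90 frames per second for two eyes, is presumably where the engineering effort went.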

Indeed, with Welcome to Light Fields, the Google team is demonstrating the potential and promise of light field VR technology, showcasing the technology’s ability to provide a truly immersive experience with a level of unmatched realism. Though light fields technology has been researched and explored in computer graphics for more than 30 years, practical systems for actually delivering high-quality light field experiences have not been possible until now.

Part of the team’s motivation behind creating this VR light field experience is to invigorate the nascent field.

“Welcome to Light Fields proves that it is now possible to make a compelling light field VR viewer that runs on consumer-grade hardware, and we hope that this knowledge will encourage others to get involved with building light field technology and media,” says Overbeck. “We understand that in order to eventually make compelling consumer products based on light fields, we need a thriving light field ecosystem. We need open light field codecs, we need artists creating beautiful light field imagery, and we need people using VR in order to engage with light fields.”

I don’t really understand why this image, which looks like something that belongs on advertising material, would be chosen to accompany a news release on a science-based distribution outlet,

Caption: A team of leading researchers at Google, will unveil the new immersive virtual reality (VR) experience “Welcome to Lightfields” at ACM SIGGRAPH 2018. Credit: Image courtesy of Google/Overbeck

Finally (4), ‘synthesizing realistic sounds’ is announced in an Aug. 6, 2018 Stanford University (US) news release (also on EurekAlert) by Taylor Kubota,

Advances in computer-generated imagery have brought vivid, realistic animations to life, but the sounds associated with what we see simulated on screen, such as two objects colliding, are often recordings. Now researchers at Stanford University have developed a system that automatically renders accurate sounds for a wide variety of animated phenomena.

“There’s been a Holy Grail in computing of being able to simulate reality for humans. We can animate scenes and render them visually with physics and computer graphics, but, as for sounds, they are usually made up,” said Doug James, professor of computer science at Stanford University. “Currently there exists no way to generate realistic synchronized sounds for complex animated content, such as splashing water or colliding objects, automatically. This fills that void.”

The researchers will present their work on this sound synthesis system as part of ACM SIGGRAPH 2018, the leading conference on computer graphics and interactive techniques. In addition to enlivening movies and virtual reality worlds, this system could also help engineering companies prototype how products would sound before being physically produced, and hopefully encourage designs that are quieter and less irritating, the researchers said.

“I’ve spent years trying to solve partial differential equations – which govern how sound propagates – by hand,” said Jui-Hsien Wang, a graduate student in James’ lab and in the Institute for Computational and Mathematical Engineering (ICME), and lead author of the paper. “This is actually a place where you don’t just solve the equation but you can actually hear it once you’ve done it. That’s really exciting to me and it’s fun.”

Predicting sound

Informed by geometry and physical motion, the system figures out the vibrations of each object and how, like a loudspeaker, those vibrations excite sound waves. It computes the pressure waves cast off by rapidly moving and vibrating surfaces but does not replicate room acoustics. So, although it does not recreate the echoes in a grand cathedral, it can resolve detailed sounds from scenarios like a crashing cymbal, an upside-down bowl spinning to a stop, a glass filling up with water or a virtual character talking into a megaphone.

Most sounds associated with animations rely on pre-recorded clips, which require vast manual effort to synchronize with the action on-screen. These clips are also restricted to noises that exist – they can’t predict anything new. Other systems that produce and predict sounds as accurate as those of James and his team work only in special cases, or assume the geometry doesn’t deform very much. They also require a long pre-computation phase for each separate object.

“Ours is essentially just a render button with minimal pre-processing that treats all objects together in one acoustic wave simulation,” said Ante Qu, a graduate student in James’ lab and co-author of the paper.

The simulated sound that results from this method is highly detailed. It takes into account the sound waves produced by each object in an animation but also predicts how those waves bend, bounce or deaden based on their interactions with other objects and sound waves in the scene.
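Since Wang mentions solving the wave equation and then hearing the result, here is a toy 1D finite-difference version of that idea in Python: a ‘vibrating object’ drives one grid cell, pressure waves propagate outward, and a downstream cell acts as a microphone. The Stanford system solves the full 3D problem around moving, deforming geometry; none of this toy reflects their actual implementation.

```python
# Toy 1D finite-difference solve of the acoustic wave equation
# p_tt = c^2 * p_xx, driven by a vibrating "object" at one grid cell.
import math

C = 343.0            # speed of sound in air, m/s
DX = 0.01            # grid spacing, m
DT = DX / (2 * C)    # time step chosen to satisfy the CFL stability condition
N = 400              # number of grid cells
COEF = (C * DT / DX) ** 2

prev = [0.0] * N
curr = [0.0] * N
samples = []         # pressure recorded at the "microphone" cell

for step in range(2000):
    # Drive one cell like a small vibrating surface (a 440 Hz "object").
    curr[50] = math.sin(2 * math.pi * 440 * step * DT)
    nxt = [0.0] * N
    for i in range(1, N - 1):
        nxt[i] = (2 * curr[i] - prev[i]
                  + COEF * (curr[i + 1] - 2 * curr[i] + curr[i - 1]))
    prev, curr = curr, nxt
    samples.append(curr[300])  # listen 2.5 m downstream of the source

print(f"{len(samples)} pressure samples, peak amplitude "
      f"{max(abs(s) for s in samples):.3f}")
```

Write those samples out as audio and you can literally hear the solution, which is presumably the fun Wang is talking about; the hard part the Stanford team tackled is doing this in 3D for arbitrary animated objects without a long pre-computation step.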

Challenges ahead

In its current form, the group’s process takes a while to create the finished product. But, now that they have proven this technique’s potential, they can focus on performance optimizations, such as implementing their method on parallel GPU hardware, that should make it drastically faster.

And, even in its current state, the results are worth the wait.

“The first water sounds we generated with the system were among the best ones we had simulated – and water is a huge challenge in computer-generated sound,” said James. “We thought we might get a little improvement, but it is dramatically better than previous approaches even right out of the box. It was really striking.”

Although the group’s work has faithfully rendered sounds of various objects spinning, falling and banging into each other, more complex objects and interactions – like the reverberating tones of a Stradivarius violin – remain difficult to model realistically. That, the group said, will have to wait for a future solution.

Timothy Langlois of Adobe Research is a co-author of this paper. This research was funded by the National Science Foundation and Adobe Research. James is also a professor, by courtesy, of music and a member of Stanford Bio-X.

Researchers Timothy Langlois, Doug L. James, Ante Qu and Jui-Hsien Wang have created a video featuring highlights of animations with sounds synthesized using the Stanford researchers’ new system,

The researchers have also provided this image,

By computing pressure waves cast off by rapidly moving and vibrating surfaces – such as a cymbal – a new sound synthesis system developed by Stanford researchers can automatically render realistic sound for computer animations. (Image credit: Timothy Langlois, Doug L. James, Ante Qu and Jui-Hsien Wang)

It does seem like we’re synthesizing the world around us, eh?

The SIGGRAPH 2018 art gallery

Here’s what SIGGRAPH had to say about its 2018 art gallery in Vancouver and the themes for the conference and the gallery (from a May 18, 2018 Association for Computing Machinery news release on globalnewswire.com; also on this 2018 SIGGRAPH webpage),

SIGGRAPH 2018, the world’s leading showcase of digital art created using computer graphics and interactive techniques, will present a special Art Gallery, entitled “Origins,” and historic Art Papers in Vancouver, B.C. The 45th SIGGRAPH conference will take place 12–16 August at the Vancouver Convention Centre. The programs will also honor the generations of creators that have come before through a special, 50th anniversary edition of the Leonardo journal. To register for the conference, visit S2018.SIGGRAPH.ORG.

The SIGGRAPH 2018 ART GALLERY is a curated exhibition, conceived as a dialogical space that enables the viewer to reflect on man’s diverse cultural values and rituals through contemporary creative practices. Building upon an exciting and eclectic selection of creative practices mediated through technologies that represent the sophistication of our times, the SIGGRAPH 2018 Art Gallery will embrace the narratives of the indigenous communities based near Vancouver and throughout Canada as a source of inspiration. The exhibition will feature contemporary media artworks, art pieces by indigenous communities, and other traces of technologically mediated ludic practices.

Andrés Burbano, SIGGRAPH 2018 Art Gallery chair and professor at Universidad de los Andes, said, “The Art Gallery aims to articulate myth and technology, science and art, the deep past and the computational present, and will coalesce around a theme of ‘Origins.’ Media and technological creative expressions will explore principles such as the origins of the cosmos, the origins of life, the origins of human presence, the origins of the occupation of territories in the Americas, and the origins of people living in the vast territories of the Arctic.”

He continued, “The venue [in Vancouver] hopes to rekindle the original spark that ignited the collaborative spirit of the SIGGRAPH community of engineers, scientists, and artists, who came together to create the very first conference in the early 1970s.”

Highlights from the 2018 Art Gallery include:

Transformation Mask (Canada) [Technology Based]
Shawn Hunt, independent; and Microsoft Garage: Andy Klein, Robert Butterworth, Jonathan Cobb, Jeremy Kersey, Stacey Mulcahy, Brendan O’Rourke, Brent Silk, and Julia Taylor-Hell, Microsoft Vancouver

TRANSFORMATION MASK is an interactive installation that features the Microsoft HoloLens. It utilizes electronics and mechanical engineering to express a physical and digital transformation. Participants are immersed in spatial sounds and holographic visuals.

Somnium (U.S.) [Science Based]
Marko Peljhan, Danny Bazo, and Karl Yerkes, University of California, Santa Barbara

Somnium is a cybernetic installation that provides visitors with the ability to sensorily, cognitively, and emotionally contemplate and experience exoplanetary discoveries, their macro and micro dimensions, and the potential for life in our Galaxy. Some might call it a “space telescope.”

Ernest Edmonds Retrospective – Art Systems 1968-2018 (United Kingdom) [History Based]
Ernest Edmonds, De Montfort University

Celebrating one of the pioneers of computer graphics-based art since the early 1970s, this Ernest Edmonds career retrospective will showcase snapshots of Edmonds’ work as it developed over the years. With one piece from each decade, the retrospective will also demonstrate how vital the Leonardo journal has been throughout the 50-year journey.

In addition to the works above, the Art Gallery will feature pieces from notable female artists Ozge Samanci, Ruth West, and Nicole L’Huillier. For more information about the Edmonds retrospective, read THIS POST ON THE ACM SIGGRAPH BLOG.

The SIGGRAPH 2018 ART PAPERS program is designed to feature research from artists, scientists, theorists, technologists, historians, and more in one of four categories: project description, theory/criticism, methods, or history. The chosen work was selected by an international jury of scholars, artists, and immersive technology developers.

To celebrate the 50th anniversary of LEONARDO (MIT Press), and 10 years of its annual SIGGRAPH issue, SIGGRAPH 2018 is pleased to announce a special anniversary edition of the journal, which will feature the 2018 art papers. For 50 years, Leonardo has been the definitive publication for artist-academics. To learn more about the relationship between SIGGRAPH and the journal, listen to THIS EPISODE OF THE SIGGRAPH SPOTLIGHT PODCAST.

“In order to encourage a wider range of topics, we introduced a new submission type, short papers. This enabled us to accept more content than in previous years. Additionally, for the first time, we will introduce sessions that integrate the Art Gallery artist talks with Art Papers talks, promoting richer connections between these two creative communities,” said Angus Forbes, SIGGRAPH 2018 Art Papers chair and professor at University of California, Santa Cruz.

Art Papers highlights include:

Alienating the Familiar with CGI: A Recipe for Making a Full CGI Art House Animated Feature [Long]
Alex Counsell and Paul Charisse, University of Portsmouth

This paper explores the process of making and funding an art house feature film using full CGI in a marketplace where this has never been attempted. It explores cutting-edge technology and production approaches, as well as routes to successful fundraising.

Augmented Fauna and Glass Mutations: A Dialogue Between Material and Technique in Glassblowing and 3D Printing [Long]
Tobias Klein, City University of Hong Kong

The two presented artworks, “Augmented Fauna” and “Glass Mutations,” were created during an artist residence at the PILCHUCK GLASS SCHOOL. They are examples of the qualities and methods established through a synthesis between digital workflows and traditional craft processes and thus formulate the notion of digital craftsmanship.

Inhabitat: An Imaginary Ecosystem in a Children’s Science Museum [Short]
Graham Wakefield, York University, and Haru Hyunkyung Ji, OCAD University

“Inhabitat” is a mixed reality artwork in which participants become part of an imaginary ecology through three simultaneous perspectives of scale and agency; three distinct ways to see with other eyes. This imaginary world was exhibited at a children’s science museum for five months, using an interactive projection-augmented sculpture, a large screen and speaker array, and a virtual reality head-mounted display.

What’s the what?

My father used to say that and I always assumed it meant summarize the high points, if you need to, and get to the point—fast. In that spirit, I am both fascinated and mildly appalled. The virtual, mixed, and augmented reality technologies, as well as the others being featured at SIGGRAPH 2018, are wondrous in many ways, but it seems we are coming ever closer to a world where we no longer interact with nature or other humans directly. (see my August 10, 2018 posting about the ‘extinction of experience’ for research that encourages more direct interaction with nature) I realize that SIGGRAPH is intended as a primarily technical experience but I think a little more content questioning these technologies and their applications (social implications) might be in order. That’s often the artist’s role but I can’t see anything in the art gallery descriptions that hints at any sort of fundamental critique.

Game design for scientific participation

Thanks to David Bruggeman for his Feb. 13, 2014 post (on the Pasco Phronesis blog) about a US National Science Foundation (NSF) webinar on designing scientific games; he has also embedded a video of a mobile game from Cancer Research UK. (His blog is well worth checking out for the information on science entertainment, as well as his main topic, science policy.)

The upcoming NSF webinar is titled, From World of Warcraft to Fold.it and Beyond; The Opportunities & Challenges to Designing Games for Scientific Participation and will be held on Friday, Feb. 21, 2014 (1 hr.),

February 21, 2014 12:00 PM to February 21, 2014 1:00 PM
NSF Room 110

Designing Disruptive Learning Technologies Webinar Series

Kurt Squire – University of Wisconsin-Madison

Abstract:

Digital games like World of Warcraft and Fold.it are compelling examples of how technology can engage thousands of learners in solving complex problems — even in making scientific discoveries. But what does it take to foster learning in the midst of such enthusiastic engagement? In this presentation, I will draw from a decade of research in how people learn and interact in online gaming environments and present findings from our work designing online environments for science learning. I will present pedagogical models for integrating gaming technologies into classrooms and research exploring how these games work for learning. Both the potential of games for science learning and challenges for leveraging gaming technologies at scale will be presented, as well as implications for further research on how people learn.

Bio:

Kurt Squire is a Romnes Professor in Digital Media in Curriculum and Instruction at the University of Wisconsin-Madison and Director of the Games+Learning+Society Theme at the Wisconsin Institute for Discovery. Squire is also a co-founder and Vice President of Research for the Learning Games Network, a non-profit network expanding the role of games and learning. Squire is an internationally recognized leader in digital media in technology and has delivered dozens of invited addresses across Europe, Asia, and North America and written over 75 scholarly articles on digital media and education. Squire’s research investigates the potential of digital game-based technologies for learning, and has resulted in several software projects including ARIS, Virulent, Citizen Science, among others. Squire is the recipient of an NSF CAREER grant, and grants from the NSF, Gates Foundation, MacArthur Foundation, AMD Foundation, Microsoft, Data Recognition Corporation and others. Squire was also a co-founder of Joystick101.org, and for several years wrote a column with Henry Jenkins for Computer Games magazine.

Webinar

The Webinar will be held from 12:00pm to 1:00pm Eastern Time on Friday, February 21, 2014.

Please register at https://nsf.webex.com/nsf/j.php?ED=239652927&RG=1&UID=0&RT=MiMxMQ%3D%3D  by 11:59pm Eastern Time on Thursday, February 20, 2014.

After your registration is accepted, you will receive an email with a URL to join the meeting. Please be sure to join a few minutes before the start of the webinar. This system does not establish a voice connection on your computer; instead, your acceptance message will have a toll-free phone number that you will be prompted to call after joining. In the event the number of requests exceeds the capacity, some requests may have to be denied.

This event is part of Webinars/Webcasts.

Meeting Type
Webcast

Contacts
Natalie Harr, (703) 292-8930, nharr@nsf.gov

Good luck with your registration. This webinar does seem to be open internationally although I imagine priority will be given to registrants located in the US.

2013 International Science & Engineering Visualization Challenge Winners

Thanks to an RT from @coreyspowell, I stumbled across a Feb. 7, 2014 article in Science (magazine) describing the 2013 International Science & Engineering Visualization Challenge Winners. I am highlighting a few of the entries here but there are more images in the article and a slideshow.

First Place: Illustration

Cortex in Metallic Pastels. Credit: Greg Dunn and Brian Edwards, Greg Dunn Design, Philadelphia, Pennsylvania; Marty Saggese, Society for Neuroscience, Washington, D.C.; Tracy Bale, University of Pennsylvania, Philadelphia; Rick Huganir, Johns Hopkins University, Baltimore, Maryland

From the article, a description of Greg Dunn and his work,

With a Ph.D. in neuroscience and a love of Asian art, it may have been inevitable that Greg Dunn would combine them to create sparse, striking illustrations of the brain. “It was a perfect synthesis of my interests,” Dunn says.

Cortex in Metallic Pastels represents a stylized section of the cerebral cortex, in which axons, dendrites, and other features create a scene reminiscent of a copse of silver birch at twilight. An accurate depiction of a slice of cerebral cortex would be a confusing mess, Dunn says, so he thins out the forest of cells, revealing the delicate branching structure of each neuron.

Dunn blows pigments across the canvas to create the neurons and highlights some of them in gold leaf and palladium, a technique he is keen to develop further.

“My eventual goal is to start an art-science lab,” he says. It would bring students of art and science together to develop new artistic techniques. He is already using lithography to give each neuron in his paintings a different angle of reflectance. “As you walk around, different neurons appear and disappear, so you can pack it with information,” he says.

People’s Choice: Games & Apps

Meta!Blast: The Leaf. Credit: Eve Syrkin Wurtele, William Schneller, Paul Klippel, Greg Hanes, Andrew Navratil, and Diane Bassham, Iowa State University, Ames

More from the article,

“Most people don’t expect a whole ecosystem right on the leaf surface,” says Eve Syrkin Wurtele, a plant biologist at Iowa State University. Meta!Blast: The Leaf, the game that Wurtele and her team created, lets high school students pilot a miniature bioship across this strange landscape, which features nematodes and a lumbering tardigrade. They can dive into individual cells and zoom around a chloroplast, activating photosynthesis with their ship’s search lamp. Pilots can also scan each organelle they encounter to bring up more information about it from the ship’s BioLog—a neat way to put plant biology at the heart of an interactive gaming environment.

This is a second recognition for Meta!Blast, which won an Honorable Mention in the 2011 visualization challenge for a version limited to the inside of a plant cell.

The Meta!Blast website homepage describes the game,

The last remaining plant cell in existence is dying. An expert team of plant scientists have inexplicably disappeared. Can you rescue the lost team, discover what is killing the plant, and save the world?

Meta!Blast is a real-time 3D action-adventure game that puts you in the pilot’s seat. Shrink down to microscopic size and explore the vivid, dynamic world of a soybean plant cell spinning out of control. Interact with numerous characters, fight off plant pathogens, and discover how important plants are to the survival of the human race.

Enjoy!

A chance to game the future Sept. 25 and 26, 2013 starting 9 am PDT

Thanks to David Bruggeman at his Pasco Phronesis blog (his Sept. 20, 2013 posting) for featuring a 36-hour conversation/game (which is recruiting players/participants). It is called Innovate 2038 and you do have to pre-register for this game. For anyone who likes a little more information before jumping in to join, here’s what the Innovate 2038 About page has to offer,

About Innovate2038

The traditional paths to research and technology innovation will no longer work for the critical challenges and new opportunities of 2038.

Increasingly constrained resources, the rise of mega-cities, and rapidly shifting developments in business processes, regulations, and consumer sentiment will present epic challenges to business as usual.

At the same time, opportunities will abound. Emerging fields like 3D-printing and additive manufacturing, synthetic biology, and data modeling will catalyze the next generation of products, services, and markets—if research and innovation can lead the way.

But managing all of these research and innovation efforts will require new tools and technologies, new skills in project and talent management, new players and collaborations, and ultimately a collective re-imagining of the value proposition of research to society.

Innovate2038 is a 36-hour global conversation based on IRI’s extensive IRI2038 research project to uncover new ideas and new strategies that can reinvent the very concept of R&D and technology innovation management for the 21st century.

On Sept 25 & 26, 2013, Innovate2038 will take place in corporations, labs, classrooms, but also hacker-spaces, online innovation communities, and networks of researchers and makers, in countries around the world.

Innovate2038 will bring together current leading voices together with emerging and below-the-radar new players that will be increasingly important to the practice of research and innovation.

The platform to support the conversation is itself a signal of the future—a cutting-edge crowdsourced game called Foresight Engine, developed and facilitated by the Institute for the Future. It’s designed to spark new ideas and inspire collaborations among hundreds of people around the world. [emphasis mine]

In as little as five minutes, you can log on to share rapid-fire micro-contributions that will help make the future of research and innovation, heading out to 2038.
For a day and a half, you can compete to win points, achieve awards, and gain the recognition of the leading thinkers in technology management today.

Pre-register right now as a ‘game insider’ to be the first to know about the game leading up to the Sept 25 launch.

David notes that this ’36-hour conversation/game is part of a larger project, from his Sept. 20, 2013 Pasco Phronesis posting (Note: Links have been removed),

… This is part of the Industrial Research Institute’s project on 2038 Future, which focuses on the art and science of research and development management.  That project has involved possible future scenarios, retrospective examinations of research management, and scanning the current environment.  The game engine was developed by the Institute for the Future, and is called the Foresight Engine.  The basics of the engine encourage participants to contribute short ideas, with points going to those ideas that get approved and/or built on by other participants.

Here’s more about the  Industrial Research Institute (IRI) from their History webpage,

Fourteen companies comprised the original membership of the Institute when it was formed in 1938, under the auspices of the [US] National Research Council (NRC). Four of these companies retain membership today: Colgate-Palmolive Company, Procter & Gamble Company, Hercules Powder Company (now Ashland, Inc.), and UOP, LLC, formerly known as Universal Oil Products (acquired by Honeywell). Four of the first five presidents were from the six charter-member-company category.

Maurice Holland, then Director of the NRC Division of Engineering, was largely responsible for bringing together about 50 representatives from industry, government, and universities to an initial organizational meeting in February 1938 in New York City. The Institute was an integral part of the National Research Council until 1945, when it separated to become a non-profit membership corporation in the State of New York. However, association with the Council continues unbroken.

At the founding meeting, several speakers stressed the need for an association of research directors–something different from the usual technical society–and that the benefits to be derived would depend on the extent of cooperation by its members. The greatest advantage, they said, would come through personal contacts with members of the group–still a major characteristic of IRI.

In more recent years, the activities of the Association have broadened considerably. IRI now offers services to the full range of R&D and innovation professionals in the United States and abroad.

I went exploring and found this about the game developer, Institute for the Future  (IFTF) on the website’s Who We Are page (Note: Links have been removed),

IFTF is an independent, non-profit research organization with a 45-year track record of helping all kinds of organizations make the futures they want. Our core research staff and creative design studio work together to provide practical foresight for a world undergoing rapid change.

….

Here’s more about the Foresight Engine, the “cutting-edge crowdsourced game,” from the IFTF website’s Collaborative Forecasting Games webpage,

Collaborative Forecasting Games: a crowd’s view of the future

Collaborative forecasting games engage a large and diverse group of people—potentially from around the world—to imagine futures that might go unnoticed by a team of experts. These crowds may include the general public, a targeted sector of the public, or the entire staff of a private organization. And the games themselves can range from futures brainstorming to virtual innovation gameboards and even rich narrative platforms for telling important stories about the future.

Foresight Engine

IFTF has a collaborative forecasting platform called Foresight Engine that makes it easy to set up games without a lot of investment in game design. In the tradition of brainstorming, the platform invites people to play positive or critical ideas about the future and then to build on these ideas to form chains of discussion—complete with points, awards, and achievements for winning ideas. While the focus of the platform is on Twitter-length ideas of 140 characters or less, a Foresight Engine game does much more than harvest innovative ideas. It builds a literacy among players about the future issues addressed by the game, and it also provides a window on the crowd’s level of understanding of complex futures—laying the foundation for future literacy building. It shows who inspires the greatest following and often surfaces potential thought leaders.
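For readers who like to see mechanics spelled out, here is a minimal sketch (in Python) of how a Foresight Engine-style game might represent its micro-contributions: Twitter-length ideas that chain together and score points as other players build on them. To be clear, the class, field, and function names and the scoring rule are my own inventions for illustration; IFTF has not published its implementation.

```python
# Toy model of a Foresight Engine-style game: players submit
# micro-contributions (140 characters or less), build on earlier
# ideas to form chains of discussion, and score points when others
# build on their ideas. All names and the scoring rule are invented
# for illustration; this is not IFTF's actual platform.

from dataclasses import dataclass, field
from typing import List, Optional

MAX_LENGTH = 140  # "Twitter-length ideas of 140 characters or less"

@dataclass
class Idea:
    author: str
    text: str
    parent: Optional["Idea"] = field(default=None, repr=False)  # idea built on
    replies: List["Idea"] = field(default_factory=list, repr=False)

    def __post_init__(self):
        if len(self.text) > MAX_LENGTH:
            raise ValueError("ideas are limited to 140 characters")

def build_on(parent: Idea, author: str, text: str) -> Idea:
    """Extend a chain of discussion with a new micro-contribution."""
    child = Idea(author, text, parent=parent)
    parent.replies.append(child)
    return child

def score(idea: Idea) -> int:
    """One point per descendant: ideas win by attracting build-ons."""
    return sum(1 + score(reply) for reply in idea.replies)

# A three-idea chain about R&D heading out to 2038:
seed = Idea("alice", "By 2038, R&D teams are assembled on demand from global talent pools.")
positive = build_on(seed, "bob", "Positive: reputation systems replace institutional affiliation.")
build_on(positive, "carol", "Critical: who sustains long-horizon projects if teams are ephemeral?")
print(score(seed))  # -> 2
```

The recursive scoring captures the description above: an idea ‘wins’ not by being long or polished but by attracting build-ons from the rest of the crowd.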

It’s always interesting to dig into an organization’s history (from the IFTF’s History page),

The Institute for the Future has 45 years of forecasts on which to reflect. We’re based in California’s Silicon Valley—a community at the crossroads of technological innovation, social experimentation, and global interchange. Founded in 1968 by a group of former RAND Corporation researchers with a grant from the Ford Foundation to take leading-edge research methodologies into the public and business sectors, IFTF is committed to building the future by understanding it deeply.

I wonder if the Innovate 2038 game/conversation will take place in any language other than English. In any event, I just tried to register and couldn’t. I hope this is a problem on my end rather than the organizers’, as I know how devastating it can be to have your project encounter this kind of roadblock just before launching.

Internship at Science and Technology Innovation Program in Washington, DC

The Woodrow Wilson International Center for Scholars is advertising for a media-focused intern for Spring 2013. From the Dec. 12, 2012 notice,

The Science and Technology Innovation Program (STIP) at the Woodrow Wilson International Center for Scholars is currently seeking a media-focused intern for Spring 2013. The mission of STIP is to explore the scientific and technological frontier, stimulating discovery and bringing new tools to bear on public policy challenges that emerge as science advances.

Specific project areas include: nanotechnology, synthetic biology, Do-It-Yourself biology, the use of social media in disaster response, serious games, geoengineering, and additive manufacturing. Interns will work closely with a small, interdisciplinary team.

  • Applicants should be a graduate or undergraduate student with a background or strong interest in journalism, science/technology policy, public policy and/or policy analysis.
  • Solid reporting, writing and computer skills are a must. Experience with video/audio editing and new media is strongly desired.
  • Responsibilities include assisting with the website/social media, writing and editing, helping produce and edit short-form videos, staffing events and other duties as assigned.
  • Applicants should be creative, ready to engage in a wide variety of tasks and able to work independently and with a team in a fast-paced environment.
  • The internship is expected to last for 3-5 months at 15-20 hours per week. Scheduling is flexible.
  • Please include 2-3 writing samples/clips and links to any video/documentary work.
  • Compensation may be available.

To apply, please submit a cover letter, resume, and brief writing sample to stipintern@wilsoncenter.org with SPRING 2013 INTERN in the subject line.

There doesn’t seem to be any additional information about the internship on the Wilson Center website but you can check for yourself here. Good luck!

RNA (ribonucleic acid) video game

I am a great fan of Foldit, a protein-folding game I have mentioned several times here (my first posting about Foldit was Aug. 6, 2010) and now, via the Foresight Institute’s July 16, 2012 blog posting, I have discovered an RNA video game (Note: I have removed links),

As we pointed out a few months ago, the greater complexity of folding rules for RNA compared to its chemical cousin DNA gives RNA a greater variety of compact, three-dimensional shapes and a different set of potential functions than is the case with DNA, and this gives RNA nanotechnology a different set of advantages compared to DNA nanotechnology … Proteins have even more complex folding rules and an even greater variety of structures and functions. We also noted here that online gamers playing Foldit topped scientists in redesigning a protein to achieve a novel enzymatic activity that might be especially useful in developing molecular building blocks for molecular manufacturing. Now KurzweilAI.net brings news of an online game that allows players to design RNA molecules …

Here’s more from the KurzweilAI.net June 26, 2012 posting about the new RNA game EteRNA,

EteRNA, an online game with more than 38,000 registered users, allows players to design molecules of ribonucleic acid — RNA — that have the power to build proteins or regulate genes.

EteRNA players manipulate nucleotides, the fundamental building blocks of RNA, to coax molecules into shapes specified by the game.

Those shapes represent how RNA appears in nature while it goes about its work as one of life’s most essential ingredients.

EteRNA was developed by scientists at Stanford and Carnegie Mellon universities, who use the designs created by players to decipher how real RNA works. The game is a direct descendant of Foldit — another science crowdsourcing tool disguised as entertainment — which gets players to help figure out the folding structures of proteins.

Here’s how the EteRNA folks describe this game (from the About EteRNA page),

By playing EteRNA, you will participate in creating the first large-scale library of synthetic RNA designs. Your efforts will help reveal new principles for designing RNA-based switches and nanomachines — new systems for seeking and eventually controlling living cells and disease-causing viruses. By interacting with thousands of players and learning from real experimental feedback, you will be pioneering a completely new way to do science. Join the global laboratory!

The About EteRNA webpage also offers a discussion about RNA,

RNA is often called the “Dark Matter of Biology.” While originally thought to be an unstable cousin of DNA, recent discoveries have shown that RNA can do amazing things. They play key roles in the fundamental processes of life and disease, from protein synthesis and HIV replication, to cellular control. However, the full biological and medical implications of these discoveries is still being worked out.

RNA is made of four nucleotides (A, C, G, and U, which stand for adenine, cytosine, guanine, and uracil). Chemically, each of these building blocks is made of atoms of carbon, oxygen, nitrogen, phosphorus, and hydrogen. When you design RNAs with EteRNA, you’re really creating a chain of these nucleotides.

RNA Nucleotides (from the About EteRNA webpage)

Scientists do not yet understand all of RNA’s roles, but we already know about a large collection of RNAs that are critical for life (see the Thermus Thermophilus image representing the following points):

  1. mRNAs are short copies of a cell’s DNA genome that get cut up, pasted, spliced, and otherwise remixed before getting translated into proteins.
  2. rRNA forms the core machinery of an ancient machine, the ribosome. This machine synthesizes the proteins of your cells and all living cells, and is the target of most antibiotics.
  3. miRNAs (microRNAs) are short molecules (about 22 letters) that are used by all complex cells as commands for silencing genes and appear to have roles in cancer, heart disease, and other medical problems.
  4. Riboswitches are ubiquitous in bacteria. They sense all sorts of small molecules that could be food or signals from other bacteria, and turn on or off genes by changing their shapes. These are interesting targets for new antibiotics.
  5. Ribozymes are RNAs that can act as enzymes. They catalyze chemical reactions like protein synthesis and RNA splicing, and provide evidence of RNA’s dominance in a primordial stage of Life’s evolution.
  6. Retroviruses, like Hepatitis C, poliovirus, and HIV, are very large RNAs coated with proteins.
  7. And much much more… shRNA, piRNA, snRNA, and other new classes of important RNAs are being discovered every year.

Thermus Thermophilus – Large Subunit Ribosomal RNA
Source: Center for Molecular Biology (downloaded from the About EteRNA webpage)
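Since the About page boils the design task down to ‘creating a chain of these nucleotides’, a tiny code sketch may make the raw material more concrete. The following Python snippet is my own textbook-level illustration, not EteRNA’s engine; the G-U ‘wobble’ pair is a standard piece of RNA biochemistry I have added, and it is one reason RNA folds into a richer variety of shapes than DNA.

```python
# The four RNA building blocks named above.
NUCLEOTIDES = set("ACGU")

# Watson-Crick pairs (A-U, G-C) plus the weaker G-U "wobble" pair,
# which RNA allows and DNA does not.
CAN_PAIR = {("A", "U"), ("U", "A"),
            ("G", "C"), ("C", "G"),
            ("G", "U"), ("U", "G")}

def validate(sequence: str) -> str:
    """Confirm a designed chain uses only A, C, G, and U."""
    seq = sequence.upper()
    stray = set(seq) - NUCLEOTIDES
    if stray:
        raise ValueError(f"not RNA nucleotides: {sorted(stray)}")
    return seq

def arms_can_pair(arm1: str, arm2: str) -> bool:
    """Can two arms zip into a double-stranded stem? The second arm
    is read in reverse because the strand folds back on itself."""
    arm1, arm2 = validate(arm1), validate(arm2)
    return len(arm1) == len(arm2) and all(
        (a, b) in CAN_PAIR for a, b in zip(arm1, reversed(arm2)))

print(arms_can_pair("GGAC", "GUCC"))  # True: G-C, G-C, A-U, C-G
print(arms_can_pair("GGAC", "AUCC"))  # False: the final C-A cannot pair
```

The game itself layers much more on top (target shapes specified by the puzzle and, eventually, real experimental feedback), but legal pairings are the basic move a player makes.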

I do wonder about the wordplay EteRNA/eternal. Are these scientists trying to tell us something?

God from the machine: Deus ex machina and augmentation

Wherever you go, there it is: ancient Greece. Deus Ex, a game series from Eidos Montréal, is likely referencing ‘deus ex machina’, a term applied to a theatrical device (in both senses of the word) attributed to playwrights of ancient Greece. (For anyone who’s unfamiliar with the term, at the end of a play, all of the conflicts would be resolved by a god descending from the heavens. The term refers both to the plot device itself and to the mechanical device used to lower the ‘god’.)

The latest game in the series, Deus Ex: Human Revolution, a role-playing shooter, will be released August 23, 2011. From the August 16, 2011 article by Susan Karlin for Fast Company,

The result—Deus Ex: Human Revolution, a role-playing shooter that comes out August 23–extrapolates MicroTransponder, prosthetics, robotics, and other current augmentation technology into a vision of how technologically enhanced people might gain superhuman abilities and at what cost.

… “We built a timeline that traces the history of augmentation, creating new things, and predicting how would it get out into society. We wanted to ground it in today, and make something where everyone could say, ‘I can see the world going that way.'” [Mary DeMarle, Human Revolution’s lead writer]

Human Revolution, although the third in the series, is a prequel to the original Deus Ex, which took place 25 years after Human Revolution.

I’m glad to see games that bring up interesting philosophical questions and possible social impacts of emerging technologies along with the action. In a February 3, 2011 interview with Mary DeMarle, Quintin Smith of Rock, Paper, Shotgun posed these questions,

RPS: Finally, with anti-augmentation groups featuring in Human Revolution, I was just wondering what your own opinions on human augmentation and human bioengineering are.

MD: Oh, gosh. Well I have to tell you that the joke on the team is that for the duration of this story I’d be supporting the anti-technology view, because most people on the team wouldn’t be anti-technology, and it’d help me make the game more human, you know? And now that the project’s over I bought my first iPad, and I have to admit I’m suddenly like “You know, if I could get one of those InfoLinks in my head, it’d be really useful.”

But you know, all of this stuff is already out there. We already have people putting cameras in their eyes to improve their vision. [emphasis mine] The technology’s there, we’re just not aware of it. As far as our team’s technology expert is concerned, human augmentation’s been going on for decades. If you look at all the sports controversy regarding drugs, that is augmentation. It’s already happening.

RPS: But you have no qualms with our using technology to make ourselves more than we can be?

MD: From my perspective, I think mankind will always try to be more than he is. That’s part of being human. But I do admit we have to be careful about how we do it.

In my February 2, 2010 posting (scroll down about 1/2 way), I featured a quote that resonates with DeMarle’s comments about humans trying to be more,

“I don’t think I would have said this if it had never happened,” says Bailey, referring to the accident that tore off his pinkie, ring, and middle fingers. “But I told Touch Bionics I’d cut the rest of my hand off if I could make all five of my fingers robotic.”

Bailey went on to say that having machinery incorporated into his body made him feel “above human”.

As for cameras being implanted in eyes to improve vision, I would be delighted to hear from anyone who has information about this. The only project I could find in my search was EyeBorg, a project with a one-eyed Canadian filmmaker who was planning to have a video camera implanted into his eye socket to record images. From the About the Project page,

Take a one eyed film maker, an unemployed engineer, and a vision for something that’s never been done before and you have yourself the EyeBorg Project. Rob Spence and Kosta Grammatis are trying to make history by embedding a video camera and a transmitter in a prosthetic eye. That eye is going in Robs eye socket, and will record the world from a perspective that’s never been seen before.

There are more details about the EyeBorg project in a June 11, 2010 posting by Tim Hornyak for the Automaton blog (on the IEEE [Institute of Electrical and Electronics Engineers] website),

When Canadian filmmaker Rob Spence was a kid, he would peer through the bionic eye of his Six Million Dollar action figure. After a shooting accident left him partially blind, he decided to create his own electronic eye. Now he calls himself Eyeborg.

Spence’s bionic eye contains a battery-powered, wireless video camera. Not only can he record everything he sees just by looking around, but soon people will be able to log on to his video feed and view the world through his right eye.

I don’t know how the Eyeborg project is proceeding as there haven’t been any updates on the site since August 25, 2010.

While I wish Quintin Smith had asked for more details about the science information DeMarle was passing on in the February 3, 2011 interview, I think it’s interesting to note that information about science and technology comes to us in many ways: advertisements, popular television programmes, comic books, interviews, and games, as well as formal public science outreach programmes through museums and educational institutions.

ETA August 19, 2011: I found some information about visual prosthetics at the European Commission’s Future and Emerging Technologies (FET) website, We can rebuild you page featuring a TEDxVienna November 2010 talk by electrical engineer, Grégoire Cosendai, from the Swiss Federal Institute of Technology. He doesn’t mention the prosthetics until approximately 13 minutes, 25 seconds into the talk. The work is being done to help people with retinitis pigmentosa, a condition that is incurable at this time but it may have implications for others. There are 30 people worldwide in a clinical trial testing a retinal implant that requires the person wear special glasses containing a camera and an antenna. For Star Trek fans, this seems similar to Geordi LaForge‘s special glasses.

ETA Sept. 13, 2011: Better late than never, here’s an excerpt from Dexter Johnson’s Sept. 2, 2011 posting (on his Nanoclast blog at the Institute of Electrical and Electronics Engineers [IEEE] website) about a nano retina project,

The Israel-based company [Nano Retina] is a joint venture between Rainbow Medical and Zyvex Labs, the latter being well known for its work in nanotechnology and its founder Jim Von Ehr, who has been a strong proponent of molecular mechanosynthesis.

It’s well worth contrasting the information in the company video that Dexter provides with the information in the FET video mentioned in the Aug. 19, 2011 update preceding this one. The company presents a vastly more optimistic claim for the vision these implants will provide than one would expect after viewing the FET video’s information about the clinical trials currently taking place for another, similar (to me) system.

Science education for children in Europe, so what’s happening in BC?

I’d been informally collecting information about children’s science education for a few months when, yesterday, there was a sudden explosion of articles (well, there were three) on the subject.

First off, the European Commission launched a science game titled Power of Research. From the March 2, 2011 news item on Nanowerk,

A new strategy browser game – the “Power of research” – is officially launched. Supported by the European Commission, “Power of Research” has been developed to inspire young Europeans to pursue scientific careers and disseminate interesting up-to-date scientific information. Players assume the role of scientists working in a virtual research environment that replicates the situations that scientists have to deal with in the real world. The game, which can be played for free under www.powerofresearch.eu, is expected to create a large community of more than 100,000 players who will be able to communicate in real time via a state of the art interface.

They really do mean it when they say they’re replicating real-life situations, but the focus is on medical science research and I don’t think the game title makes that clear. Yes, there are many similarities between the situations that scientists of any stripe encounter in their labs, but there are also some significant differences. In any event,

In “Power of Research” players can engage in “virtual” health research projects, by performing microscopy, protein isolation and DNA experiments, publishing research results, participating in conferences, managing high tech equipment and staff or request funding – all tasks of real researchers. The decisive game elements are communication, collaboration and competition: players can compete against each other in real time or collaborate to become a successful virtual researcher, win scientific awards or become the leader of a research institute.

The game connects the players to the real world. It is based on up-to-date science content and players work on real world research topics inspired by the FP7 health research programme that will be regularly updated. Popular science events, real research institutes, universities and European health research projects form part of the game. Players also have access to a knowledge platform, where they can search in a virtual library, zoom-into real scientific images and learn more about Nobel Prize laureates. European science institutions and hospitals will have the possibility to contribute to the game and provide details about their research.

I like the immersiveness and the game aspect of this project very much. I do wish they were a little clearer about exactly what kind of research the player will engage in. From the Power of Research About webpage,

Your researcher

* Become a famous researcher in “Power of Research”

* Research different topics through exciting research projects

* Play together with your friends and other players from all over the world

* Earn reputation, win science prizes and more …

* Gain special skills and knowledge in 9 different main research areas (like Brain, Paediatrics, …)

* Become a leader in your institute and lead it to international ranks

* The game is 100% free and needs no prior knowledge
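Out of curiosity, here is how the gameplay loop quoted above might look reduced to code. This is a toy sketch only; the action names and point values are invented by me rather than taken from the actual game.

```python
# Toy model of a "Power of Research"-style loop: perform lab tasks,
# publish, win funding, and build reputation toward leading an
# institute. All action names and point values here are invented
# for illustration; they are not from the actual game.

REPUTATION = {
    "microscopy": 5,
    "protein_isolation": 8,
    "dna_experiment": 10,
    "attend_conference": 10,
    "win_funding": 20,
    "publish_results": 25,
}

class VirtualResearcher:
    def __init__(self, name: str):
        self.name = name
        self.reputation = 0
        self.history = []  # actions performed so far

    def perform(self, action: str) -> None:
        """Carry out one of the tasks of a (virtual) researcher."""
        if action not in REPUTATION:
            raise ValueError(f"unknown action: {action}")
        self.reputation += REPUTATION[action]
        self.history.append(action)

    def can_lead_institute(self, threshold: int = 50) -> bool:
        # Becoming the leader of a research institute is the
        # late-game goal described in the excerpt above.
        return self.reputation >= threshold

player = VirtualResearcher("Ada")
for action in ("microscopy", "dna_experiment", "publish_results", "win_funding"):
    player.perform(action)
print(player.reputation, player.can_lead_institute())  # 60 True
```

What a single-player sketch like this cannot capture is precisely what the developers call the decisive elements: real-time communication, collaboration, and competition among players.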

Meanwhile, there are more projects. From the March 2, 2011 news item on physorg.com,

Children who are taught how to think and act like scientists develop a clearer understanding of the subject, a study has shown.

The research project led by The University of Nottingham and The Open University has shown that school children who took the lead in investigating science topics of interest to them gained an understanding of good scientific practice.

The study shows that this method of ‘personal inquiry’ could be used to help children develop the skills needed to weigh up misinformation in the media, understand the impact of science and technology on everyday life and help them to make better personal decisions on issues including diet, health and their own effect on the environment.

The three-year project involved providing pupils aged 11 to 14 at Hadden Park High School in Bilborough, Nottingham, and Oakgrove School in Milton Keynes with a new computer toolkit named nQuire, now available as a free download for teachers and schools.

The pupils were given wide themes for their studies but were asked to decide on more specific topics that were of interest to them, including heart rate and fitness, micro climates, healthy eating, sustainability and the effect of noise pollution on birds.

The flexible nature of the toolkit meant that children could become “science investigators”, starting an inquiry in the classroom then collecting data in the playground, at a local nature reserve, or even at home, then sharing and analysing their findings back in class.

Immersive and engaging, yes? I have gone to the nQuire website and while I haven’t downloaded the software, I did successfully log in to the demonstration; in other words, the demonstration is not limited to a UK-based audience.

Meanwhile there’s this project but it seems to be different. It’s spelled differently, INQUIRE, and the focus is on the teachers. From a March 2, 2011 news item on Science Daily,

Thousands of schoolchildren will soon be asking the questions when inquiry-based learning comes to science classrooms across Europe, turning the traditional model of science teaching on its head. The pan-European INQUIRE programme is an exciting new teacher-training initiative delivered by a seventeen-strong consortium of botanic gardens, natural history museums, universities and NGOs.

Coordinated by Innsbruck University Botanic Garden, with support from London-based Botanic Gardens Conservation International (BGCI), INQUIRE is a practical, one-year, continual professional development (CPD) course targeted at qualified teachers working in eleven European countries. Its focus on inquiry-based science education (IBSE) reflects a consensus in the science education community that IBSE methods are more effective than current teaching practices.

Designed to reflect how students actually learn, IBSE also engages them in the process of scientific inquiry. Increasingly it is seen as key to developing their scientific literacy, enhancing their understanding of scientific concepts and heightening their appreciation of how science works. Whereas traditional teaching methods have failed to engage many students, especially in developed countries, IBSE offers outstanding opportunities for effective and enjoyable teaching and learning.

Biodiversity loss and global climate change, among the major scientific as well as political challenges of our age, are core INQUIRE concerns.

That final sentence is a little puzzling but I believe they’re describing their scientific focus.

My favourite of these projects is one I came across in December 2010 when children from a school in England had a research paper about bees published in the Royal Society’s Biology Letters. You can still access the paper (according to another blogger, Ed Yong, open access was only supposed to last until the new year in 2011, but they must have changed their minds). The paper is titled Blackawton bees and lists 30 authors.

1. P. S. Blackawton,
2. S. Airzee,
3. A. Allen,
4. S. Baker,
5. A. Berrow,
6. C. Blair,
7. M. Churchill,
8. J. Coles,
9. R. F.-J. Cumming,
10. L. Fraquelli,
11. C. Hackford,
12. A. Hinton Mellor,
13. M. Hutchcroft,
14. B. Ireland,
15. D. Jewsbury,
16. A. Littlejohns,
17. G. M. Littlejohns,
18. M. Lotto,
19. J. McKeown,
20. A. O’Toole,
21. H. Richards,
22. L. Robbins-Davey,
23. S. Roblyn,
24. H. Rodwell-Lynn,
25. D. Schenck,
26. J. Springer,
27. A. Wishy,
28. T. Rodwell-Lynn,
29. D. Strudwick and
30. R. B. Lotto

This is from the introduction to the paper,

(a) Once upon a time …

People think that humans are the smartest of animals, and most people do not think about other animals as being smart, or at least think that they are not as smart as humans. Knowing that other animals are as smart as us means we can appreciate them more, which could also help us to help them.

If you don’t ever read another science paper in your life, read this one. For the back story on this project, here’s Ed Yong on his Not Exactly Rocket Science blog (a Discover blog) in a December 21, 2010 posting,

“We also discovered that science is cool and fun because you get to do stuff that no one has ever done before.”

This is the conclusion of a new paper published in Biology Letters, a high-powered journal from the UK’s prestigious Royal Society. If its tone seems unusual, that’s because its authors are children from Blackawton Primary School in Devon, England. Aged between 8 and 10, the 25 children have just become the youngest scientists to ever be published in a Royal Society journal.

Their paper, based on fieldwork carried out in a local churchyard, describes how bumblebees can learn which flowers to forage from with more flexibility than anyone had thought. It’s the culmination of a project called ‘i, scientist’, designed to get students to actually carry out scientific research themselves. The kids received some support from Beau Lotto, a neuroscientist at UCL [University College London], and David Strudwick, Blackawton’s head teacher. But the work is all their own.

Yong’s posting features a video of the i, scientist project mentioned in the posting, images, and, of course, the rest of the back story.

As it turns out, one of my favourite science education/engagement projects, I’m a Scientist, Get me out of Here! (based in the UK), is taking place right now. From their website home page,

I’m a Scientist, Get me out of Here! is an award-winning science enrichment and engagement activity, funded by the Wellcome Trust. It takes place online over a two week period. It’s an X Factor-style competition for scientists, where students are the judges. Scientists and students talk online on this website. They both break down barriers, have fun and learn. But only the students get to vote.

You can view the scientist/student conversations by picking a zone: Argon, Chlorine, Potassium, Forensic, Space, or Stem Cell. The questions the kids ask are fascinating, anything from What’s your favourite colour? to Do you think humans will evolve more? The conversations that ensue can be quite stimulating. This project has been mentioned here before in my June 15, 2010 posting, April 13, 2010 posting (scroll down) and March 26, 2010 posting (scroll down).

ETA Mar. 3, 2011: The scientists get quite involved and can go to some lengths to win. Here’s Tom Hartley’s video from last year’s (2010) event,

I find the contrast between these kinds of science education/engagement projects in the UK and in Europe and what seems to be a dearth of them in my home province of British Columbia (Canada) to be striking. I commented previously on BC’s Year of Science initiative, currently taking place, in a Dec. 30, 2010 posting about the lack of science culture in Canada. Again, I applaud the initiative while urging that a less traditional, top-down approach be taken in future. The Europeans and the British are making science fun by engaging in imaginative and substantive ways. Imagine what getting a paper published in a prestigious science journal does for you (regardless of your age)!