Tag Archives: virtual reality

SIGGRAPH (Special Interest Group on Computer GRAPHics and Interactive Techniques) and their art gallery from Aug. 12 – 16, 2018 (making the world synthetic) in Vancouver (Canada)

While my main interest is the group’s temporary art gallery, I am providing a brief explanatory introduction and a couple of previews for SIGGRAPH 2018.

Introduction

For anyone unfamiliar with the Special Interest Group on Computer GRAPHics and Interactive Techniques (SIGGRAPH) and its conferences, here’s a description from the SIGGRAPH Wikipedia entry (Note: Links have been removed),

Some highlights of the conference are its Animation Theater and Electronic Theater presentations, where recently created CG films are played. There is a large exhibition floor, where several hundred companies set up elaborate booths and compete for attention and recruits. Most of the companies are in the engineering, graphics, motion picture, or video game industries. There are also many booths for schools which specialize in computer graphics or interactivity.

Dozens of research papers are presented each year, and SIGGRAPH is widely considered the most prestigious forum for the publication of computer graphics research.[1] The recent paper acceptance rate for SIGGRAPH has been less than 26%.[2] The submitted papers are peer-reviewed in a single-blind process.[3] There has been some criticism about the preference of SIGGRAPH paper reviewers for novel results rather than useful incremental progress.[4][5] …

This is the third SIGGRAPH Vancouver has hosted; the others were in 2011 and 2014.  The theme for the 2018 iteration is ‘Generations’; here’s more about it from an Aug. 2, 2018 article by Terry Flores for Variety,

While its focus is firmly forward thinking, SIGGRAPH 2018, the computer graphics, animation, virtual reality, games, digital art, mixed reality, and emerging technologies conference, is also tipping its hat to the past thanks to its theme this year: Generations. The conference runs Aug. 12-16 in Vancouver, B.C.

“In the literal people sense, pioneers in the computer graphics industry are standing shoulder to shoulder with researchers, practitioners and the future of the industry — young people — mentoring them, dabbling across multiple disciplines to innovate, relate, and grow,” says SIGGRAPH 2018 conference chair Roy C. Anthony, VP of creative development and operations at software and technology firm Ventuz. “This is really what SIGGRAPH has always been about. Generations really seemed like a very appropriate way of looking back and remembering where we all came from and how far we’ve come.”

SIGGRAPH 2018 has a number of treats in store for attendees, including the debut of Disney’s first VR film, the short “Cycles”; production sessions on the making of “Blade Runner 2049,” “Game of Thrones,” “Incredibles 2” and “Avengers: Infinity War”; as well as sneak peeks of Disney’s upcoming “Ralph Breaks the Internet: Wreck-It Ralph 2” and Laika’s “Missing Link.”

That list of ‘treats’ in the last paragraph makes the conference seem more like an iteration of a ‘comic-con’ than a technology conference.

Previews

I have four items about work that will be presented at SIGGRAPH 2018. First up, something about ‘redirected walking’ from a June 18, 2018 Association for Computing Machinery news release on EurekAlert,

CHICAGO–In the burgeoning world of virtual reality (VR) technology, it remains a challenge to provide users with a realistic perception of infinite space and natural walking capabilities in the virtual environment. A team of computer scientists has introduced a new approach to address this problem by leveraging a natural human phenomenon: eye blinks.

All humans are functionally blind for about 10 percent of the time under normal circumstances due to eye blinks and saccades, a rapid movement of the eye between two points or objects. Eye blinks are a common and natural cause of so-called “change blindness,” which indicates the inability for humans to notice changes to visual scenes. Zeroing in on eye blinks and change blindness, the team has devised a novel computational system that effectively redirects the user in the virtual environment during these natural instances, all with undetectable camera movements to deliver orientation redirection.

“Previous RDW [redirected walking] techniques apply rotations continuously while the user is walking. But the amount of unnoticeable rotations is limited,” notes Eike Langbehn, lead author of the research and doctoral candidate at the University of Hamburg. “That’s why an orthogonal approach is needed–we add some additional rotations when the user is not focused on the visuals. When we learned that humans are functionally blind for some time due to blinks, we thought, ‘Why don’t we do the redirection during eye blinks?'”

Human eye blinks occur approximately 10 to 20 times per minute, about every 4 to 19 seconds. Leveraging this window of opportunity–where humans are unable to detect major motion changes while in a virtual environment–the researchers devised an approach to synchronize a computer graphics rendering system with this visual process, and introduce any useful motion changes in virtual scenes to enhance users’ overall VR experience.

The researchers’ experiments revealed that imperceptible camera rotations of 2 to 5 degrees and translations of 4 to 9 cm of the user’s viewpoint are possible during a blink without users even noticing. They tracked test participants’ eye blinks with an eye tracker in a VR head-mounted display. In a confirmatory study, the team validated that participants could not reliably detect in which of two eye blinks their viewpoint was manipulated while walking a VR curved path. The tests relied on unconscious natural eye blinking, but the researchers say redirection during blinking could be carried out consciously. Since users can consciously blink multiple times a day without much effort, eye blinks provide great potential to be used as an intentional trigger in their approach.
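To make the mechanism concrete, here is a minimal Python sketch of a blink-triggered redirection step, using the detection thresholds quoted above (up to roughly 5 degrees of rotation and 9 cm of translation per blink). The eye-tracker and camera objects are hypothetical placeholders rather than any published SDK, and the snippet illustrates the idea rather than the authors’ implementation.

```python
# Minimal sketch of blink-triggered redirected walking (RDW).
# The caps below follow the detection thresholds reported above
# (imperceptible rotations of up to ~5 degrees and translations of up to
# ~9 cm per blink); `eye_tracker` and `camera` are hypothetical stand-ins
# for an HMD eye tracker and the render camera.

MAX_ROTATION_DEG = 5.0    # largest unnoticed yaw injected during one blink
MAX_TRANSLATION_M = 0.09  # largest unnoticed viewpoint shift during one blink


def redirect_on_blink(eye_tracker, camera, desired_yaw_deg, desired_shift_m):
    """Apply a small, capped camera adjustment while the user's eyes are closed."""
    if not eye_tracker.blink_in_progress():
        return 0.0, 0.0  # redirect only during a blink

    # Clamp the requested redirection to the imperceptible range.
    yaw = max(-MAX_ROTATION_DEG, min(MAX_ROTATION_DEG, desired_yaw_deg))
    shift = max(-MAX_TRANSLATION_M, min(MAX_TRANSLATION_M, desired_shift_m))

    camera.rotate_yaw(yaw)           # hypothetical scene-rotation call
    camera.translate_forward(shift)  # hypothetical viewpoint-shift call
    return yaw, shift
```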

The team will present their work at SIGGRAPH 2018, held 12-16 August in Vancouver, British Columbia. The annual conference and exhibition showcases the world’s leading professionals, academics, and creative minds at the forefront of computer graphics and interactive techniques.

“RDW is a big challenge since current techniques still need too much space to enable unlimited walking in VR,” notes Langbehn. “Our work might contribute to a reduction of space since we found out that unnoticeable rotations of up to five degrees are possible during blinks. This means we can improve the performance of RDW by approximately 50 percent.”

The team’s results could be used in combination with other VR research, such as novel steering algorithms, improved path prediction, and rotations during saccades, to name a few. Down the road, such techniques could some day enable consumer VR users to virtually walk beyond their living room.

Langbehn collaborated on the work with Frank Steinicke of University of Hamburg, Markus Lappe of University of Muenster, Gregory F. Welch of University of Central Florida, and Gerd Bruder, also of University of Central Florida. For the full paper and video, visit the team’s project page.

###

About ACM, ACM SIGGRAPH, and SIGGRAPH 2018

ACM, the Association for Computing Machinery, is the world’s largest educational and scientific computing society, uniting educators, researchers, and professionals to inspire dialogue, share resources, and address the field’s challenges. ACM SIGGRAPH is a special interest group within ACM that serves as an interdisciplinary community for members in research, technology, and applications in computer graphics and interactive techniques. SIGGRAPH is the world’s leading annual interdisciplinary educational experience showcasing the latest in computer graphics and interactive techniques. SIGGRAPH 2018, marking the 45th annual conference hosted by ACM SIGGRAPH, will take place from 12-16 August at the Vancouver Convention Centre in Vancouver, B.C.

They have provided an image illustrating what they mean (I don’t find it especially informative),

Caption: The viewing behavior of a virtual reality user, including fixations (in green) and saccades (in red). A blink fully suppresses visual perception. Credit: Eike Langbehn

Next up (2), there’s Disney Corporation’s first virtual reality (VR) short, from a July 19, 2018  Association for Computing Machinery news release on EurekAlert,

Walt Disney Animation Studios will debut its first ever virtual reality short film at SIGGRAPH 2018, and the hope is viewers will walk away feeling connected to the characters as equally as they will with the VR technology involved in making the film.

Cycles, an experimental film directed by Jeff Gipson, centers around the true meaning of creating a home and the life it holds inside its walls. The idea for the film is personal, inspired by Gipson’s childhood spending time with his grandparents and creating memories in their home, and later, having to move them to an assisted living residence.

“Every house has a story unique to the people, the characters who live there,” says Gipson. “We wanted to create a story in this single place and be able to have the viewer witness life happening around them. It is an emotionally driven film, expressing the real ups and downs, the happy and sad moments in life.”

For Cycles, Gipson also drew from his past life as an architect, having spent several years designing skate parks, and from his passion for action sports, including freestyle BMX. In Los Angeles, where Gipson lives, it is not unusual to find homes with an empty swimming pool reserved for skating or freestyle biking. Part of the pitch for Cycles came out of Gipson’s experience riding in these empty pools and being curious about the homes attached to them, the families who lived there, and the memories they made.

SIGGRAPH attendees will have the opportunity to experience Cycles at the Immersive Pavilion, a new space for this year’s conference. The Pavilion is devoted exclusively to virtual, augmented, and mixed reality and will contain: the VR Theater, a storytelling extravaganza that is part of the Computer Animation Festival; the Vrcade, a space for VR, AR, and MR games or experiences; and the well-known Village, for showcasing large-scale projects. SIGGRAPH 2018, held 12-16 August in Vancouver, British Columbia, is an annual gathering that showcases the world’s leading professionals, academics, and creative minds at the forefront of computer graphics and interactive techniques.

The production team completed Cycles in four months with about 50 collaborators as part of a professional development program at the studio. A key difference in VR filmmaking includes getting creative with how to translate a story to the VR “screen.” Pre-visualizing the narrative, for one, was a challenge. Rather than traditional storyboarding, Gipson and his team instead used a mix of Quill VR painting techniques and motion capture to “storyboard” Cycles, incorporating painters and artists to generate sculptures or 3D models of characters early on and draw scenes for the VR space. The creators also got innovative with the use of light and color saturation in scenes to help guide the user’s eyes during the film.

“What’s cool for VR is that we are really on the edge of trying to figure out what it is and how to tell stories in this new medium,” says Gipson. “In VR, you can look anywhere and really be transported to a different world, experience it from different angles, and see every detail. We want people watching to feel alive and feel emotion, and give them a true cinematic experience.”

This is Gipson’s VR directorial debut. He joined Walt Disney Animation Studios in 2013, serving as a lighting artist on Disney favorites like Frozen, Zootopia, and Moana. Of getting to direct the studio’s first VR short, he says, “VR is an amazing technology and a lot of times the technology is what is really celebrated. We hope more and more people begin to see the emotional weight of VR films, and with Cycles in particular, we hope they will feel the emotions we aimed to convey with our story.”

Apparently this is a still from the ‘short’,

Caption: Disney Animation Studios will present ‘Cycles’, its first virtual reality (VR) short, at ACM SIGGRAPH 2018. Credit: Disney Animation Studios

There’s also something (3) from Google as described in a July 26, 2018 Association of Computing Machinery news release on EurekAlert,

Google has unveiled a new virtual reality (VR) immersive experience based on a novel system that captures and renders high-quality, realistic images from the real world using light fields. Created by a team of leading researchers at Google, Welcome to Light Fields is the tech giant’s splash into the nascent arena of light fields VR experiences, an exciting corner of VR video technology gaining traction for its promise to deliver extremely high-quality imagery and experiences in the virtual world.

Google released Welcome to Light Fields earlier this year as a free app on Steam VR for HTC Vive, Oculus Rift, and Windows Mixed Reality headsets. The creators will demonstrate the VR experience at SIGGRAPH 2018, in the Immersive Pavilion, a new space for this year’s conference. The Pavilion is devoted exclusively to virtual, augmented, and mixed reality and will contain: the Vrcade, a space for VR, AR, and MR games or experiences; the VR Theater, a storytelling extravaganza that is part of the Computer Animation Festival; and the well-known Village, for showcasing large-scale projects. SIGGRAPH 2018, held 12-16 August in Vancouver, British Columbia, is an annual gathering that showcases the world’s leading professionals, academics, and creative minds at the forefront of computer graphics and interactive techniques.

Destinations in Welcome to Light Fields include NASA’s Space Shuttle Discovery, delivering to viewers an astronaut’s view inside the flight deck, which has never been open to the public; the pristine teak and mahogany interiors of the Gamble House, an architectural treasure in Pasadena, CA; and the glorious St. Stephen’s Church in Granada Hills, CA, home to a stunning wall of more than 14,000 pieces of glimmering stained glass.

“I love that light fields in VR can teleport you to exotic places in the real world, and truly make you believe you are there,” says Ryan Overbeck, software engineer at Google who co-led the project. “To me, this is magic.”

To bring this experience to life, Overbeck worked with a team that included Paul Debevec, senior staff engineer at Google, who managed the project and led the hardware piece with engineers Xueming Yu, Jay Busch, and Graham Fyffe. With Overbeck, Daniel Erickson and Daniel Evangelakos focused on the software end. The researchers designed a comprehensive system for capturing and rendering high-quality, spherical light field still images from footage captured in the real world. They developed two easy-to-use light field camera rigs, based on the GoPro Hero4 action sports camera, that efficiently capture thousands of images on the surface of a sphere. Those images were then passed through a cloud-based light-field-processing pipeline.

Among other things, explains Overbeck, “The processing pipeline uses computer vision to place the images in 3D and generate depth maps, and we use a modified version of our vp9 video codec to compress the light field data down to a manageable size.” To render a light field dataset, he notes, the team used a rendering algorithm that blends between the thousands of light field images in real-time.
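As a rough illustration of what ‘blending between thousands of light field images’ can mean, here is a simplified Python sketch that weights the few captured views closest to the requested viewing direction. This nearest-view blend is an assumption made for clarity; Google’s actual renderer and data layout are considerably more sophisticated.

```python
# Simplified view blending for a spherical light field: weight the few
# captured images whose camera directions best match the requested view
# direction. Illustration only; not Google's actual rendering algorithm.
import numpy as np


def blend_views(view_dir, camera_dirs, images, k=4):
    """Blend the k captured images closest in direction to view_dir.

    view_dir    : (3,) unit vector, requested viewing direction
    camera_dirs : (N, 3) unit vectors, one per captured image on the sphere
    images      : (N, H, W, 3) array of captured images
    """
    similarity = camera_dirs @ view_dir            # cosine of angular distance
    nearest = np.argsort(similarity)[-k:]          # indices of k closest views
    weights = np.clip(similarity[nearest], 0.0, None) + 1e-8
    weights /= weights.sum()                       # normalized blend weights
    return np.tensordot(weights, images[nearest], axes=1)


# Example with random stand-in data: 1,000 captured views of a 4x4 image.
rng = np.random.default_rng(0)
dirs = rng.normal(size=(1000, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
imgs = rng.random((1000, 4, 4, 3))
print(blend_views(np.array([0.0, 0.0, 1.0]), dirs, imgs).shape)  # (4, 4, 3)
```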

The team relied on Google’s talented pool of engineers in computer vision, graphics, video compression, and machine learning to overcome the unique challenges posed in light fields technology. They also collaborated closely with the WebM team (who make the vp9 video codec) to develop the high-quality light field compression format incorporated into their system, and leaned heavily on the expertise of the Jump VR team to help pose the images and generate depth maps. (Jump is Google’s professional VR system for achieving 3D-360 video production at scale.)

Indeed, with Welcome to Light Fields, the Google team is demonstrating the potential and promise of light field VR technology, showcasing the technology’s ability to provide a truly immersive experience with a level of unmatched realism. Though light fields technology has been researched and explored in computer graphics for more than 30 years, practical systems for actually delivering high-quality light field experiences have not yet been possible.

Part of the team’s motivation behind creating this VR light field experience is to invigorate the nascent field.

“Welcome to Light Fields proves that it is now possible to make a compelling light field VR viewer that runs on consumer-grade hardware, and we hope that this knowledge will encourage others to get involved with building light field technology and media,” says Overbeck. “We understand that in order to eventually make compelling consumer products based on light fields, we need a thriving light field ecosystem. We need open light field codecs, we need artists creating beautiful light field imagery, and we need people using VR in order to engage with light fields.”

I don’t really understand why this image, which looks like something that belongs in advertising material, would be chosen to accompany a news release on a science-based distribution outlet,

Caption: A team of leading researchers at Google will unveil the new immersive virtual reality (VR) experience “Welcome to Lightfields” at ACM SIGGRAPH 2018. Credit: Image courtesy of Google/Overbeck

Finally (4), ‘synthesizing realistic sounds’ is announced in an Aug. 6, 2018 Stanford University (US) news release (also on EurekAlert) by Taylor Kubota,

Advances in computer-generated imagery have brought vivid, realistic animations to life, but the sounds associated with what we see simulated on screen, such as two objects colliding, are often recordings. Now researchers at Stanford University have developed a system that automatically renders accurate sounds for a wide variety of animated phenomena.

“There’s been a Holy Grail in computing of being able to simulate reality for humans. We can animate scenes and render them visually with physics and computer graphics, but, as for sounds, they are usually made up,” said Doug James, professor of computer science at Stanford University. “Currently there exists no way to generate realistic synchronized sounds for complex animated content, such as splashing water or colliding objects, automatically. This fills that void.”

The researchers will present their work on this sound synthesis system as part of ACM SIGGRAPH 2018, the leading conference on computer graphics and interactive techniques. In addition to enlivening movies and virtual reality worlds, this system could also help engineering companies prototype how products would sound before being physically produced, and hopefully encourage designs that are quieter and less irritating, the researchers said.

“I’ve spent years trying to solve partial differential equations – which govern how sound propagates – by hand,” said Jui-Hsien Wang, a graduate student in James’ lab and in the Institute for Computational and Mathematical Engineering (ICME), and lead author of the paper. “This is actually a place where you don’t just solve the equation but you can actually hear it once you’ve done it. That’s really exciting to me and it’s fun.”

Predicting sound

Informed by geometry and physical motion, the system figures out the vibrations of each object and how, like a loudspeaker, those vibrations excite sound waves. It computes the pressure waves cast off by rapidly moving and vibrating surfaces but does not replicate room acoustics. So, although it does not recreate the echoes in a grand cathedral, it can resolve detailed sounds from scenarios like a crashing cymbal, an upside-down bowl spinning to a stop, a glass filling up with water or a virtual character talking into a megaphone.

Most sounds associated with animations rely on pre-recorded clips, which require vast manual effort to synchronize with the action on-screen. These clips are also restricted to noises that exist – they can’t predict anything new. Other systems that produce and predict sounds as accurate as those of James and his team work only in special cases, or assume the geometry doesn’t deform very much. They also require a long pre-computation phase for each separate object.

“Ours is essentially just a render button with minimal pre-processing that treats all objects together in one acoustic wave simulation,” said Ante Qu, a graduate student in James’ lab and co-author of the paper.

The simulated sound that results from this method is highly detailed. It takes into account the sound waves produced by each object in an animation but also predicts how those waves bend, bounce or deaden based on their interactions with other objects and sound waves in the scene.
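For readers who want a feel for the underlying computation, here is a toy one-dimensional acoustic wave simulation in Python: a vibrating surface at one end of the domain radiates pressure waves that are sampled at a ‘microphone’ point. It is only a sketch of the kind of partial differential equation the Stanford system solves, not the team’s three-dimensional solver.

```python
# Toy 1-D acoustic wave simulation: a vibrating surface at the left end of
# the domain radiates pressure waves that are recorded at a "microphone"
# point. Illustrative only; the Stanford system solves the full 3-D problem.
import numpy as np

c = 343.0           # speed of sound in air (m/s)
dx = 0.01           # grid spacing (m)
dt = 0.9 * dx / c   # time step satisfying the CFL stability condition
n = 500             # number of grid points (a 5 m domain)
steps = 2000

p_prev = np.zeros(n)
p = np.zeros(n)
recorded = []        # pressure samples at the midpoint "microphone"

for t in range(steps):
    lap = np.zeros(n)
    lap[1:-1] = p[2:] - 2 * p[1:-1] + p[:-2]          # discrete Laplacian
    p_next = 2 * p - p_prev + (c * dt / dx) ** 2 * lap
    p_next[0] = np.sin(2 * np.pi * 440.0 * t * dt)    # 440 Hz vibrating surface
    p_next[-1] = 0.0                                   # crude far boundary
    p_prev, p = p, p_next
    recorded.append(p[n // 2])

print(f"simulated {steps * dt:.4f} s of sound at the microphone point")
```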

Challenges ahead

In its current form, the group’s process takes a while to create the finished product. But, now that they have proven this technique’s potential, they can focus on performance optimizations, such as implementing their method on parallel GPU hardware, that should make it drastically faster.

And, even in its current state, the results are worth the wait.

“The first water sounds we generated with the system were among the best ones we had simulated – and water is a huge challenge in computer-generated sound,” said James. “We thought we might get a little improvement, but it is dramatically better than previous approaches even right out of the box. It was really striking.”

Although the group’s work has faithfully rendered sounds of various objects spinning, falling and banging into each other, more complex objects and interactions – like the reverberating tones of a Stradivarius violin – remain difficult to model realistically. That, the group said, will have to wait for a future solution.

Timothy Langlois of Adobe Research is a co-author of this paper. This research was funded by the National Science Foundation and Adobe Research. James is also a professor, by courtesy, of music and a member of Stanford Bio-X.

Researchers Timothy Langlois, Doug L. James, Ante Qu and Jui-Hsien Wang have created a video featuring highlights of animations with sounds synthesized using the Stanford researchers’ new system.

The researchers have also provided this image,

By computing pressure waves cast off by rapidly moving and vibrating surfaces – such as a cymbal – a new sound synthesis system developed by Stanford researchers can automatically render realistic sound for computer animations. (Image credit: Timothy Langlois, Doug L. James, Ante Qu and Jui-Hsien Wang)

It does seem like we’re synthesizing the world around us, eh?

The SIGGRAPH 2018 art gallery

Here’s what SIGGRAPH had to say about its 2018 art gallery in Vancouver and the themes for the conference and the gallery (from a May 18, 2018 Association for Computing Machinery news release on globalnewswire.com, also on this 2018 SIGGRAPH webpage),

SIGGRAPH 2018, the world’s leading showcase of digital art created using computer graphics and interactive techniques, will present a special Art Gallery, entitled “Origins,” and historic Art Papers in Vancouver, B.C. The 45th SIGGRAPH conference will take place 12–16 August at the Vancouver Convention Centre. The programs will also honor the generations of creators that have come before through a special, 50th anniversary edition of the Leonardo journal. To register for the conference, visit S2018.SIGGRAPH.ORG.

The SIGGRAPH 2018 ART GALLERY is a curated exhibition, conceived as a dialogical space that enables the viewer to reflect on man’s diverse cultural values and rituals through contemporary creative practices. Building upon an exciting and eclectic selection of creative practices mediated through technologies that represent the sophistication of our times, the SIGGRAPH 2018 Art Gallery will embrace the narratives of the indigenous communities based near Vancouver and throughout Canada as a source of inspiration. The exhibition will feature contemporary media artworks, art pieces by indigenous communities, and other traces of technologically mediated Ludic practices.

Andrés Burbano, SIGGRAPH 2018 Art Gallery chair and professor at Universidad de los Andes, said, “The Art Gallery aims to articulate myth and technology, science and art, the deep past and the computational present, and will coalesce around a theme of ‘Origins.’ Media and technological creative expressions will explore principles such as the origins of the cosmos, the origins of life, the origins of human presence, the origins of the occupation of territories in the Americas, and the origins of people living in the vast territories of the Arctic.”

He continued, “The venue [in Vancouver] hopes to rekindle the original spark that ignited the collaborative spirit of the SIGGRAPH community of engineers, scientists, and artists, who came together to create the very first conference in the early 1970s.”

Highlights from the 2018 Art Gallery include:

Transformation Mask (Canada) [Technology Based]
Shawn Hunt, independent; and Microsoft Garage: Andy Klein, Robert Butterworth, Jonathan Cobb, Jeremy Kersey, Stacey Mulcahy, Brendan O’Rourke, Brent Silk, and Julia Taylor-Hell, Microsoft Vancouver

TRANSFORMATION MASK is an interactive installation that features the Microsoft HoloLens. It utilizes electronics and mechanical engineering to express a physical and digital transformation. Participants are immersed in spatial sounds and holographic visuals.

Somnium (U.S.) [Science Based]
Marko Peljhan, Danny Bazo, and Karl Yerkes, University of California, Santa Barbara

Somnium is a cybernetic installation that provides visitors with the ability to sensorily, cognitively, and emotionally contemplate and experience exoplanetary discoveries, their macro and micro dimensions, and the potential for life in our Galaxy. Some might call it a “space telescope.”

Ernest Edmonds Retrospective – Art Systems 1968-2018 (United Kingdom) [History Based]
Ernest Edmonds, De Montfort University

Celebrating one of the pioneers of computer graphics-based art since the early 1970s, this Ernest Edmonds career retrospective will showcase snapshots of Edmonds’ work as it developed over the years. With one piece from each decade, the retrospective will also demonstrate how vital the Leonardo journal has been throughout the 50-year journey.

In addition to the works above, the Art Gallery will feature pieces from notable female artists Ozge Samanci, Ruth West, and Nicole L’Hullier. For more information about the Edmonds retrospective, read THIS POST ON THE ACM SIGGRAPH BLOG.

The SIGGRAPH 2018 ART PAPERS program is designed to feature research from artists, scientists, theorists, technologists, historians, and more in one of four categories: project description, theory/criticism, methods, or history. The chosen work was selected by an international jury of scholars, artists, and immersive technology developers.

To celebrate the 50th anniversary of LEONARDO (MIT Press), and 10 years of its annual SIGGRAPH issue, SIGGRAPH 2018 is pleased to announce a special anniversary edition of the journal, which will feature the 2018 art papers. For 50 years, Leonardo has been the definitive publication for artist-academics. To learn more about the relationship between SIGGRAPH and the journal, listen to THIS EPISODE OF THE SIGGRAPH SPOTLIGHT PODCAST.

“In order to encourage a wider range of topics, we introduced a new submission type, short papers. This enabled us to accept more content than in previous years. Additionally, for the first time, we will introduce sessions that integrate the Art Gallery artist talks with Art Papers talks, promoting richer connections between these two creative communities,” said Angus Forbes, SIGGRAPH 2018 Art Papers chair and professor at University of California, Santa Cruz.

Art Papers highlights include:

Alienating the Familiar with CGI: A Recipe for Making a Full CGI Art House Animated Feature [Long]
Alex Counsell and Paul Charisse, University of Portsmouth

This paper explores the process of making and funding an art house feature film using full CGI in a marketplace where this has never been attempted. It explores cutting-edge technology and production approaches, as well as routes to successful fundraising.

Augmented Fauna and Glass Mutations: A Dialogue Between Material and Technique in Glassblowing and 3D Printing [Long]
Tobias Klein, City University of Hong Kong

The two presented artworks, “Augmented Fauna” and “Glass Mutations,” were created during an artist residence at the PILCHUCK GLASS SCHOOL. They are examples of the qualities and methods established through a synthesis between digital workflows and traditional craft processes and thus formulate the notion of digital craftsmanship.

Inhabitat: An Imaginary Ecosystem in a Children’s Science Museum [Short]
Graham Wakefield, York University, and Haru Hyunkyung Ji, OCAD University

“Inhabitat” is a mixed reality artwork in which participants become part of an imaginary ecology through three simultaneous perspectives of scale and agency; three distinct ways to see with other eyes. This imaginary world was exhibited at a children’s science museum for five months, using an interactive projection-augmented sculpture, a large screen and speaker array, and a virtual reality head-mounted display.

What’s the what?

My father used to say that, and I always assumed it meant: summarize the high points, if you need to, and get to the point—fast. In that spirit, I am both fascinated and mildly appalled. The virtual, mixed, and augmented reality technologies, as well as the others being featured at SIGGRAPH 2018, are wondrous in many ways, but it seems we are coming ever closer to a world where we no longer interact with nature or other humans directly. (See my August 10, 2018 posting about the ‘extinction of experience’ for research that encourages more direct interaction with nature.) I realize that SIGGRAPH is intended as a primarily technical experience, but I think a little more content questioning these technologies and their applications (social implications) might be in order. That’s often the artist’s role, but I can’t see anything in the art gallery descriptions that hints at any sort of fundamental critique.

Canada’s ‘Smart Cities’ will need new technology (5G wireless) and, maybe, graphene

I recently published [March 20, 2018] a piece on ‘smart cities’ covering both an art/science event in Toronto and a Canadian government initiative, without mentioning the necessity of new technology to support all of the grand plans. On that note, it seems the Canadian federal government and two provincial governments (Québec and Ontario) are prepared to invest in one of the necessary ‘new’ technologies, 5G wireless. The Canadian Broadcasting Corporation’s (CBC) Shawn Benjamin reports about Canada’s 5G plans in suitably breathless (even in text only) tones of excitement in a March 19, 2018 article,

The federal, Ontario and Quebec governments say they will spend $200 million to help fund research into 5G wireless technology, the next-generation networks with download speeds 100 times faster than current ones can handle.

The so-called “5G corridor,” known as ENCQOR, will see tech companies such as Ericsson, Ciena Canada, Thales Canada, IBM and CGI kick in another $200 million to develop facilities to get the project up and running.

The idea is to set up a network of linked research facilities and laboratories that these companies — and as many as 1,000 more across Canada — will be able to use to test products and services that run on 5G networks.

Benjamin’s description of 5G is focused on what it will make possible in the future,

If you think things are moving too fast, buckle up, because a new 5G cellular network is just around the corner and it promises to transform our lives by connecting nearly everything to a new, much faster, reliable wireless network.

The first networks won’t be operational for at least a few years, but technology and telecom companies around the world are already planning to spend billions to make sure they aren’t left behind, says Lawrence Surtees, a communications analyst with the research firm IDC.

The new 5G is no tentative baby step toward the future. Rather, as Surtees puts it, “the move from 4G to 5G is a quantum leap.”

In a downtown Toronto soundstage, Alan Smithson recently demonstrated a few virtual reality and augmented reality projects that his company MetaVRse is working on.

The potential for VR and AR technology is endless, he said, in large part for its potential to help hurdle some of the walls we are already seeing with current networks.

Virtual Reality technology on the market today is continually increasing things like frame rates and screen resolutions in a constant quest to make their devices even more lifelike.

… They [current 4G networks] can’t handle the load. But 5G can do so easily, Smithson said, so much so that the current era of bulky augmented reality headsets could be replaced by a pair of normal-looking glasses.

In a 5G world, those internet-connected glasses will automatically recognize everyone you meet, and possibly be able to overlay their name in your field of vision, along with a link to their online profile. …

Benjamin also mentions ‘smart cities’,

In a University of Toronto laboratory, Professor Alberto Leon-Garcia researches connected vehicles and smart power grids. “My passion right now is enabling smart cities — making smart cities a reality — and that means having much more immediate and detailed sense of the environment,” he said.

Faster 5G networks will assist his projects in many ways, by giving planners more, instant data on things like traffic patterns, energy consumption, various carbon footprints and much more.

Leon-Garcia points to a brightly lit map of Toronto [image embedded in Benjamin’s article] in his office, and explains that every dot of light represents a sensor transmitting real time data.

Currently, the network is hooked up to things like city buses, traffic cameras and the city-owned fleet of shared bicycles. He currently has thousands of data points feeding him info on his map, but in a 5G world, the network will support about a million sensors per square kilometre.

Very exciting, but where is all this data going? What computers will be processing the information? Where are these sensors located? Benjamin does not venture into those waters, nor does The Economist in a February 13, 2018 article about 5G and the Olympic Games in Pyeongchang, South Korea, but the magazine does note another barrier to 5G implementation,

“FASTER, higher, stronger,” goes the Olympic motto. So it is only appropriate that the next generation of wireless technology, “5G” for short, should get its first showcase at the Winter Olympics  under way in Pyeongchang, South Korea. Once fully developed, it is supposed to offer download speeds of at least 20 gigabits per second (4G manages about half that at best) and response times (“latency”) of below 1 millisecond. So the new networks will be able to transfer a high-resolution movie in two seconds and respond to requests in less than a hundredth of the time it takes to blink an eye. But 5G is not just about faster and swifter wireless connections.

The technology is meant to enable all sorts of new services. One such would offer virtual- or augmented-reality experiences. At the Olympics, for example, many contestants are being followed by 360-degree video cameras. At special venues sports fans can don virtual-reality goggles to put themselves right into the action. But 5G is also supposed to become the connective tissue for the internet of things, to link anything from smartphones to wireless sensors and industrial robots to self-driving cars. This will be made possible by a technique called “network slicing”, which allows operators quickly to create bespoke networks that give each set of devices exactly the connectivity they need.

Despite its versatility, it is not clear how quickly 5G will take off. The biggest brake will be economic. [emphasis mine] When the GSMA, an industry group, last year asked 750 telecoms bosses about the most salient impediment to delivering 5G, more than half cited the lack of a clear business case. People may want more bandwidth, but they are not willing to pay for it—an attitude even the lure of the fanciest virtual-reality applications may not change. …

That may not be the only brake. Dexter Johnson, in a March 19, 2018 posting on his Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website), covers some of the others (Note: Links have been removed),

Graphene has been heralded as a “wonder material” for well over a decade now, and 5G has been marketed as the next big thing for at least the past five years. Analysts have suggested that 5G could be the golden ticket to virtual reality and artificial intelligence, and promised that graphene could improve technologies within electronics and optoelectronics.

But proponents of both graphene and 5G have also been accused of stirring up hype. There now seems to be a rising sense within industry circles that these glowing technological prospects will not come anytime soon.

At Mobile World Congress (MWC) in Barcelona last month [February 2018], some misgivings for these long promised technologies may have been put to rest, though, thanks in large part to each other.

In a meeting at MWC with Jari Kinaret, a professor at Chalmers University in Sweden and director of the Graphene Flagship, I took a guided tour around the Pavilion to see some of the technologies poised to have an impact on the development of 5G.

Being invited back to the MWC for three years is a pretty clear indication of how important graphene is to those who are trying to raise the fortunes of 5G. But just how important became more obvious to me in an interview with Frank Koppens, the leader of the quantum nano-optoelectronic group at Institute of Photonic Sciences (ICFO) just outside of Barcelona, last year.

He said: “5G cannot just scale. Some new technology is needed. And that’s why we have several companies in the Graphene Flagship that are putting a lot of pressure on us to address this issue.”

In a collaboration led by CNIT—a consortium of Italian universities and national laboratories focused on communication technologies—researchers from AMO GmbH, Ericsson, Nokia Bell Labs, and Imec have developed graphene-based photodetectors and modulators capable of receiving and transmitting optical data faster than ever before.

The aim of all this speed for transmitting data is to support the ultrafast data streams with extreme bandwidth that will be part of 5G. In fact, at another section during MWC, Ericsson was presenting the switching of a 100 Gigabits per second (Gbps) channel based on the technology.

“The fact that Ericsson is demonstrating another version of this technology demonstrates that from Ericsson’s point of view, this is no longer just research” said Kinaret.

It’s no mystery why the big mobile companies are jumping on this technology. Not only does it provide high-speed data transmission, but it also does it 10 times more efficiently than silicon or doped silicon devices, and will eventually do it more cheaply than those devices, according to Vito Sorianello, senior researcher at CNIT.

Interestingly, Ericsson is one of the tech companies mentioned with regard to Canada’s 5G project, ENCQOR, and Sweden’s Chalmers University, as Dexter Johnson notes, is the lead institution for the Graphene Flagship. One other fact to note: Canada’s resources include graphite mines with ‘premium’ flakes for producing graphene. Canada’s graphite mines are located (as far as I know) in only two Canadian provinces, Ontario and Québec, which also happen to be pitching money into ENCQOR. My March 21, 2018 posting describes the latest entry into the Canadian graphite mining stakes.

As for the questions I posed about processing power and the rest, it seems the South Koreans have found answers of some kind, but it’s hard to evaluate as I haven’t found any additional information about 5G and its implementation in South Korea. If anyone has answers, please feel free to leave them in the ‘comments’. Thank you.

Humans can distinguish molecular differences by touch

Yesterday, in my December 18, 2017 post about medieval textiles, I posed the question, “How did medieval artisans create nanoscale and microscale gilding when they couldn’t see it?” I realized afterwards that an answer to that question might be in this December 13, 2017 news item on ScienceDaily,

How sensitive is the human sense of touch? Sensitive enough to feel the difference between surfaces that differ by just a single layer of molecules, a team of researchers at the University of California San Diego has shown.

“This is the greatest tactile sensitivity that has ever been shown in humans,” said Darren Lipomi, a professor of nanoengineering and member of the Center for Wearable Sensors at the UC San Diego Jacobs School of Engineering, who led the interdisciplinary project with V. S. Ramachandran, director of the Center for Brain and Cognition and distinguished professor in the Department of Psychology at UC San Diego.

So perhaps those medieval artisans were able to feel the difference before it could be seen in the textiles they were producing?

Getting back to the matter at hand, a December 13, 2017 University of California at San Diego (UCSD) news release (also on EurekAlert) by Liezel Labios offers more detail about the work,

Humans can easily feel the difference between many everyday surfaces such as glass, metal, wood and plastic. That’s because these surfaces have different textures or draw heat away from the finger at different rates. But UC San Diego researchers wondered, if they kept all these large-scale effects equal and changed only the topmost layer of molecules, could humans still detect the difference using their sense of touch? And if so, how?

Researchers say this fundamental knowledge will be useful for developing electronic skin, prosthetics that can feel, advanced haptic technology for virtual and augmented reality and more.

Unsophisticated haptic technologies exist in the form of rumble packs in video game controllers or smartphones that shake, Lipomi added. “But reproducing realistic tactile sensations is difficult because we don’t yet fully understand the basic ways in which materials interact with the sense of touch.”

“Today’s technologies allow us to see and hear what’s happening, but we can’t feel it,” said Cody Carpenter, a nanoengineering Ph.D. student at UC San Diego and co-first author of the study. “We have state-of-the-art speakers, phones and high-resolution screens that are visually and aurally engaging, but what’s missing is the sense of touch. Adding that ingredient is a driving force behind this work.”

This study is the first to combine materials science and psychophysics to understand how humans perceive touch. “Receptors processing sensations from our skin are phylogenetically the most ancient, but far from being primitive they have had time to evolve extraordinarily subtle strategies for discerning surfaces—whether a lover’s caress or a tickle or the raw tactile feel of metal, wood, paper, etc. This study is one of the first to demonstrate the range of sophistication and exquisite sensitivity of tactile sensations. It paves the way, perhaps, for a whole new approach to tactile psychophysics,” Ramachandran said.

Super-Sensitive Touch

In a paper published in Materials Horizons, UC San Diego researchers tested whether human subjects could distinguish—by dragging or tapping a finger across the surface—between smooth silicon wafers that differed only in their single topmost layer of molecules. One surface was a single oxidized layer made mostly of oxygen atoms. The other was a single Teflon-like layer made of fluorine and carbon atoms. Both surfaces looked identical and felt similar enough that some subjects could not differentiate between them at all.

According to the researchers, human subjects can feel these differences because of a phenomenon known as stick-slip friction, which is the jerking motion that occurs when two objects at rest start to slide against each other. This phenomenon is responsible for the musical notes played by running a wet finger along the rim of a wine glass, the sound of a squeaky door hinge or the noise of a stopping train. In this case, each surface has a different stick-slip frequency due to the identity of the molecules in the topmost layer.
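A classic way to see stick-slip in action is the spring-block model sketched below: a block (standing in for the finger pad) is dragged across a surface through a compliant spring, sticking until the spring force overcomes static friction and then slipping. The friction coefficients here are invented values chosen only to show how different surface chemistries could produce different stick-slip behaviour; this is not the researchers’ mock-finger model.

```python
# Classic spring-block stick-slip model: a block (the finger pad) is dragged
# through a compliant spring across a surface, sticking until the spring
# force exceeds static friction and then slipping. The coefficients are
# invented for illustration; this is not the authors' mock-finger model.
import numpy as np


def stick_slip(mu_static, mu_kinetic, v_drive=0.01, k=200.0, m=0.01,
               normal_force=0.5, dt=1e-4, steps=20000):
    """Return the block's position over time for one simulated drag."""
    x, v = 0.0, 0.0
    positions = []
    for i in range(steps):
        drive = v_drive * i * dt                  # position of the pulled end
        spring_force = k * (drive - x)
        if v == 0.0 and abs(spring_force) <= mu_static * normal_force:
            pass                                   # stuck: static friction holds
        else:
            friction = -np.copysign(mu_kinetic * normal_force,
                                    v if v != 0.0 else spring_force)
            v_new = v + (spring_force + friction) / m * dt
            if v != 0.0 and v_new * v < 0.0:       # sliding reverses: re-stick
                v_new = 0.0
            v = v_new
        x += v * dt
        positions.append(x)
    return np.array(positions)


# Two hypothetical surface chemistries modeled as different friction values.
oxidized = stick_slip(mu_static=0.6, mu_kinetic=0.4)
fluorinated = stick_slip(mu_static=0.3, mu_kinetic=0.2)
```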

In one test, 15 subjects were tasked with feeling three surfaces and identifying the one surface that differed from the other two. Subjects correctly identified the differences 71 percent of the time.

In another test, subjects were given three different strips of silicon wafer, each strip containing a different sequence of 8 patches of oxidized and Teflon-like surfaces. Each sequence represented an 8-digit string of 0s and 1s, which encoded for a particular letter in the ASCII alphabet. Subjects were asked to “read” these sequences by dragging a finger from one end of the strip to the other and noting which patches in the sequence were the oxidized surfaces and which were the Teflon-like surfaces. In this experiment, 10 out of 11 subjects decoded the bits needed to spell the word “Lab” (with the correct upper and lowercase letters) more than 50 percent of the time. Subjects spent an average of 4.5 minutes to decode each letter.
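The encoding itself is ordinary 8-bit ASCII, as the short Python sketch below shows. The assignment of the Teflon-like patch to ‘1’ and the oxidized patch to ‘0’ is my assumption for illustration; the news release does not specify the convention used.

```python
# Minimal sketch of the strip-reading experiment: each strip carries 8
# patches read left to right as bits, and the 8-bit value is an ASCII code.
# The mapping (Teflon-like patch = 1, oxidized patch = 0) is an assumption.

def decode_strip(patches):
    """patches: sequence of 8 labels, e.g. ['ox', 'teflon', ...] -> one character."""
    bits = ''.join('1' if p == 'teflon' else '0' for p in patches)
    return chr(int(bits, 2))

# "Lab" spelled as three 8-patch strips (L=0b01001100, a=0b01100001, b=0b01100010):
strips = [
    ['ox', 'teflon', 'ox', 'ox', 'teflon', 'teflon', 'ox', 'ox'],     # 'L'
    ['ox', 'teflon', 'teflon', 'ox', 'ox', 'ox', 'ox', 'teflon'],     # 'a'
    ['ox', 'teflon', 'teflon', 'ox', 'ox', 'ox', 'teflon', 'ox'],     # 'b'
]
print(''.join(decode_strip(s) for s in strips))  # -> Lab
```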

“A human may be slower than a nanobit per second in terms of reading digital information, but this experiment shows a potentially neat way to do chemical communications using our sense of touch instead of sight,” Lipomi said.

Basic Model of Touch

The researchers also found that these surfaces can be differentiated depending on how fast the finger drags and how much force it applies across the surface. The researchers modeled the touch experiments using a “mock finger,” a finger-like device made of an organic polymer that’s connected by a spring to a force sensor. The mock finger was dragged across the different surfaces using multiple combinations of force and swiping velocity. The researchers plotted the data and found that the surfaces could be distinguished given certain combinations of velocity and force. Meanwhile, other combinations made the surfaces indistinguishable from each other.

“Our results reveal a remarkable human ability to quickly home in on the right combinations of forces and swiping velocities required to feel the difference between these surfaces. They don’t need to reconstruct an entire matrix of data points one by one as we did in our experiments,” Lipomi said.

“It’s also interesting that the mock finger device, which doesn’t have anything resembling the hundreds of nerves in our skin, has just one force sensor and is still able to get the information needed to feel the difference in these surfaces. This tells us it’s not just the mechanoreceptors in the skin, but receptors in the ligaments, knuckles, wrist, elbow and shoulder that could be enabling humans to sense minute differences using touch,” he added.

This work was supported by member companies of the Center for Wearable Sensors at UC San Diego: Samsung, Dexcom, Sabic, Cubic, Qualcomm and Honda.

For those who prefer their news by video,

Here’s a link to and a citation for the paper,

Human ability to discriminate surface chemistry by touch by Cody W. Carpenter, Charles Dhong, Nicholas B. Root, Daniel Rodriquez, Emily E. Abdo, Kyle Skelil, Mohammad A. Alkhadra, Julian Ramírez, Vilayanur S. Ramachandran and Darren J. Lipomi. Mater. Horiz., 2018, Advance Article DOI: 10.1039/C7MH00800G

This paper is open access but you do need to have opened a free account on the website.

Mathematicians get illustrative

Frank A. Farris, an associate professor of mathematics at Santa Clara University (US), writes about the latest in mathematics and data visualization in an April 4, 2017 essay on The Conversation (Note: Links have been removed),

Today, digital tools like 3-D printing, animation and virtual reality are more affordable than ever, allowing mathematicians to investigate and illustrate their work at the same time. Instead of drawing a complicated surface on a chalkboard, we can now hand students a physical model to feel or invite them to fly over it in virtual reality.

Last year, a workshop called “Illustrating Mathematics” at the Institute for Computational and Experimental Research in Mathematics (ICERM) brought together an eclectic group of mathematicians and digital art practitioners to celebrate what seems to be a golden age of mathematical visualization. Of course, visualization has been central to mathematics since Pythagoras, but this seems to be the first time it had a workshop of its own.

Visualization plays a growing role in mathematical research. According to John Sullivan at the Technical University of Berlin, mathematical thinking styles can be roughly categorized into three groups: “the philosopher,” who thinks purely in abstract concepts; “the analyst,” who thinks in formulas; and “the geometer,” who thinks in pictures.

Mathematical research is stimulated by collaboration between all three types of thinkers. Many practitioners believe teaching should be calibrated to connect with different thinking styles.

Borromean Rings, the logo of the International Mathematical Union. John Sullivan

Sullivan’s own work has benefited from images. He studies geometric knot theory, which involves finding “best” configurations. For example, consider his Borromean rings, which won the logo contest of the International Mathematical Union several years ago. The rings are linked together, but if one of them is cut, the others fall apart, which makes it a nice symbol of unity.

Apparently this new ability to think mathematics visually has influenced mathematicians in some unexpected ways,

Take mathematician Fabienne Serrière, who raised US$124,306 through Kickstarter in 2015 to buy an industrial knitting machine. Her dream was to make custom-knit scarves that demonstrate cellular automata, mathematical models of cells on a grid. To realize her algorithmic design instructions, Serrière hacked the code that controls the machine. She now works full-time on custom textiles from a Seattle studio.
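For anyone curious what ‘cellular automata on a grid’ looks like in practice, here is a minimal Python sketch of an elementary one-dimensional automaton, where each knitted row would be computed from the row above. Rule 110 is used purely as an example; Serrière’s actual rules and machine instructions are not described in the essay.

```python
# Minimal sketch of the kind of pattern a cellular-automaton scarf encodes:
# an elementary 1-D automaton where each knitted row is computed from the
# row above. Rule 110 is an arbitrary example choice.

RULE = 110  # any of the 256 elementary CA rules


def next_row(row, rule=RULE):
    """Compute the next row of cells from the current one (wrap-around edges)."""
    n = len(row)
    return [
        (rule >> (row[(i - 1) % n] * 4 + row[i] * 2 + row[(i + 1) % n])) & 1
        for i in range(n)
    ]


row = [0] * 40 + [1] + [0] * 40         # a single "on" stitch to start
for _ in range(20):                      # each printed line is one knitted row
    print(''.join('#' if c else '.' for c in row))
    row = next_row(row)
```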

In this sculpture by Edmund Harriss, the drill traces are programmed to go perpendicular to the growth rings of the tree. This makes the finished sculpture a depiction of a concept mathematicians know as ‘paths of steepest descent.’ Edmund Harriss, Author provided

Edmund Harriss of the University of Arkansas hacked an architectural drilling machine, which he now uses to make mathematical sculptures from wood. The control process involves some deep ideas from differential geometry. Since his ideas are basically about controlling a robot arm, they have wide application beyond art. According to his website, Harriss is “driven by a passion to communicate the beauty and utility of mathematical thinking.”

Mathematical algorithms power the products made by Nervous System, a studio in Massachusetts that was founded in 2007 by Jessica Rosenkrantz, a biologist and architect, and Jess Louis-Rosenberg, a mathematician. Many of their designs, for things like custom jewelry and lampshades, look like naturally occurring structures from biology or geology.

Farris’ essay is a fascinating look at mathematics and data visualization.

Science, technology, engineering, arts, and mathematics (STEAM) for the Canada Science and Technology Museums Corporation gala on May 17, 2017

The Canada Science and Technology Museums Corporation (CSTMC) gala is known officially as the National Science and Innovation Gala, according to a May 11, 2017 announcement (received via email),

FULL STEAM AHEAD TO THE NATIONAL SCIENCE AND INNOVATION GALA

LET’S TALK STEAM
Demonstrating Canada’s commitment to a vibrant, national science
culture, the evening’s panel brings together influencers from the
private and public sectors to discuss the importance of education in the
STEAM (science, technology, engineering, arts, mathematics) fields.

FAMILIAR FACES
Experience a whimsical and wonderful evening hosted by CBC News
Network’s Heather Hiscox. Join her for the presentation of the first
ever STEAM Horizon Awards.

APPETITE FOR INNOVATION
From virtual reality to wearable technologies, the innovation is so real
you can taste it.  Chef Michael Blackie’s culinary creations will
underscore the spirit of ingenuity with a refined but approachable menu.
Prepare your taste buds to savour food and beverages that will fuel your
body and mind.

TIME IS RUNNING OUT. BUY YOUR TICKETS TODAY!

À TOUTE VAPEUR VERS LE GALA NATIONAL DES SCIENCES ET DE L’INNOVATION

PARLONS STIAM
Témoignant de l’engagement du Canada à créer une culture
scientifique dynamique à l’échelle du pays, le groupe d’experts
invité rassemblera des gens d’influence issus des secteurs privé et
public, afin qu’ils discutent de l’importance de l’éducation dans
les domaines des STIAM (sciences, technologies, ingénierie, arts et
mathématiques).

VISAGES FAMILIERS
Venez vivre l’expérience d’une soirée empreinte de fantaisie et de
merveilleux qu’animera Heather Hiscox, lectrice de nouvelles au
réseau CBC News Network. Assistez à la remise des tout premiers prix
Horizon STIAM.

LE GOÛT DE L’INNOVATION
De la réalité virtuelle aux technologies portables, l’innovation est
si réelle qu’on peut même y goûter. Les créations culinaires du
chef Michael Blackie illustrent cet esprit d’ingéniosité dans un
menu raffiné et invitant. Préparez vos papilles à savourer mets et
boissons qui nourriront votre corps et votre esprit.

LE TEMPS COMMENCE À MANQUER! ACHETEZ VOS BILLETS DÈS MAINTENANT!

THANK YOU TO OUR SPONSORS
MERCI À NOS COMMANDITAIRES

Logistics (from the CSTMC’s gala event page),

WHAT DO YOU NEED TO KNOW?

  • Date: May 17, 2017
  • Time: Doors open at 5:30 p.m.
  • Location: Canada Aviation and Space Museum
  • Dress Code: Semi-formal. Guests are encouraged to add a Steampunk twist to their outfits.

Your ticket includes gourmet food, one drink ticket, entertainment, music performed by a Steampunk DJ, coat check and parking.

Tickets: $150 per person, $1250 for a group of 10.

The email didn’t quite convey the flavour of the gala,

What can you expect?

Familiar Faces

Experience a whimsical and wonderful evening hosted by CBC [Canadian Broadcasting Corporation] News Network’s Heather Hiscox. Join her for the presentation of the first ever STEAM Horizon Awards.

Let’s Talk STEAM

Demonstrating Canada’s commitment to a vibrant, national science culture, the evening’s panel brings together influencers from the private and public sectors [emphasis mine] to discuss the importance of education in the STEAM (science, technology, engineering, arts, mathematics) fields. The panel will exchange insights on a wide-range of topics, including Canadian youth, women and girls in STEAM, and the imperative for coming generations of Canadians to embrace the fields of science and technology.

Appetite for Innovation

From virtual reality to wearable technologies, the innovation is so real you can taste it. Chef Michael Blackie’s culinary creations will underscore the spirit of ingenuity with a refined but approachable menu. Prepare your taste buds to savour food and beverages that will fuel your body and mind.

Steampunk Factory

Be dazzled by technological wonders spread over different zones as you explore interactive installations developed by leading-edge industry partners and teams from local universities and colleges. From virtual reality to wearable technologies, get a hands-on look at the technologies of tomorrow − steampunk style!

Virtual Reality

Do you have what it takes to be a steampunk aviator or train engineer? Test your skills and open up your mind to new horizons in our aviation simulators and virtual reality environments. If art and design are more your style, our virtual art exhibit will give all new meaning to abstract.

Autonomous Vehicles

Race your drones to the finish line or try your hand at controlling a rover developed to withstand the rigours of Mars. You are no longer required to leave your seat in order to take to the skies or visit other planets!

Wonderful Flying Time Machine

Travel back in time aboard the Wonderful Flying Time Machine equipped with a photo booth to make sure you capture the moment in time!

STEAM Horizon Awards

Amidst the wonders and whimsy of the Steampunk soiree, the Gala will also be host to the first ever STEAM Horizon Awards. Funded by the Canada Science and Technology Museums Corporation Foundation and six founding partners, the awards celebrate the important contributions of Canada’s youth in the fields of science, technology, engineering, arts, and math (STEAM). The seven winners, hailing from across Canada, have been invited to the Gala where they will be recognized for their individual achievements and receive a $25 000 prize to go towards their post-secondary education.

Robotics

Get acquainted with young innovators and their robot inventions. From flying machines to robot dogs, these whimsical inventions offer a peek into the automated future.

Networking

Spend the night mingling with industry innovators and academics alike as we honour the achievements of young Canadians in science, technology, engineering, arts, and math. Take advantage of this opportunity to connect with influential Canadians in STEAM industries in business and government.

Roving Steampunk Performers

From stilt walkers to illusionists, experience a steampunk spectacle like no other as larger than life entertainers present a magical escape from the modern world.

Wearable Technology Fashion Show

Lights, camera, fashion! Enjoy a unique wearable technology fashion show where innovation meets performance and theatre. A collaboration between a number of Canada’s leading wearable technology companies and young innovators, this fashion show will take you to another world − or era!

Do the Robot

Let off some steam and dance the night away amid a unique scene of motion and sound as robotic dancers come to life powered by the music of our Steampunk DJ.

Take part in an unforgettable experience. Buy your tickets now! $150 per person, $1250 for a group of 10.

My compliments on the imagination they’ve put into organizing this event. Still, I am wondering about a few things. It would seem the only person over the age of 30 who’s expected to attend is the CBC host, Heather Hiscox. Also, the panel seems to consist of a set of furniture rather than any named panelists. Are they planning something like those unconferences where attendees spontaneously volunteer to present or, in this case, to be panelists?

If anyone who’s attending is inclined, please do leave comments after you’ve attended. I’d love to know how it all came together.

Virtual Reality (VR) becomes educational (at Case Western Reserve University and Miami Children’s Hospital)

I have two virtual reality news bits, the most recent concerning Case Western Reserve University (CWRU; located in Cleveland, Ohio) and Microsoft’s HoloLens, described in an April 29, 2015 CWRU press release (also on EurekAlert). Note: Some of this academic press release reads like marketing collateral,

Case Western Reserve University Radiology Professor Mark Griswold knew his world had changed the moment he first used a prototype of Microsoft’s HoloLens headset. Two months later, one of the university’s medical students illustrated exactly why.

“There’s the aortic valve,” Satyam Ghodasara exclaimed as he used Microsoft’s device to examine a holographic heart. “Now I understand.”

Today, Griswold told tens of thousands of people how HoloLens can transform learning across countless subjects, including those as complex as the human body. Speaking to an in-person and online audience at Microsoft’s annual Build conference, he highlighted disciplines as disparate as art history and engineering–but started with a holographic heart. In traditional anatomy, after all, students like Ghodasara cut into cadavers to understand the body’s intricacies.

With HoloLens, Griswold explained, “you see it truly in 3D. You can take parts in and out. You can turn it around. You can see the blood pumping–the entire system.”

In other words, technology not only can match existing educational methods–it can actually improve upon them. Which, in many ways, is why Cleveland Clinic CEO Toby Cosgrove contacted then-Microsoft executive Craig Mundie in 2013, after the hospital and university first agreed to partner on a new education building.

“We launched this collaboration to prepare students for a health care future that is still being imagined,” Cleveland Clinic CEO Delos “Toby” Cosgrove said of what has become a 485,000-square-foot Health Education Campus project. “By combining a state-of-the-art structure, pioneering technology, and cutting-edge teaching techniques, we will provide them the innovative education required to lead in this new era.”

The more Cosgrove, Case Western Reserve President Barbara R. Snyder and other academic leaders engaged with Microsoft, the more potential everyone saw.

“For more than a century, our medical school has been renowned for inventing and reinventing approaches to teaching and learning that take root nationwide,” President Snyder said. “When we match that expertise with the interdisciplinary knowledge of our faculty, we create a rich environment to explore the educational potential of Microsoft’s extraordinary technology.”

After a small group including Griswold, engineering professor Marc Buchner and Cleveland Clinic education technology leader Neil Mehta first experienced HoloLens in December, the faculty returned to Cleveland to create a core team dedicated to exploring the technology’s academic potential. In February, 10 members of the team–including Ghodasara–returned to Microsoft for a HoloLens programming deep dive.

Ghodasara already had taken the traditional anatomy class at Case Western Reserve, but it wasn’t until he used the HoloLens headset that he first visualized the aortic valve in its entirety–unobstructed by other elements of the cardiac system and undamaged by earlier dissection efforts. Members of the Microsoft team were in the room when Ghodasara had his “aha” moment; a few weeks later, the heart demonstration became part of the Build conference agenda.

Case Western Reserve is the only university represented during the three-day event, a distinction Griswold attributes in part to the core team’s breadth of expertise and collegial approach.

“Without all of those people coming together,” Griswold said, “this would not have happened.”

When Griswold took the stage as part of Microsoft’s opening keynote at the Build conference, Ghodasara, Buchner and Chief Information Officer Sue Workman also were in the audience. Back in Cleveland, three of Professor Buchner’s undergraduates–John Billingsley, Henry Eastman and Tim Sesler–demonstrated some of the potential of the HoloLens technology live in the Tinkham Veale University Center.

Buchner, whose specialties include simulation and game design, believes Microsoft’s innovation “has the capability to transform engineering education.”

Because the technology is relatively easy to use, students will be able to build, operate and analyze all manner of devices and systems. “[It will] encourage experimentation,” Buchner said, “leading to deeper understanding and improved product design.”

In truth, HoloLens ultimately could have applications for dozens of Case Western Reserve’s academic programs. NASA’s Jet Propulsion Laboratory already has worked with Microsoft to develop software that will allow Earth-based scientists to work on Mars with a specially designed rover vehicle. A similar collaboration could enable students here to take part in archeological digs around the world. Or astronomy students could stand in the midst of colliding galaxies, securing a front-row view of the unfolding chaos. Art history professors could present masterpieces in their original settings–a centuries-old castle, or even the Sistine Chapel.

“The whole campus has the potential to use this,” Griswold said. “Our ability to use this for education is almost limitless.”

For now, however, the top priority is creating a full digital anatomy curriculum, a process launched with the advent of the Health Education Campus, and now experiencing even greater momentum. Among the key collaborators are a team of medical students and anatomy and radiology faculty who are already investigating the use of these kinds of technology. This team, led by Amy Wilson-Delfosse, the medical school’s associate dean for curriculum, and Suzanne Wish-Baratz, an assistant professor who is one of the primary leaders of anatomy education, fully expects to have a digital curriculum ready for the new Health Education Campus.

Also essential, Griswold said, has been the advice and assistance of Microsoft’s HoloLens team and executives.

“It’s been a joy to work with them. They have been so friendly, so collaborative, so willing to work with us on this,” Griswold said. “We’re going to do incredible things together.”

Ohio is not the only state where virtual reality is being incorporated into medical education.

Florida

From an April 30, 2015 Next Galaxy Corp. news release,

Incorporating eye gaze control, gestures, and voice commands while “walking around” inside an emergency medical experience, Next Galaxy’s Virtual Reality Model educates participants far beyond today’s methodology of passively watching video and taking written tests.

Next Galaxy Corp (OTC: NXGA) recently announced the signing of an agreement with Miami Children’s Hospital to use the Company’s VR Model to develop immersive Virtual Reality medical instructional content for patient and medical professional education. Per the multi-year agreement, Next Galaxy and Miami Children’s Hospital are jointly developing VR Instructionals on cardiopulmonary resuscitation (CPR) and other lifesaving procedures, which will be released as an application for smartphones.

Assessments are incorporated directly into the medical VR models, creating situations where participants are required to make the appropriate decisions about proper techniques. The Virtual CPR instructional will measure metrics and provide real-time feedback ensuring participants accurately perform CPR techniques. Further, the instructional will explain any mistake and prompt users to try again when errors are made. Supportive messages are delivered upon success.
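The release doesn’t say how those assessments are implemented, but the logic it describes (measure a metric, compare it to a target, explain the mistake, prompt a retry) is easy to picture. Here is a minimal, purely hypothetical Python sketch of a compression-rate check with that kind of feedback; the function names, target range and numbers are mine, not Next Galaxy’s.

from dataclasses import dataclass
from typing import List

TARGET_RATE_RANGE = (100, 120)  # widely cited range of compressions per minute

@dataclass
class CompressionEvent:
    timestamp: float  # seconds since the start of the drill

def compressions_per_minute(events: List[CompressionEvent]) -> float:
    """Estimate the compression rate from detected compression timestamps."""
    if len(events) < 2:
        return 0.0
    elapsed = events[-1].timestamp - events[0].timestamp
    return (len(events) - 1) * 60.0 / elapsed if elapsed > 0 else 0.0

def feedback(events: List[CompressionEvent]) -> str:
    """Return the kind of prompt the instructional might show the user."""
    rate = compressions_per_minute(events)
    low, high = TARGET_RATE_RANGE
    if rate < low:
        return f"Too slow ({rate:.0f}/min). Try again and push faster."
    if rate > high:
        return f"Too fast ({rate:.0f}/min). Try again at a steadier pace."
    return f"Good pace ({rate:.0f}/min). Keep going!"

# Simulated drill: one compression roughly every 0.55 seconds (~109/min).
drill = [CompressionEvent(timestamp=i * 0.55) for i in range(30)]
print(feedback(drill))

In a real instructional the compression events would presumably come from the headset’s sensors or the phone’s microphone rather than a simulated list.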

The medical VR models will be viewable through smartphones and desktops as 3D, and via VR devices such as Google Cardboard, VRONE and Oculus Rift.

About Next Galaxy Corporation

Next Galaxy Corporation is a leading developer of innovative content solutions and fully Immersive Consumer Virtual Reality technology. The Company’s flagship consumer product in development is CEEK, a next-generation fully immersive entertainment and educational social virtual reality platform featuring a combination of live action and 3D experiences. Next Galaxy’s CEEK simulates the communal experience of attending events, such as concerts, sporting events, movies or conferences through Virtual Reality. Next Galaxy is developing entertainment and educational experiences for VR Cinema, VR Concerts, VR Sports, VR Business, VR Tourism and more. In short, Next Galaxy is building the meeting places of the future. For further information, visit www.nextgalaxycorp.com

This seems to be the second time this information has been distributed (March 11, 2015 news release on PRNewswire), a widely adopted practice. Consequently and thankfully, there’s a March 11, 2015 article by Celia Ampel for the South Florida Business Journal which provides more details about the technology and explains how a smartphone fits into virtual reality,

The best way to learn CPR is an immersive experience, Miami Children’s Hospital leaders believe — not a video.

“If I’m watching a video, I can pause and count, but there’s no way to tell if I counted to six or seven,” Next Galaxy President Mary Spio said. “Because [the virtual reality application] is voice-activated, they’re going to be able to count out loud and self-assess whether they’re doing it correctly.”

Next Galaxy (Pink Sheets: NXGA)’s virtual reality technology uses a smartphone app. Users can put their smartphone into a virtual reality headset for an immersive experience, or see 3D content through the phone.

The application will be available to the public in the next few months, Spio said.

This deal and another with Miami-Dade County Public Schools are transforming Next Galaxy Corp, according to Ampel’s article,

The five-person company will be hiring about 20 full-time employees in the next six months, focusing on developers with 3D modeling and gaming experience, she said.

Expanding a five-person company to roughly 25 employees in six months can be quite a challenge. I wish them good luck with their expansion and their virtual reality course materials.

As to what all this mixed-reality/virtual reality might look like, there’s this image from Case Western Reserve University,

Courtesy: Case Western Reserve University

Brain, brains, brains: a roundup

I’ve decided to do a roundup of the various brain-related projects I’ve been coming across in the last several months. I was inspired by this article (Real-life Jedi: Pushing the limits of mind control) by Katia Moskvitch,

You don’t have to be a Jedi to make things move with your mind.

Granted, we may not be able to lift a spaceship out of a swamp like Yoda does in The Empire Strikes Back, but it is possible to steer a model car, drive a wheelchair and control a robotic exoskeleton with just your thoughts.

We are standing in a testing room at IBM’s Emerging Technologies lab in Winchester, England.

On my head is a strange headset that looks like a black plastic squid. Its 14 tendrils, each capped with a moistened electrode, are supposed to detect specific brain signals.

In front of us is a computer screen, displaying an image of a floating cube.

As I think about pushing it, the cube responds by drifting into the distance.

Moskvitch goes on to discuss a number of projects that translate thought into movement via various pieces of equipment before she mentions a project at Brown University (US) where researchers are implanting computer chips into brains,

Headsets and helmets offer cheap, easy-to-use ways of tapping into the mind. But there are other,

“Imagine some kind of a wireless computer device in your head that you’ll use for mind control – what if people hacked into that?”

At Brown Institute for Brain Science in the US, scientists are busy inserting chips right into the human brain.

The technology, dubbed BrainGate, sends mental commands directly to a PC.

Subjects still have to be physically “plugged” into a computer via cables coming out of their heads, in a setup reminiscent of the film The Matrix. However, the team is now working on miniaturising the chips and making them wireless.

The researchers are recruiting for human clinical trials, from the BrainGate Clinical Trials webpage,

Clinical Trials – Now Recruiting

The purpose of the first phase of the pilot clinical study of the BrainGate2 Neural Interface System is to obtain preliminary device safety information and to demonstrate the feasibility of people with tetraplegia using the System to control a computer cursor and other assistive devices with their thoughts. Another goal of the study is to determine the participants’ ability to operate communication software, such as e-mail, simply by imagining the movement of their own hand. The study is invasive and requires surgery.

Individuals with limited or no ability to use both hands due to cervical spinal cord injury, brainstem stroke, muscular dystrophy, or amyotrophic lateral sclerosis (ALS) or other motor neuron diseases are being recruited into a clinical study at Massachusetts General Hospital (MGH) and Stanford University Medical Center. Clinical trial participants must live within a three-hour drive of Boston, MA or Palo Alto, CA. Clinical trial sites at other locations may be opened in the future. The study requires a commitment of 13 months.

They have been recruiting since at least November 2011, from the Nov. 14, 2011 news item by Tanya Lewis on MedicalXpress,

Stanford University researchers are enrolling participants in a pioneering study investigating the feasibility of people with paralysis using a technology that interfaces directly with the brain to control computer cursors, robotic arms and other assistive devices.

The pilot clinical trial, known as BrainGate2, is based on technology developed at Brown University and is led by researchers at Massachusetts General Hospital, Brown and the Providence Veterans Affairs Medical Center. The researchers have now invited the Stanford team to establish the only trial site outside of New England.

Under development since 2002, BrainGate is a combination of hardware and software that directly senses electrical signals in the brain that control movement. The device — a baby-aspirin-sized array of electrodes — is implanted in the cerebral cortex (the outer layer of the brain) and records its signals; computer algorithms then translate the signals into digital instructions that may allow people with paralysis to control external devices.
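The press materials describe the decoding step only in outline: electrode signals go in, cursor commands come out. A common textbook way to do that is a linear decoder fit on calibration data, mapping binned firing rates to cursor velocity. The sketch below illustrates that generic approach with simulated data; it is not BrainGate’s actual algorithm, and every number in it is invented.

import numpy as np

rng = np.random.default_rng(0)

N_CHANNELS = 96          # electrodes on the implanted array (illustrative)
N_TRAIN = 2000           # calibration time bins

# Simulated "tuning": how each channel relates to intended 2-D velocity.
true_weights = rng.normal(size=(N_CHANNELS, 2))

# Simulated calibration data: binned firing rates and the cursor velocities
# the participant was asked to imagine during calibration.
rates = rng.poisson(lam=5.0, size=(N_TRAIN, N_CHANNELS)).astype(float)
velocity = rates @ true_weights + rng.normal(scale=5.0, size=(N_TRAIN, 2))

# Fit a ridge-regression decoder: W = (X'X + lambda*I)^-1 X'Y
lam = 1.0
W = np.linalg.solve(rates.T @ rates + lam * np.eye(N_CHANNELS), rates.T @ velocity)

# Decode a new bin of firing rates into a cursor velocity command.
new_bin = rng.poisson(lam=5.0, size=(1, N_CHANNELS)).astype(float)
vx, vy = (new_bin @ W)[0]
print(f"decoded cursor velocity: ({vx:.1f}, {vy:.1f})")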

Confusingly, there seem to be two BrainGate organizations. One appears to be a research entity where a number of institutions collaborate and the other is some sort of jointly held company. From the About Us webpage of the BrainGate research entity,

In the late 1990s, the initial translation of fundamental neuroengineering research from “bench to bedside” – that is, to pilot clinical testing – would require a level of financial commitment ($10s of millions) available only from private sources. In 2002, a Brown University spin-off/startup medical device company, Cyberkinetics, Inc. (later, Cyberkinetics Neurotechnology Systems, Inc.) was formed to collect the regulatory permissions and financial resources required to launch pilot clinical trials of a first-generation neural interface system. The company’s efforts and substantial initial capital investment led to the translation of the preclinical research at Brown University to an initial human device, the BrainGate Neural Interface System [Caution: Investigational Device. Limited by Federal Law to Investigational Use]. The BrainGate system uses a brain-implantable sensor to detect neural signals that are then decoded to provide control signals for assistive technologies. In 2004, Cyberkinetics received from the U.S. Food and Drug Administration (FDA) the first of two Investigational Device Exemptions (IDEs) to perform this research. Hospitals in Rhode Island, Massachusetts, and Illinois were established as clinical sites for the pilot clinical trial run by Cyberkinetics. Four trial participants with tetraplegia (decreased ability to use the arms and legs) were enrolled in the study and further helped to develop the BrainGate device. Initial results from these trials have been published or presented, with additional publications in preparation.

While scientific progress towards the creation of this promising technology has been steady and encouraging, Cyberkinetics’ financial sponsorship of the BrainGate research – without which the research could not have been started – began to wane. In 2007, in response to business pressures and changes in the capital markets, Cyberkinetics turned its focus to other medical devices. Although Cyberkinetics’ own funds became unavailable for BrainGate research, the research continued through grants and subcontracts from federal sources. By early 2008 it became clear that Cyberkinetics would eventually need to withdraw completely from directing the pilot clinical trials of the BrainGate device. Also in 2008, Cyberkinetics spun off its device manufacturing to new ownership, BlackRock Microsystems, Inc., which now produces and is further developing research products as well as clinically-validated (510(k)-cleared) implantable neural recording devices.

Beginning in mid 2008, with the agreement of Cyberkinetics, a new, fully academically-based IDE application (for the “BrainGate2 Neural Interface System”) was developed to continue this important research. In May 2009, the FDA provided a new IDE for the BrainGate2 pilot clinical trial. [Caution: Investigational Device. Limited by Federal Law to Investigational Use.] The BrainGate2 pilot clinical trial is directed by faculty in the Department of Neurology at Massachusetts General Hospital, a teaching affiliate of Harvard Medical School; the research is performed in close scientific collaboration with Brown University’s Department of Neuroscience, School of Engineering, and Brown Institute for Brain Sciences, and the Rehabilitation Research and Development Service of the U.S. Department of Veteran’s Affairs at the Providence VA Medical Center. Additionally, in late 2011, Stanford University joined the BrainGate Research Team as a clinical site and is currently enrolling participants in the clinical trial. This interdisciplinary research team includes scientific partners from the Functional Electrical Stimulation Center at Case Western Reserve University and the Cleveland VA Medical Center. As was true of the decades of fundamental, preclinical research that provided the basis for the recent clinical studies, funding for BrainGate research is now entirely from federal and philanthropic sources.

The BrainGate Research Team at Brown University, Massachusetts General Hospital, Stanford University, and Providence VA Medical Center comprises physicians, scientists, and engineers working together to advance understanding of human brain function and to develop neurotechnologies for people with neurologic disease, injury, or limb loss.

I think they’re saying there was a reverse takeover of Cyberkinetics, from the BrainGate company About webpage,

The BrainGate™ Co. is a privately-held firm focused on the advancement of the BrainGate™ Neural Interface System.  The Company owns the Intellectual property of the BrainGate™ system as well as new technology being developed by the BrainGate company.  In addition, the Company also owns  the intellectual property of Cyberkinetics which it purchased in April 2009.

Meanwhile, in Europe there are two projects, BrainAble and the Human Brain Project. The BrainAble project is similar to BrainGate in that it is intended for people with injuries, but they seem to be concentrating on a helmet or cap for thought transmission (as per Moskvitch’s experience at the beginning of this posting). From the Feb. 28, 2012 news item on Science Daily,

In the 2009 film Surrogates, humans live vicariously through robots while safely remaining in their own homes. That sci-fi future is still a long way off, but recent advances in technology, supported by EU funding, are bringing this technology a step closer to reality in order to give disabled people more autonomy and independence than ever before.

“Our aim is to give people with motor disabilities as much autonomy as technology currently allows and in turn greatly improve their quality of life,” says Felip Miralles at Barcelona Digital Technology Centre, a Spanish ICT research centre.

Mr. Miralles is coordinating the BrainAble* project (http://www.brainable.org/), a three-year initiative supported by EUR 2.3 million in funding from the European Commission to develop and integrate a range of different technologies, services and applications into a commercial system for people with motor disabilities.

Here’s more from the BrainAble home page,

In terms of HCI [human-computer interface], BrainAble improves both direct and indirect interaction between the user and his smart home. Direct control is upgraded by creating tools that allow controlling inner and outer environments using a “hybrid” Brain Computer Interface (BNCI) system able to take into account other sources of information such as measures of boredom, confusion, frustration by means of the so-called physiological and affective sensors.

Furthermore, interaction is enhanced by means of Ambient Intelligence (AmI) focused on creating proactive and context-aware environments by adding intelligence to the user’s surroundings. AmI’s main purpose is to aid and facilitate the user’s living conditions by creating proactive environments to provide assistance.

Human-Computer Interfaces are complemented by an intelligent Virtual Reality-based user interface with avatars and scenarios that will help the disabled move around freely, and interact with any sort of devices. Even more the VR will provide self-expression assets using music, pictures and text, communicate online and offline with other people, play games to counteract cognitive decline, and get trained in new functionalities and tasks.
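The BrainAble description is abstract, but one way to read the “hybrid” idea is a command layer that checks the affective sensors before acting on a decoded command. The following is a hypothetical Python sketch of that gating logic; the class names and thresholds are mine and do not come from the project.

from dataclasses import dataclass

@dataclass
class AffectiveState:
    frustration: float  # 0.0 (calm) to 1.0 (very frustrated)
    confusion: float    # 0.0 (clear) to 1.0 (very confused)

def execute_command(command: str, state: AffectiveState,
                    threshold: float = 0.7) -> str:
    """Run the decoded command unless the affective sensors flag trouble."""
    if state.frustration > threshold or state.confusion > threshold:
        return f"held '{command}' and asked the user to confirm"
    return f"executed '{command}'"

print(execute_command("lights_on", AffectiveState(frustration=0.2, confusion=0.1)))
print(execute_command("lights_on", AffectiveState(frustration=0.9, confusion=0.3)))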

Perhaps this video helps,

Another European project, NeuroCare, which I discussed in my March 5, 2012 posting, is focused on creating neural implants to replace damaged and/or destroyed sensory cells in the eye or the ear.

The Human Brain Project is, despite its title, a neuromorphic engineering project (although the researchers do mention some medical applications on the project’s home page), in common with the work being done at the University of Michigan/HRL Labs mentioned in my April 19, 2012 posting (A step closer to artificial synapses courtesy of memristors). From the April 11, 2012 news item about the Human Brain Project on Science Daily,

Researchers at the EPFL [Ecole Polytechnique Fédérale de Lausanne] have discovered rules that relate the genes that a neuron switches on and off, to the shape of that neuron, its electrical properties and its location in the brain.

The discovery, using state-of-the-art informatics tools, increases the likelihood that it will be possible to predict much of the fundamental structure and function of the brain without having to measure every aspect of it. That in turn makes the Holy Grail of modelling the brain in silico — the goal of the proposed Human Brain Project — a more realistic, less Herculean, prospect. “It is the door that opens to a world of predictive biology,” says Henry Markram, the senior author on the study, which is published this week in PLoS ONE.

Here’s a bit more about the Human Brain Project (from the home page),

Today, simulating a single neuron requires the full power of a laptop computer. But the brain has billions of neurons and simulating all of them simultaneously is a huge challenge. To get round this problem, the project will develop novel techniques of multi-level simulation in which only groups of neurons that are highly active are simulated in detail. But even in this way, simulating the complete human brain will require a computer a thousand times more powerful than the most powerful machine available today. This means that some of the key players in the Human Brain Project will be specialists in supercomputing. Their task: to work with industry to provide the project with the computing power it will need at each stage of its work.
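To make the multi-level idea a little more concrete, here is a toy Python sketch of the scheduling decision the quote describes: groups whose activity crosses a threshold are earmarked for detailed simulation, the rest for a cheap aggregate update. The group counts, threshold and cost accounting are invented for illustration and have nothing to do with the project’s actual simulators.

import numpy as np

rng = np.random.default_rng(1)

N_GROUPS = 1_000
NEURONS_PER_GROUP = 100
ACTIVITY_THRESHOLD = 0.9   # fraction of peak activity that triggers detail

# Simulated per-group activity level for the current time step (0..1).
activity = rng.random(N_GROUPS)
detailed_ids = np.flatnonzero(activity > ACTIVITY_THRESHOLD)
coarse_ids = np.flatnonzero(activity <= ACTIVITY_THRESHOLD)

# Rough cost accounting: a detailed group tracks every neuron, a coarse
# group tracks a single population rate.
detailed_cost = detailed_ids.size * NEURONS_PER_GROUP
coarse_cost = coarse_ids.size

print(f"{detailed_ids.size} groups simulated in detail, {coarse_ids.size} coarsely")
print(f"work this step: {detailed_cost + coarse_cost} units "
      f"vs {N_GROUPS * NEURONS_PER_GROUP} for full detail")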

The Human Brain Project will impact many different areas of society. Brain simulation will provide new insights into the basic causes of neurological diseases such as autism, depression, Parkinson’s, and Alzheimer’s. It will give us new ways of testing drugs and understanding the way they work. It will provide a test platform for new drugs that directly target the causes of disease and that have fewer side effects than current treatments. It will allow us to design prosthetic devices to help people with disabilities. The benefits are potentially huge. As world populations grow older, more than a third will be affected by some kind of brain disease. Brain simulation provides us with a powerful new strategy to tackle the problem.

The project also promises to become a source of new Information Technologies. Unlike the computers of today, the brain has the ability to repair itself, to take decisions, to learn, and to think creatively – all while consuming no more energy than an electric light bulb. The Human Brain Project will bring these capabilities to a new generation of neuromorphic computing devices, with circuitry directly derived from the circuitry of the brain. The new devices will help us to build a new generation of genuinely intelligent robots to help us at work and in our daily lives.

The Human Brain Project builds on the work of the Blue Brain Project. Led by Henry Markram of the Ecole Polytechnique Fédérale de Lausanne (EPFL), the Blue Brain Project has already taken an essential first step towards simulation of the complete brain. Over the last six years, the project has developed a prototype facility with the tools, know-how and supercomputing technology necessary to build brain models, potentially of any species at any stage in its development. As a proof of concept, the project has successfully built the first ever, detailed model of the neocortical column, one of the brain’s basic building blocks.

The Human Brain Project is a flagship project in contention for the 1B Euro research prize that I’ve mentioned in the context of the GRAPHENE-CA flagship project (my Feb. 13, 2012 posting gives a better description of these flagship projects while mentioning both GRAPHENE-CA and another brain-computer interface project, PRESENCCIA).

Part of the reason for doing this roundup, is the opportunity to look at a number of these projects in one posting; the effect is more overwhelming than I expected.

For anyone who’s interested in Markram’s paper (open access),

Georges Khazen, Sean L. Hill, Felix Schürmann, Henry Markram. Combinatorial Expression Rules of Ion Channel Genes in Juvenile Rat (Rattus norvegicus) Neocortical Neurons. PLoS ONE, 2012; 7 (4): e34786 DOI: 10.1371/journal.pone.0034786

I do have earlier postings on brains and neuroprostheses; one of the more recent is this March 16, 2012 posting. Meanwhile, there are new announcements from Northwestern University (US) and the US National Institutes of Health (National Institute of Neurological Disorders and Stroke). From the April 18, 2012 news item (originating from the National Institutes of Health) on Science Daily,

An artificial connection between the brain and muscles can restore complex hand movements in monkeys following paralysis, according to a study funded by the National Institutes of Health.

In a report in the journal Nature, researchers describe how they combined two pieces of technology to create a neuroprosthesis — a device that replaces lost or impaired nervous system function. One piece is a multi-electrode array implanted directly into the brain which serves as a brain-computer interface (BCI). The array allows researchers to detect the activity of about 100 brain cells and decipher the signals that generate arm and hand movements. The second piece is a functional electrical stimulation (FES) device that delivers electrical current to the paralyzed muscles, causing them to contract. The brain array activates the FES device directly, bypassing the spinal cord to allow intentional, brain-controlled muscle contractions and restore movement.

From the April 19, 2012 news item (originating from Northwestern University) on Science Daily,

A new Northwestern Medicine brain-machine technology delivers messages from the brain directly to the muscles — bypassing the spinal cord — to enable voluntary and complex movement of a paralyzed hand. The device could eventually be tested on, and perhaps aid, paralyzed patients.

The research was done in monkeys, whose electrical brain and muscle signals were recorded by implanted electrodes when they grasped a ball, lifted it and released it into a small tube. Those recordings allowed the researchers to develop an algorithm or “decoder” that enabled them to process the brain signals and predict the patterns of muscle activity when the monkeys wanted to move the ball.

These experiments were performed by Christian Ethier, a post-doctoral fellow, and Emily Oby, a graduate student in neuroscience, both at the Feinberg School of Medicine. The researchers gave the monkeys a local anesthetic to block nerve activity at the elbow, causing temporary, painless paralysis of the hand. With the help of the special devices in the brain and the arm — together called a neuroprosthesis — the monkeys’ brain signals were used to control tiny electric currents delivered in less than 40 milliseconds to their muscles, causing them to contract, and allowing the monkeys to pick up the ball and complete the task nearly as well as they did before.
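Putting the pieces of that description together, the loop is: read a bin of neural activity, decode it into intended muscle activations, convert those to stimulation currents, and do it all within roughly 40 milliseconds. Here is a hypothetical Python sketch of that loop; the channel counts, decoder weights and current scaling are invented, and this is not the researchers’ code.

import time
import numpy as np

rng = np.random.default_rng(2)

N_CHANNELS = 100          # roughly the number of brain cells the array records
N_MUSCLES = 4             # muscles receiving functional electrical stimulation
LATENCY_BUDGET_S = 0.040  # the ~40 millisecond figure quoted in the news item

# Pretend this decoder was already fit from recordings of normal grasping.
decoder = rng.normal(scale=0.05, size=(N_CHANNELS, N_MUSCLES))

def read_neural_activity() -> np.ndarray:
    """Stand-in for one bin of spike counts from the implanted array."""
    return rng.poisson(lam=3.0, size=N_CHANNELS).astype(float)

def decode_muscle_activity(spikes: np.ndarray) -> np.ndarray:
    """Predict intended activation (0..1) for each muscle."""
    return np.clip(spikes @ decoder, 0.0, 1.0)

def stimulate(activations: np.ndarray) -> None:
    """Stand-in for commanding the FES device with per-muscle currents."""
    currents_mA = 10.0 * activations  # purely illustrative scaling
    print("stimulation (mA):", np.round(currents_mA, 2))

start = time.perf_counter()
stimulate(decode_muscle_activity(read_neural_activity()))
elapsed = time.perf_counter() - start
print(f"loop time: {elapsed * 1000:.2f} ms (budget {LATENCY_BUDGET_S * 1000:.0f} ms)")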

“The monkey won’t use his hand perfectly, but there is a process of motor learning that we think is very similar to the process you go through when you learn to use a new computer mouse or a different tennis racquet. Things are different and you learn to adjust to them,” said Miller [Lee E. Miller], also a professor of physiology and of physical medicine and rehabilitation at Feinberg and a Sensory Motor Performance Program lab chief at the Rehabilitation Institute of Chicago.

The National Institutes of Health news item supplies a little history and background for this latest breakthrough, while the Northwestern University news item offers more technical details.

You can find the researchers’ paper with this citation (assuming you can get past the paywall),

C. Ethier, E. R. Oby, M. J. Bauman, L. E. Miller. Restoration of grasp following paralysis through brain-controlled stimulation of muscles. Nature, 2012; DOI: 10.1038/nature10987

I was surprised to find the Health Research Fund of Québec listed as one of the funders but perhaps Christian Ethier has some connection with the province.

Less confused about Europe’s FET (Future and Emerging Technologies programme)

I’ve had problems trying to figure out the European Union’s Future and Emerging Technologies programme, so I’m glad to say that the Feb. 10, 2012 news item on Nanowerk offers to clear up a few matters for me (and presumably a few other people too).

From the news item,

Go forth and explore the frontiers of science and technology! This is the unspoken motto of the Future and Emerging Technologies programme (FET), which has for more than 20 years been funding and inspiring researchers across Europe to lay new foundations for information and communication technology (ICT). [emphasis mine]

The vanguard researchers of frontier ICT research don’t always come from IT backgrounds or follow the traditional academic career path. The European Commission’s FET programme encourages unconventional match-ups like chemistry and IT, physics and optics, biology and data engineering. Researchers funded by FET are driven by ideas and a sense of purpose which push the boundaries of science and technology.

They have three funding programmes (from the news item),

To address these challenges, the FET scheme supports long-term ICT programmes under three banners:

  • FET-Open, which has simple and fast mechanisms in place to receive new ideas for projects without pre-conceived boundaries or deadlines;
  • FET-Proactive, which spearheads ‘transformative’ research and supports community-building around a number of fundamental long-term ICT challenges; and
  • FET Flagships, which cut across national and European programmes to unite top research teams pursuing ambitious, large-scale, science-driven research with a visionary goal.

The news item goes on to describe a number of projects including the GRAPHENE-CA flagship pilot, which is currently under consideration, along with five other flagship projects, for one of two 1 Billion Euro prizes. I have commented before (my Feb. 6, 2012 posting) on the communication strategies being employed by at least some of the members of this particular flagship project. Amazingly, they’ve done it again; theirs is the only flagship pilot project mentioned.

You can see the original article on the European Union website here where they have described other projects including this one, PRESENCCIA,

‘Light switches, TV remote controls and even house keys could become a thing of the past thanks to brain-computer interface (BCI) technology being developed in Europe that lets users perform everyday tasks with thoughts alone.’ So begins a story on ICT Results about a pioneering EU-funded FET project called Presenccia*.

Primary applications of BCI are in gaming/virtual reality (VR), home entertainment and domestic care, but the project partners also see their work helping the medical profession. ‘A virtual environment could be used to train a disabled person to control an electric wheelchair through a BCI,’ explained Mel Slater, the project coordinator. ‘It is much safer for them to learn in VR than in the real world, where mistakes could have physical consequences.’

So, PRESENCCIA is a project whereby people will be trained to use a BCI in virtual reality before attempting it in real life. I wish there were a bit more information about this BCI technology being developed in Europe, as I am deeply fascinated and horrified by this notion of thought waves that ‘turn light switches on and off’ or possibly allow you to make a phone call, as Professor Mark Welland at Cambridge University was speculating in 2010 (mentioned in my April 30, 2010 posting [scroll 1/2 way down]). Welland did mention that you would need some sort of brain implant to achieve a phone call with your thought waves, which is the aspect that makes me most uncomfortable.