A new software system developed by Brown University [US] researchers turns cell phones into augmented reality portals, enabling users to place virtual building blocks, furniture and other objects into real-world backdrops, and use their hands to manipulate those objects as if they were really there.
The developers hope the new system, called Portal-ble, could be a tool for artists, designers, game developers and others to experiment with augmented reality (AR). The team will present the work later this month at the ACM Symposium on User Interface Software and Technology (UIST 2019) in New Orleans. The source code for Android is freely available for download on the researchers’ website, and iPhone code will follow soon.
“AR is going to be a great new mode of interaction,” said Jeff Huang, an assistant professor of computer science at Brown who developed the system with his students. “We wanted to make something that made AR portable so that people could use it anywhere without any bulky headsets. We also wanted people to be able to interact with the virtual world in a natural way using their hands.”
Huang said the idea for Portal-ble’s “hands-on” interaction grew out of some frustration with AR apps like Pokemon GO. AR apps use smartphones to place virtual objects (like Pokemon characters) into real-world scenes, but interacting with those objects requires users to swipe on the screen.
“Swiping just wasn’t a satisfying way of interacting,” Huang said. “In the real world, we interact with objects with our hands. We turn doorknobs, pick things up and throw things. So we thought manipulating virtual objects by hand would be much more powerful than swiping. That’s what’s different about Portal-ble.”
The platform makes use of a small infrared sensor mounted on the back of a phone. The sensor tracks the position of people’s hands in relation to virtual objects, enabling users to pick objects up, turn them, stack them or drop them. It also lets people use their hands to virtually “paint” onto real-world backdrops. As a demonstration, Huang and his students used the system to paint a virtual garden into a green space on Brown’s College Hill campus.
Huang says the main technical contribution of the work was developing the right accommodations and feedback tools to enable people to interact intuitively with virtual objects.
“It turns out that picking up a virtual object is really hard if you try to apply real-world physics,” Huang said. “People try to grab in the wrong place, or they put their fingers through the objects. So we had to observe how people tried to interact with these objects and then make our system able to accommodate those tendencies.”
To do that, Huang enlisted students in a class he was teaching to come up with tasks they might want to do in the AR world — stacking a set of blocks, for example. The students then asked other people to try performing those tasks using Portal-ble, while recording what people were able to do and what they couldn’t. They could then adjust the system’s physics and user interface to make interactions more successful.
“It’s a little like what happens when people draw lines in Photoshop,” Huang said. “The lines people draw are never perfect, but the program can smooth them out and make them perfectly straight. Those were the kinds of accommodations we were trying to make with these virtual objects.”
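The kind of accommodation Huang describes can be sketched in a few lines. The following is a hypothetical illustration, not Portal-ble's actual code: the function names, object representation and tolerance value are all my own assumptions. The idea is simply that a grab that overshoots or falls slightly short of an object still "counts", the way Photoshop straightens a hand-drawn line.

```python
import math

GRAB_TOLERANCE = 0.05  # metres; assumed forgiveness radius around each object


def snap_grab(hand_pos, objects):
    """Return the object considered 'grabbed', or None.

    hand_pos: (x, y, z) fingertip position from the hand tracker.
    objects:  list of dicts, each with a 'center' (x, y, z) and 'radius'.
    A grab succeeds if the hand is within GRAB_TOLERANCE of an object's
    surface, even if the fingers overshoot into the object or stop short.
    """
    best, best_gap = None, GRAB_TOLERANCE
    for obj in objects:
        dist = math.dist(hand_pos, obj["center"])
        gap = abs(dist - obj["radius"])  # distance from the object's surface
        if gap <= best_gap:
            best, best_gap = obj, gap
    return best


cube = {"name": "cube", "center": (0.0, 1.0, 0.5), "radius": 0.1}
# Fingers poke 3 cm inside the surface -- still counts as a grab.
print(snap_grab((0.0, 1.0, 0.43), [cube])["name"])  # cube
```

A real system would track all five fingers and check grip closure as well, but the snapping logic above captures the "smooth out imperfect input" accommodation the researchers describe.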
The team also added sensory feedback — visual highlights on objects and phone vibrations — to make interactions easier. Huang said he was somewhat surprised that the vibrations helped, since users feel them in the hand holding the phone, not in the hand grabbing for the virtual object. Even so, the vibration feedback helped users interact with objects more successfully.
In follow-up studies, users reported that the accommodations and feedback used by the system made tasks significantly easier, less time-consuming and more satisfying.
Huang and his students plan to continue working with Portal-ble — expanding its object library, refining interactions and developing new activities. They also hope to streamline the system to make it run entirely on a phone. Currently, the system requires the external infrared sensor plus a compute stick for extra processing power.
Huang hopes people will download the freely available source code and try it for themselves. “We really just want to put this out there and see what people do with it,” he said. “The code is on our website for people to download, edit and build off of. It will be interesting to see what people do with it.”
Co-authors on the research paper were Jing Qian, Jiaju Ma, Xiangyu Li, Benjamin Attal, Haoming Lai, James Tompkin and John Hughes. The work was supported by the National Science Foundation (IIS-1552663) and by a gift from Pixar.
This is the first time I’ve seen an augmented reality system that seems accessible, i.e., affordable. You can find out more on the Portal-ble ‘resource’ page where you’ll also find a link to the source code repository. The researchers, as noted in the news release, have an Android version available now with an iPhone version to be released in the future.
When you think of robotics, you likely think of something rigid, heavy, and built for a specific purpose. New “Robotic Skins” technology developed by Yale researchers flips that notion on its head, allowing users to animate the inanimate and turn everyday objects into robots.
The skins are made from elastic sheets embedded with sensors and actuators developed in the lab of Rebecca Kramer-Bottiglio, an assistant professor of mechanical engineering and materials science at Yale. Placed on a deformable object — a stuffed animal or a foam tube, for instance — the skins animate these objects from their surfaces. The makeshift robots can perform different tasks depending on the properties of the soft objects and how the skins are applied.
“We can take the skins and wrap them around one object to perform a task — locomotion, for example — and then take them off and put them on a different object to perform a different task, such as grasping and moving an object,” she said. “We can then take those same skins off that object and put them on a shirt to make an active wearable device.”
Robots are typically built with a single purpose in mind. The robotic skins, however, allow users to create multi-functional robots on the fly. That means they can be used in settings that hadn’t even been considered when they were designed, said Kramer-Bottiglio.
Additionally, using more than one skin at a time allows for more complex movements. For instance, Kramer-Bottiglio said, you can layer the skins to get different types of motion. “Now we can get combined modes of actuation — for example, simultaneous compression and bending.”
To demonstrate the robotic skins in action, the researchers created a handful of prototypes. These include foam cylinders that move like an inchworm, a shirt-like wearable device designed to correct poor posture, and a device with a gripper that can grasp and move objects.
Kramer-Bottiglio said she came up with the idea for the devices a few years ago when NASA [US National Aeronautics and Space Administration] put out a call for soft robotic systems. The technology was designed in partnership with NASA, and its multifunctional and reusable nature would allow astronauts to accomplish an array of tasks with the same reconfigurable material. The same skins used to make a robotic arm out of a piece of foam could be removed and applied to create a soft Mars rover that can roll over rough terrain. With the robotic skins on board, the Yale scientist said, anything from balloons to balls of crumpled paper could potentially be made into a robot with a purpose.
“One of the main things I considered was the importance of multifunctionality, especially for deep space exploration where the environment is unpredictable,” she said. “The question is: How do you prepare for the unknown unknowns?”
For the same line of research, Kramer-Bottiglio was recently awarded a $2 million grant from the National Science Foundation, as part of its Emerging Frontiers in Research and Innovation program.
Next, she said, the lab will work on streamlining the devices and explore the possibility of 3D printing the components.
Just in case the link to the paper becomes obsolete, here’s a citation for the paper,
One of my earliest posts featuring memristors (May 9, 2008) focused on their potential for energy savings but since then most of my postings feature research into their application in the field of neuromorphic (brainlike) computing. (For a description and abbreviated history of the memristor go to this page on my Nanotech Mysteries Wiki.)
A new way of arranging advanced computer components called memristors on a chip could enable them to be used for general computing, which could cut energy consumption by a factor of 100.
This would improve performance in low power environments such as smartphones or make for more efficient supercomputers, says a University of Michigan researcher.
“Historically, the semiconductor industry has improved performance by making devices faster. But although the processors and memories are very fast, they can’t be efficient because they have to wait for data to come in and out,” said Wei Lu, U-M professor of electrical and computer engineering and co-founder of memristor startup Crossbar Inc.
Memristors might be the answer. Named as a portmanteau of memory and resistor, they can be programmed to have different resistance states–meaning they store information as resistance levels. These circuit elements enable memory and processing in the same device, cutting out the data transfer bottleneck experienced by conventional computers in which the memory is separate from the processor.
… unlike ordinary bits, which are 1 or 0, memristors can have resistances that are on a continuum. Some applications, such as computing that mimics the brain (neuromorphic), take advantage of the analog nature of memristors. But for ordinary computing, trying to differentiate among small variations in the current passing through a memristor device is not precise enough for numerical calculations.
Lu and his colleagues got around this problem by digitizing the current outputs—defining current ranges as specific bit values (i.e., 0 or 1). The team was also able to map large mathematical problems into smaller blocks within the array, improving the efficiency and flexibility of the system.
Computers with these new blocks, which the researchers call “memory-processing units,” could be particularly useful for implementing machine learning and artificial intelligence algorithms. They are also well suited to tasks that are based on matrix operations, such as simulations used for weather prediction. The simplest mathematical matrices, akin to tables with rows and columns of numbers, can map directly onto the grid of memristors.
The memristor array situated on a circuit board. Credit: Mohammed Zidan, Nanoelectronics group, University of Michigan.
Once the memristors are set to represent the numbers, operations that multiply and sum the rows and columns can be taken care of simultaneously, with a set of voltage pulses along the rows. The current measured at the end of each column contains the answers. A typical processor, in contrast, would have to read the value from each cell of the matrix, perform multiplication, and then sum up each column in series.
“We get the multiplication and addition in one step. It’s taken care of through physical laws. We don’t need to manually multiply and sum in a processor,” Lu said.
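The physics Lu alludes to can be emulated in a few lines. This is a toy sketch of the principle, not the Michigan group's code: each memristor's conductance stores a matrix entry, input values are applied as voltage pulses along the rows, and Ohm's and Kirchhoff's laws do the multiply-accumulate, so the current collected at each column is the dot product of that column with the input. The digitization step described earlier is also sketched; the threshold value is an arbitrary assumption for illustration.

```python
import numpy as np

# Conductances programmed into a tiny 2x2 crossbar array.
G = np.array([[1.0, 2.0],
              [3.0, 4.0]])
# Voltage pulses applied along the rows (the input vector).
v = np.array([0.5, 1.0])

# Column currents: I_j = sum_i V_i * G[i, j]. In hardware, multiplication
# and addition happen "in one step" via physical laws; here we emulate
# the whole array read with a single matrix-vector product.
I = v @ G

# Digitizing the analog outputs, as the article describes: map current
# ranges onto bit values rather than trusting small analog differences.
threshold = 4.0  # assumed boundary between the '0' and '1' current ranges
bits = (I > threshold).astype(int)
print(bits)  # [0 1]
```

A conventional processor would read each matrix cell, multiply, and sum each column in series; the crossbar produces all the column sums in parallel in one read.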
His team chose to solve partial differential equations as a test for a 32×32 memristor array—which Lu imagines as just one block of a future system. These equations, including those behind weather forecasting, underpin many problems in science and engineering but are very challenging to solve. The difficulty comes from the complicated forms and multiple variables needed to model physical phenomena.
When solving partial differential equations exactly is impossible, solving them approximately can require supercomputers. These problems often involve very large matrices of data, so the memory-processor communication bottleneck is neatly solved with a memristor array. The equations Lu’s team used in their demonstration simulated a plasma reactor, such as those used for integrated circuit fabrication.
This work is described in a study, “A general memristor-based partial differential equation solver,” published in the journal Nature Electronics.
It was supported by the Defense Advanced Research Projects Agency (DARPA) (grant no. HR0011-17-2-0018) and by the National Science Foundation (NSF) (grant no. CCF-1617315).
This injectable bandage could be a gamechanger (as they say) if it can be taken beyond the ‘in vitro’ (i.e., petri dish) testing stage. A May 22, 2018 news item on Nanowerk makes the announcement (Note: A link has been removed),
While several products are available to quickly seal surface wounds, rapidly stopping fatal internal bleeding has proven more difficult. Now researchers from the Department of Biomedical Engineering at Texas A&M University are developing an injectable hydrogel bandage that could save lives in emergencies such as penetrating shrapnel wounds on the battlefield (Acta Biomaterialia, “Nanoengineered injectable hydrogels for wound healing application”).
The researchers combined a hydrogel base (a water-swollen polymer) and nanoparticles that interact with the body’s natural blood-clotting mechanism. “The hydrogel expands to rapidly fill puncture wounds and stop blood loss,” explained Akhilesh Gaharwar, Ph.D., assistant professor and senior investigator on the work. “The surface of the nanoparticles attracts blood platelets that become activated and start the natural clotting cascade of the body.”
Enhanced clotting when the nanoparticles were added to the hydrogel was confirmed by standard laboratory blood clotting tests. Clotting time was reduced from eight minutes to six minutes when the hydrogel was introduced into the mixture. When nanoparticles were added, clotting time was significantly reduced, to less than three minutes.
In addition to the rapid clotting mechanism of the hydrogel composite, the engineers took advantage of special properties of the nanoparticle component. They found they could use the electric charge of the nanoparticles to add growth factors that efficiently adhered to the particles. “Stopping fatal bleeding rapidly was the goal of our work,” said Gaharwar. “However, we found that we could attach growth factors to the nanoparticles. This was an added bonus because the growth factors act to begin the body’s natural wound healing process—the next step needed after bleeding has stopped.”
The researchers were able to attach vascular endothelial growth factor (VEGF) to the nanoparticles. They tested the hydrogel/nanoparticle/VEGF combination in a cell culture test that mimics the wound healing process. The test uses a petri dish with a layer of endothelial cells on the surface that create a solid skin-like sheet. The sheet is then scratched down the center creating a rip or hole in the sheet that resembles a wound.
When the hydrogel containing VEGF bound to the nanoparticles was added to the damaged endothelial cell wound, the cells were induced to grow back and fill-in the scratched region—essentially mimicking the healing of a wound.
“Our laboratory experiments have verified the effectiveness of the hydrogel for initiating both blood clotting and wound healing,” said Gaharwar. “We are anxious to begin tests in animals with the hope of testing and eventual use in humans where we believe our formulation has great potential to have a significant impact on saving lives in critical situations.”
The work was funded by grant EB023454 from the National Institute of Biomedical Imaging and Bioengineering (NIBIB), and the National Science Foundation. The results were reported in the February issue of the journal Acta Biomaterialia.
A penetrating injury from shrapnel is a serious battlefield wound that can ultimately lead to death. Given the high mortality rates due to hemorrhaging, there is an unmet need for materials that can be quickly self-administered to prevent fatality due to excessive blood loss.
With a gelling agent commonly used in preparing pastries, researchers from the Inspired Nanomaterials and Tissue Engineering Laboratory have successfully fabricated an injectable bandage to stop bleeding and promote wound healing.
In a recent article “Nanoengineered Injectable Hydrogels for Wound Healing Application” published in Acta Biomaterialia, Dr. Akhilesh K. Gaharwar, assistant professor in the Department of Biomedical Engineering at Texas A&M University, uses kappa-carrageenan and nanosilicates to form injectable hydrogels to promote hemostasis (the process to stop bleeding) and facilitate wound healing via a controlled release of therapeutics.
“Injectable hydrogels are promising materials for achieving hemostasis in case of internal injuries and bleeding, as these biomaterials can be introduced into a wound site using minimally invasive approaches,” said Gaharwar. “An ideal injectable bandage should solidify after injection in the wound area and promote a natural clotting cascade. In addition, the injectable bandage should initiate wound healing response after achieving hemostasis.”
The study uses a commonly used thickening agent known as kappa-carrageenan, obtained from seaweed, to design injectable hydrogels. Hydrogels are a 3-D water swollen polymer network, similar to Jell-O, simulating the structure of human tissues.
When kappa-carrageenan is mixed with clay-based nanoparticles, an injectable gel is obtained. The charged characteristics of the clay-based nanoparticles give the hydrogels their hemostatic ability: plasma proteins and platelets adsorb onto the gel surface and trigger the blood-clotting cascade.
“Interestingly, we also found that these injectable bandages can show a prolonged release of therapeutics that can be used to heal the wound,” said Giriraj Lokhande, a graduate student in Gaharwar’s lab and first author of the paper. “The negative surface charge of nanoparticles enabled electrostatic interactions with therapeutics thus resulting in the slow release of therapeutics.”
Nanoparticles that promote blood clotting and wound healing (red discs), attached to the wound-filling hydrogel component (black), form a nanocomposite hydrogel. The gel is designed to be self-administered to stop bleeding and begin wound healing in emergency situations. Credit: Lokhande et al.
It’s been an interesting week for hydrogels. On May 21, 2018 there was a news item on ScienceDaily about a bioengineered hydrogel which stimulated brain tissue growth after a stroke (mouse model),
In a first-of-its-kind finding, a new stroke-healing gel helped regrow neurons and blood vessels in mice with stroke-damaged brains, UCLA researchers report in the May 21 issue of Nature Materials.
“We tested this in laboratory mice to determine if it would repair the brain in a model of stroke, and lead to recovery,” said Dr. S. Thomas Carmichael, Professor and Chair of neurology at UCLA. “This study indicated that new brain tissue can be regenerated in what was previously just an inactive brain scar after stroke.”
The brain has a limited capacity for recovery after stroke and other diseases. Unlike some other organs in the body, such as the liver or skin, the brain does not regenerate new connections, blood vessels or new tissue structures. Tissue that dies in the brain from stroke is absorbed, leaving a cavity, devoid of blood vessels, neurons or axons, the thin nerve fibers that project from neurons.
After 16 weeks, stroke cavities in mice contained regenerated brain tissue, including new neural networks — a result that had not been seen before. The mice with new neurons showed improved motor behavior, though the exact mechanism wasn’t clear.
While my main interest is the group’s temporary art gallery, I am providing a brief explanatory introduction and a couple of previews for SIGGRAPH 2018.
For anyone unfamiliar with the Special Interest Group on Computer GRAPHics and Interactive Techniques (SIGGRAPH) and its conferences, from the SIGGRAPH Wikipedia entry (Note: Links have been removed),
Some highlights of the conference are its Animation Theater and Electronic Theater presentations, where recently created CG films are played. There is a large exhibition floor, where several hundred companies set up elaborate booths and compete for attention and recruits. Most of the companies are in the engineering, graphics, motion picture, or video game industries. There are also many booths for schools which specialize in computer graphics or interactivity.
Dozens of research papers are presented each year, and SIGGRAPH is widely considered the most prestigious forum for the publication of computer graphics research. The recent paper acceptance rate for SIGGRAPH has been less than 26%. The submitted papers are peer-reviewed in a single-blind process. There has been some criticism about the preference of SIGGRAPH paper reviewers for novel results rather than useful incremental progress. …
This is the third SIGGRAPH Vancouver has hosted; the others were in 2011 and 2014. The theme for the 2018 iteration is ‘Generations’; here’s more about it from an Aug. 2, 2018 article by Terry Flores for Variety,
While its focus is firmly forward thinking, SIGGRAPH 2018, the computer graphics, animation, virtual reality, games, digital art, mixed reality, and emerging technologies conference, is also tipping its hat to the past thanks to its theme this year: Generations. The conference runs Aug. 12-16 in Vancouver, B.C.
“In the literal people sense, pioneers in the computer graphics industry are standing shoulder to shoulder with researchers, practitioners and the future of the industry — young people — mentoring them, dabbling across multiple disciplines to innovate, relate, and grow,” says SIGGRAPH 2018 conference chair Roy C. Anthony, VP of creative development and operations at software and technology firm Ventuz. “This is really what SIGGRAPH has always been about. Generations really seemed like a very appropriate way of looking back and remembering where we all came from and how far we’ve come.”
SIGGRAPH 2018 has a number of treats in store for attendees, including the debut of Disney’s first VR film, the short “Cycles”; production sessions on the making of “Blade Runner 2049,” “Game of Thrones,” “Incredibles 2” and “Avengers: Infinity War”; as well as sneak peeks of Disney’s upcoming “Ralph Breaks the Internet: Wreck-It Ralph 2” and Laika’s “Missing Link.”
That list of ‘treats’ in the last paragraph makes the conference seem more like an iteration of a ‘comic-con’ than a technology conference.
CHICAGO–In the burgeoning world of virtual reality (VR) technology, it remains a challenge to provide users with a realistic perception of infinite space and natural walking capabilities in the virtual environment. A team of computer scientists has introduced a new approach to address this problem by leveraging a natural human phenomenon: eye blinks.
All humans are functionally blind for about 10 percent of the time under normal circumstances due to eye blinks and saccades, the rapid movements of the eye between two points or objects. Eye blinks are a common and natural cause of so-called “change blindness,” the inability of humans to notice changes to a visual scene. Zeroing in on eye blinks and change blindness, the team has devised a novel computational system that redirects the user in the virtual environment during these natural instances, with camera movements small enough to go undetected.
“Previous RDW [redirected walking] techniques apply rotations continuously while the user is walking. But the amount of unnoticeable rotations is limited,” notes Eike Langbehn, lead author of the research and doctoral candidate at the University of Hamburg. “That’s why an orthogonal approach is needed–we add some additional rotations when the user is not focused on the visuals. When we learned that humans are functionally blind for some time due to blinks, we thought, ‘Why don’t we do the redirection during eye blinks?'”
Human eye blinks occur approximately 10 to 20 times per minute, about every 4 to 19 seconds. Leveraging this window of opportunity–where humans are unable to detect major motion changes while in a virtual environment–the researchers devised an approach to synchronize a computer graphics rendering system with this visual process, and introduce any useful motion changes in virtual scenes to enhance users’ overall VR experience.
The researchers’ experiments revealed that imperceptible camera rotations of 2 to 5 degrees and translations of 4 to 9 cm of the user’s viewpoint are possible during a blink without users noticing. They tracked participants’ eye blinks with an eye tracker in a VR head-mounted display. In a confirmatory study, the team validated that participants could not reliably detect in which of two eye blinks their viewpoint was manipulated while walking a curved path in VR. The tests relied on unconscious natural blinking, but the researchers say the redirection could also be triggered deliberately: since users can blink on demand without much effort, eye blinks offer great potential as an intentional trigger in their approach.
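The mechanism can be sketched with the thresholds reported above (rotations of up to 5 degrees and translations of up to 9 cm per blink). This is a minimal illustration under my own assumptions; the blink signal would come from the headset's eye tracker, and the function and parameter names are hypothetical, not from the paper's implementation.

```python
import math

MAX_ROT_DEG = 5.0   # imperceptible yaw rotation per blink (per the study)
MAX_TRANS_M = 0.09  # imperceptible viewpoint translation per blink


def redirect_on_blink(camera_yaw_deg, camera_pos, desired_yaw_deg, blink):
    """Nudge the virtual camera toward the desired heading, but only
    during a blink and only within the imperceptibility limits."""
    if not blink:
        return camera_yaw_deg, camera_pos
    # Clamp the correction to what a single blink can hide.
    delta = max(-MAX_ROT_DEG, min(MAX_ROT_DEG, desired_yaw_deg - camera_yaw_deg))
    new_yaw = camera_yaw_deg + delta
    # Slide the viewpoint along the new heading by at most MAX_TRANS_M.
    rad = math.radians(new_yaw)
    new_pos = (camera_pos[0] + MAX_TRANS_M * math.cos(rad),
               camera_pos[1] + MAX_TRANS_M * math.sin(rad))
    return new_yaw, new_pos


# A 12-degree total correction accumulates over three blinks: 5 + 5 + 2.
yaw, pos = 0.0, (0.0, 0.0)
for _ in range(3):
    yaw, pos = redirect_on_blink(yaw, pos, 12.0, blink=True)
print(yaw)  # 12.0
```

Spread over the 10 to 20 blinks that occur naturally each minute, these small per-blink corrections are what let the system steer a physically confined walker along an apparently unbounded virtual path.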
The team will present their work at SIGGRAPH 2018, held 12-16 August in Vancouver, British Columbia. The annual conference and exhibition showcases the world’s leading professionals, academics, and creative minds at the forefront of computer graphics and interactive techniques.
“RDW is a big challenge since current techniques still need too much space to enable unlimited walking in VR,” notes Langbehn. “Our work might contribute to a reduction of space since we found out that unnoticeable rotations of up to five degrees are possible during blinks. This means we can improve the performance of RDW by approximately 50 percent.”
The team’s results could be used in combination with other VR research, such as novel steering algorithms, improved path prediction, and rotations during saccades, to name a few. Down the road, such techniques could someday enable consumer VR users to virtually walk beyond their living room.
Langbehn collaborated on the work with Frank Steinicke of University of Hamburg, Markus Lappe of University of Muenster, Gregory F. Welch of University of Central Florida, and Gerd Bruder, also of University of Central Florida. For the full paper and video, visit the team’s project page.
About ACM, ACM SIGGRAPH, and SIGGRAPH 2018
ACM, the Association for Computing Machinery, is the world’s largest educational and scientific computing society, uniting educators, researchers, and professionals to inspire dialogue, share resources, and address the field’s challenges. ACM SIGGRAPH is a special interest group within ACM that serves as an interdisciplinary community for members in research, technology, and applications in computer graphics and interactive techniques. SIGGRAPH is the world’s leading annual interdisciplinary educational experience showcasing the latest in computer graphics and interactive techniques. SIGGRAPH 2018, marking the 45th annual conference hosted by ACM SIGGRAPH, will take place from 12-16 August at the Vancouver Convention Centre in Vancouver, B.C.
They have provided an image illustrating what they mean (I don’t find it especially informative),
Caption: The viewing behavior of a virtual reality user, including fixations (in green) and saccades (in red). A blink fully suppresses visual perception. Credit: Eike Langbehn
Walt Disney Animation Studios will debut its first ever virtual reality short film at SIGGRAPH 2018, and the hope is viewers will walk away feeling connected to the characters as equally as they will with the VR technology involved in making the film.
Cycles, an experimental film directed by Jeff Gipson, centers around the true meaning of creating a home and the life it holds inside its walls. The idea for the film is personal, inspired by Gipson’s childhood spending time with his grandparents and creating memories in their home, and later, having to move them to an assisted living residence.
“Every house has a story unique to the people, the characters who live there,” says Gipson. “We wanted to create a story in this single place and be able to have the viewer witness life happening around them. It is an emotionally driven film, expressing the real ups and downs, the happy and sad moments in life.”
For Cycles, Gipson also drew from his past life as an architect, having spent several years designing skate parks, and from his passion for action sports, including freestyle BMX. In Los Angeles, where Gipson lives, it is not unusual to find homes with an empty swimming pool reserved for skating or freestyle biking. Part of the pitch for Cycles came out of Gipson’s experience riding in these empty pools and being curious about the homes attached to them, the families who lived there, and the memories they made.
SIGGRAPH attendees will have the opportunity to experience Cycles at the Immersive Pavilion, a new space for this year’s conference. The Pavilion is devoted exclusively to virtual, augmented, and mixed reality and will contain: the VR Theater, a storytelling extravaganza that is part of the Computer Animation Festival; the Vrcade, a space for VR, AR, and MR games or experiences; and the well-known Village, for showcasing large-scale projects. SIGGRAPH 2018, held 12-16 August in Vancouver, British Columbia, is an annual gathering that showcases the world’s leading professionals, academics, and creative minds at the forefront of computer graphics and interactive techniques.
The production team completed Cycles in four months with about 50 collaborators as part of a professional development program at the studio. A key difference in VR filmmaking includes getting creative with how to translate a story to the VR “screen.” Pre-visualizing the narrative, for one, was a challenge. Rather than traditional storyboarding, Gipson and his team instead used a mix of Quill VR painting techniques and motion capture to “storyboard” Cycles, incorporating painters and artists to generate sculptures or 3D models of characters early on and draw scenes for the VR space. The creators also got innovative with the use of light and color saturation in scenes to help guide the user’s eyes during the film.
“What’s cool for VR is that we are really on the edge of trying to figure out what it is and how to tell stories in this new medium,” says Gipson. “In VR, you can look anywhere and really be transported to a different world, experience it from different angles, and see every detail. We want people watching to feel alive and feel emotion, and give them a true cinematic experience.”
This is Gipson’s VR directorial debut. He joined Walt Disney Animation Studios in 2013, serving as a lighting artist on Disney favorites like Frozen, Zootopia, and Moana. Of getting to direct the studio’s first VR short, he says, “VR is an amazing technology and a lot of times the technology is what is really celebrated. We hope more and more people begin to see the emotional weight of VR films, and with Cycles in particular, we hope they will feel the emotions we aimed to convey with our story.”
Apparently this is a still from the ‘short’,
Caption: Disney Animation Studios will present ‘Cycles’, its first virtual reality (VR) short, at ACM SIGGRAPH 2018. Credit: Disney Animation Studios
Google has unveiled a new virtual reality (VR) immersive experience based on a novel system that captures and renders high-quality, realistic images from the real world using light fields. Created by a team of leading researchers at Google, Welcome to Light Fields is the tech giant’s splash into the nascent arena of light fields VR experiences, an exciting corner of VR video technology gaining traction for its promise to deliver extremely high-quality imagery and experiences in the virtual world.
Google released Welcome to Light Fields earlier this year as a free app on Steam VR for HTC Vive, Oculus Rift, and Windows Mixed Reality headsets. The creators will demonstrate the VR experience at SIGGRAPH 2018, in the Immersive Pavilion, a new space for this year’s conference. The Pavilion is devoted exclusively to virtual, augmented, and mixed reality and will contain: the Vrcade, a space for VR, AR, and MR games or experiences; the VR Theater, a storytelling extravaganza that is part of the Computer Animation Festival; and the well-known Village, for showcasing large-scale projects. SIGGRAPH 2018, held 12-16 August in Vancouver, British Columbia, is an annual gathering that showcases the world’s leading professionals, academics, and creative minds at the forefront of computer graphics and interactive techniques.
Destinations in Welcome to Light Fields include NASA’s Space Shuttle Discovery, delivering to viewers an astronaut’s view inside the flight deck, which has never been open to the public; the pristine teak and mahogany interiors of the Gamble House, an architectural treasure in Pasadena, CA; and the glorious St. Stephen’s Church in Granada Hills, CA, home to a stunning wall of more than 14,000 pieces of glimmering stained glass.
“I love that light fields in VR can teleport you to exotic places in the real world, and truly make you believe you are there,” says Ryan Overbeck, software engineer at Google who co-led the project. “To me, this is magic.”
To bring this experience to life, Overbeck worked with a team that included Paul Debevec, senior staff engineer at Google, who managed the project and led the hardware piece with engineers Xueming Yu, Jay Busch, and Graham Fyffe. With Overbeck, Daniel Erickson and Daniel Evangelakos focused on the software end. The researchers designed a comprehensive system for capturing and rendering high-quality, spherical light field still images from footage captured in the real world. They developed two easy-to-use light field camera rigs, based on the GoPro Hero4 action sports camera, that efficiently capture thousands of images on the surface of a sphere. Those images were then passed through a cloud-based light-field-processing pipeline.
Among other things, explains Overbeck, “The processing pipeline uses computer vision to place the images in 3D and generate depth maps, and we use a modified version of our vp9 video codec to compress the light field data down to a manageable size.” To render a light field dataset, he notes, the team used a rendering algorithm that blends between the thousands of light field images in real-time.
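To make that “blending” idea a little more concrete: the heart of any light field renderer is deciding, for each view the headset requests, how much each captured image should contribute. Here is a toy sketch of view-dependent blending (my own simplification for illustration only; Google’s actual algorithm blends per-ray using the depth maps and is far more sophisticated),

```python
import numpy as np

def blend_weights(camera_dirs, view_dir, k=3, eps=1e-6):
    """Toy view-dependent blending: weight the k captured images whose
    camera directions are closest (in angle) to the requested view ray,
    with weights falling off as angular distance grows."""
    camera_dirs = camera_dirs / np.linalg.norm(camera_dirs, axis=1, keepdims=True)
    view_dir = view_dir / np.linalg.norm(view_dir)
    # Angular distance from each capture position to the requested view
    ang = np.arccos(np.clip(camera_dirs @ view_dir, -1.0, 1.0))
    nearest = np.argsort(ang)[:k]
    w = 1.0 / (ang[nearest] + eps)   # closer captures dominate the blend
    w /= w.sum()
    weights = np.zeros(len(camera_dirs))
    weights[nearest] = w
    return weights

# Example: three capture positions on a sphere, view ray pointing along +x
cams = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.7, 0.7, 0.0]])
w = blend_weights(cams, np.array([1.0, 0.0, 0.0]), k=2)
```

The closest capture dominates, which hints at why the rigs need thousands of images on the sphere: the denser the capture, the smaller the angular gap any requested view has to bridge.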
The team relied on Google’s talented pool of engineers in computer vision, graphics, video compression, and machine learning to overcome the unique challenges posed in light fields technology. They also collaborated closely with the WebM team (who make the vp9 video codec) to develop the high-quality light field compression format incorporated into their system, and leaned heavily on the expertise of the Jump VR team to help pose the images and generate depth maps. (Jump is Google’s professional VR system for achieving 3D-360 video production at scale.)
Indeed, with Welcome to Light Fields, the Google team is demonstrating the potential and promise of light field VR technology, showcasing its ability to provide a truly immersive experience with an unmatched level of realism. Though light fields have been researched and explored in computer graphics for more than 30 years, practical systems for actually delivering high-quality light field experiences have not been possible until now.
Part of the team’s motivation behind creating this VR light field experience is to invigorate the nascent field.
“Welcome to Light Fields proves that it is now possible to make a compelling light field VR viewer that runs on consumer-grade hardware, and we hope that this knowledge will encourage others to get involved with building light field technology and media,” says Overbeck. “We understand that in order to eventually make compelling consumer products based on light fields, we need a thriving light field ecosystem. We need open light field codecs, we need artists creating beautiful light field imagery, and we need people using VR in order to engage with light fields.”
I don’t really understand why this image, which looks like something that belongs in advertising material, would be chosen to accompany a news release on a science-based distribution outlet,
Caption: A team of leading researchers at Google will unveil the new immersive virtual reality (VR) experience “Welcome to Light Fields” at ACM SIGGRAPH 2018. Credit: Image courtesy of Google/Overbeck
Advances in computer-generated imagery have brought vivid, realistic animations to life, but the sounds associated with what we see simulated on screen, such as two objects colliding, are often recordings. Now researchers at Stanford University have developed a system that automatically renders accurate sounds for a wide variety of animated phenomena.
“There’s been a Holy Grail in computing of being able to simulate reality for humans. We can animate scenes and render them visually with physics and computer graphics, but, as for sounds, they are usually made up,” said Doug James, professor of computer science at Stanford University. “Currently there exists no way to generate realistic synchronized sounds for complex animated content, such as splashing water or colliding objects, automatically. This fills that void.”
The researchers will present their work on this sound synthesis system as part of ACM SIGGRAPH 2018, the leading conference on computer graphics and interactive techniques. In addition to enlivening movies and virtual reality worlds, this system could also help engineering companies prototype how products would sound before being physically produced, and hopefully encourage designs that are quieter and less irritating, the researchers said.
“I’ve spent years trying to solve partial differential equations – which govern how sound propagates – by hand,” said Jui-Hsien Wang, a graduate student in James’ lab and in the Institute for Computational and Mathematical Engineering (ICME), and lead author of the paper. “This is actually a place where you don’t just solve the equation but you can actually hear it once you’ve done it. That’s really exciting to me and it’s fun.”
Informed by geometry and physical motion, the system figures out the vibrations of each object and how, like a loudspeaker, those vibrations excite sound waves. It computes the pressure waves cast off by rapidly moving and vibrating surfaces but does not replicate room acoustics. So, although it does not recreate the echoes in a grand cathedral, it can resolve detailed sounds from scenarios like a crashing cymbal, an upside-down bowl spinning to a stop, a glass filling up with water or a virtual character talking into a megaphone.
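For a rough feel of how “vibrations excite sound waves,” here is the textbook modal-synthesis simplification: a struck object rings as a sum of exponentially decaying sinusoids, one per vibration mode. (This is emphatically not the Stanford system, which solves the acoustic wave equation directly rather than summing precomputed modes, but it shows the basic vibration-to-waveform idea; the mode frequencies below are made up.)

```python
import numpy as np

def modal_impulse_sound(freqs_hz, dampings, amps, duration_s=0.5, sr=44100):
    """Classic modal synthesis: an object struck at t=0 rings as a sum of
    exponentially decaying sinusoids, one per vibration mode."""
    t = np.arange(int(duration_s * sr)) / sr
    wave = np.zeros_like(t)
    for f, d, a in zip(freqs_hz, dampings, amps):
        # Each mode: amplitude * decay envelope * sinusoid at the mode frequency
        wave += a * np.exp(-d * t) * np.sin(2 * np.pi * f * t)
    peak = np.max(np.abs(wave))
    return wave / peak if peak > 0 else wave  # normalize to [-1, 1]

# Hypothetical modes for a small struck metal bowl
snd = modal_impulse_sound([523.0, 1410.0, 2650.0], [6.0, 9.0, 14.0], [1.0, 0.5, 0.3])
```

Writing `snd` to a WAV file at 44.1 kHz would produce a short metallic “ting” that decays over half a second; the Stanford work goes much further by computing how such vibrations actually radiate as pressure waves through the scene.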
Most sounds associated with animations rely on pre-recorded clips, which require vast manual effort to synchronize with the action on-screen. These clips are also restricted to noises that already exist – they can’t predict anything new. Other systems that can produce sounds as accurate as those of James and his team work only in special cases, or assume the geometry doesn’t deform very much. They also require a long pre-computation phase for each separate object.
“Ours is essentially just a render button with minimal pre-processing that treats all objects together in one acoustic wave simulation,” said Ante Qu, a graduate student in James’ lab and co-author of the paper.
The simulated sound that results from this method is highly detailed. It not only takes into account the sound waves produced by each object in an animation, but also predicts how those waves bend, bounce or deaden based on their interactions with other objects and sound waves in the scene.
In its current form, the group’s process takes a while to create the finished product. But, now that they have proven this technique’s potential, they can focus on performance optimizations, such as implementing their method on parallel GPU hardware, that should make it drastically faster.
And, even in its current state, the results are worth the wait.
“The first water sounds we generated with the system were among the best ones we had simulated – and water is a huge challenge in computer-generated sound,” said James. “We thought we might get a little improvement, but it is dramatically better than previous approaches even right out of the box. It was really striking.”
Although the group’s work has faithfully rendered sounds of various objects spinning, falling and banging into each other, more complex objects and interactions – like the reverberating tones of a Stradivarius violin – remain difficult to model realistically. That, the group said, will have to wait for a future solution.
Timothy Langlois of Adobe Research is a co-author of this paper. This research was funded by the National Science Foundation and Adobe Research. James is also a professor, by courtesy, of music and a member of Stanford Bio-X.
Researchers Timothy Langlois, Doug L. James, Ante Qu and Jui-Hsien Wang have created a video featuring highlights of animations with sounds synthesized using the Stanford researchers’ new system,
The researchers have also provided this image,
By computing pressure waves cast off by rapidly moving and vibrating surfaces – such as a cymbal – a new sound synthesis system developed by Stanford researchers can automatically render realistic sound for computer animations. (Image credit: Timothy Langlois, Doug L. James, Ante Qu and Jui-Hsien Wang)
It does seem like we’re synthesizing the world around us, eh?
SIGGRAPH 2018, the world’s leading showcase of digital art created using computer graphics and interactive techniques, will present a special Art Gallery, entitled “Origins,” and historic Art Papers in Vancouver, B.C. The 45th SIGGRAPH conference will take place 12–16 August at the Vancouver Convention Centre. The programs will also honor the generations of creators that have come before through a special, 50th anniversary edition of the Leonardo journal. To register for the conference, visit S2018.SIGGRAPH.ORG.
The SIGGRAPH 2018 ART GALLERY is a curated exhibition, conceived as a dialogical space that enables the viewer to reflect on man’s diverse cultural values and rituals through contemporary creative practices. Building upon an exciting and eclectic selection of creative practices mediated through technologies that represent the sophistication of our times, the SIGGRAPH 2018 Art Gallery will embrace the narratives of the indigenous communities based near Vancouver and throughout Canada as a source of inspiration. The exhibition will feature contemporary media artworks, art pieces by indigenous communities, and other traces of technologically mediated ludic practices.
Andrés Burbano, SIGGRAPH 2018 Art Gallery chair and professor at Universidad de los Andes, said, “The Art Gallery aims to articulate myth and technology, science and art, the deep past and the computational present, and will coalesce around a theme of ‘Origins.’ Media and technological creative expressions will explore principles such as the origins of the cosmos, the origins of life, the origins of human presence, the origins of the occupation of territories in the Americas, and the origins of people living in the vast territories of the Arctic.”
He continued, “The venue [in Vancouver] hopes to rekindle the original spark that ignited the collaborative spirit of the SIGGRAPH community of engineers, scientists, and artists, who came together to create the very first conference in the early 1970s.”
Highlights from the 2018 Art Gallery include:
Transformation Mask (Canada) [Technology Based]
Shawn Hunt, independent; and Microsoft Garage: Andy Klein, Robert Butterworth, Jonathan Cobb, Jeremy Kersey, Stacey Mulcahy, Brendan O’Rourke, Brent Silk, and Julia Taylor-Hell, Microsoft Vancouver
TRANSFORMATION MASK is an interactive installation that features the Microsoft HoloLens. It utilizes electronics and mechanical engineering to express a physical and digital transformation. Participants are immersed in spatial sounds and holographic visuals.
Somnium (U.S.) [Science Based]
Marko Peljhan, Danny Bazo, and Karl Yerkes, University of California, Santa Barbara
Somnium is a cybernetic installation that provides visitors with the ability to sensorily, cognitively, and emotionally contemplate and experience exoplanetary discoveries, their macro and micro dimensions, and the potential for life in our Galaxy. Some might call it a “space telescope.”
Ernest Edmonds Retrospective – Art Systems 1968-2018 (United Kingdom) [History Based]
Ernest Edmonds, De Montfort University
Celebrating one of the pioneers of computer graphics-based art since the early 1970s, this Ernest Edmonds career retrospective will showcase snapshots of Edmonds’ work as it developed over the years. With one piece from each decade, the retrospective will also demonstrate how vital the Leonardo journal has been throughout the 50-year journey.
In addition to the works above, the Art Gallery will feature pieces from notable female artists Ozge Samanci, Ruth West, and Nicole L’Hullier. For more information about the Edmonds retrospective, read THIS POST ON THE ACM SIGGRAPH BLOG.
The SIGGRAPH 2018 ART PAPERS program is designed to feature research from artists, scientists, theorists, technologists, historians, and more in one of four categories: project description, theory/criticism, methods, or history. The chosen work was selected by an international jury of scholars, artists, and immersive technology developers.
To celebrate the 50th anniversary of LEONARDO (MIT Press), and 10 years of its annual SIGGRAPH issue, SIGGRAPH 2018 is pleased to announce a special anniversary edition of the journal, which will feature the 2018 art papers. For 50 years, Leonardo has been the definitive publication for artist-academics. To learn more about the relationship between SIGGRAPH and the journal, listen to THIS EPISODE OF THE SIGGRAPH SPOTLIGHT PODCAST.
“In order to encourage a wider range of topics, we introduced a new submission type, short papers. This enabled us to accept more content than in previous years. Additionally, for the first time, we will introduce sessions that integrate the Art Gallery artist talks with Art Papers talks, promoting richer connections between these two creative communities,” said Angus Forbes, SIGGRAPH 2018 Art Papers chair and professor at University of California, Santa Cruz.
Art Papers highlights include:
Alienating the Familiar with CGI: A Recipe for Making a Full CGI Art House Animated Feature [Long]
Alex Counsell and Paul Charisse, University of Portsmouth
This paper explores the process of making and funding an art house feature film using full CGI in a marketplace where this has never been attempted. It explores cutting-edge technology and production approaches, as well as routes to successful fundraising.
Augmented Fauna and Glass Mutations: A Dialogue Between Material and Technique in Glassblowing and 3D Printing [Long]
Tobias Klein, City University of Hong Kong
The two presented artworks, “Augmented Fauna” and “Glass Mutations,” were created during an artist residency at the PILCHUCK GLASS SCHOOL. They are examples of the qualities and methods established through a synthesis of digital workflows and traditional craft processes, and thus formulate the notion of digital craftsmanship.
Inhabitat: An Imaginary Ecosystem in a Children’s Science Museum [Short]
Graham Wakefield, York University, and Haru Hyunkyung Ji, OCAD University
“Inhabitat” is a mixed reality artwork in which participants become part of an imaginary ecology through three simultaneous perspectives of scale and agency; three distinct ways to see with other eyes. This imaginary world was exhibited at a children’s science museum for five months, using an interactive projection-augmented sculpture, a large screen and speaker array, and a virtual reality head-mounted display.
What’s the what?
My father used to say that and I always assumed it meant summarize the high points, if you need to, and get to the point—fast. In that spirit, I am both fascinated and mildly appalled. The virtual, mixed, and augmented reality technologies, as well as the others being featured at SIGGRAPH 2018, are wondrous in many ways, but it seems we are coming ever closer to a world where we no longer interact with nature or other humans directly. (See my August 10, 2018 posting about the ‘extinction of experience’ for research that encourages more direct interaction with nature.) I realize that SIGGRAPH is intended as a primarily technical experience, but I think a little more content questioning these technologies and their applications (social implications) might be in order. That’s often the artist’s role, but I can’t see anything in the art gallery descriptions that hints at any sort of fundamental critique.
Notanee Bourassa knew that what he was seeing in the night sky was not normal. Bourassa, an IT technician in Regina, Canada, trekked outside of his home on July 25, 2016, around midnight with his two younger children to show them a beautiful moving light display in the sky — an aurora borealis. He often sky gazes until the early hours of the morning to photograph the aurora with his Nikon camera, but this was his first expedition with his children. When a thin purple ribbon of light appeared and started glowing, Bourassa immediately snapped pictures until the light particles disappeared 20 minutes later. Having watched the northern lights for almost 30 years since he was a teenager, he knew this wasn’t an aurora. It was something else.
From 2015 to 2016, citizen scientists — people like Bourassa who are excited about a science field but don’t necessarily have a formal educational background — shared 30 reports of these mysterious lights in online forums and with a team of scientists that run a project called Aurorasaurus. The citizen science project, funded by NASA and the National Science Foundation, tracks the aurora borealis through user-submitted reports and tweets.
The Aurorasaurus team, led by Liz MacDonald, a space scientist at NASA’s Goddard Space Flight Center in Greenbelt, Maryland, conferred to determine the identity of this mysterious phenomenon. MacDonald and her colleague Eric Donovan at the University of Calgary in Canada talked with the main contributors of these images, amateur photographers in a Facebook group called Alberta Aurora Chasers, which included Bourassa and lead administrator Chris Ratzlaff. Ratzlaff gave the phenomenon a fun, new name, Steve, and it stuck.
But people still didn’t know what it was.
Scientists’ understanding of Steve changed that night Bourassa snapped his pictures. Bourassa wasn’t the only one observing Steve. Ground-based cameras called all-sky cameras, run by the University of Calgary and University of California, Berkeley, took pictures of large areas of the sky and captured Steve and the auroral display far to the north. From space, ESA’s (the European Space Agency) Swarm satellite just happened to be passing over the exact area at the same time and documented Steve.
For the first time, scientists had ground and satellite views of Steve. Scientists have now learned, despite its ordinary name, that Steve may be an extraordinary puzzle piece in painting a better picture of how Earth’s magnetic fields function and interact with charged particles in space. The findings are published in a study released today in Science Advances.
“This is a light display that we can observe over thousands of kilometers from the ground,” said MacDonald. “It corresponds to something happening way out in space. Gathering more data points on STEVE will help us understand more about its behavior and its influence on space weather.”
The study highlights one key quality of Steve: Steve is not a normal aurora. Auroras occur globally in an oval shape, last hours and appear primarily in greens, blues and reds. Citizen science reports showed Steve is purple with a green picket fence structure that waves. It is a line with a beginning and end. People have observed Steve for 20 minutes to 1 hour before it disappears.
If anything, auroras and Steve are different flavors of ice cream, said MacDonald. They are both created in generally the same way: Charged particles from the Sun interact with Earth’s magnetic field lines.
The uniqueness of Steve is in the details. While Steve goes through the same large-scale creation process as an aurora, it travels along different magnetic field lines than the aurora. All-sky cameras showed that Steve appears at much lower latitudes. That means the charged particles that create Steve connect to magnetic field lines that are closer to Earth’s equator, which is why Steve is often seen in southern Canada.
Perhaps the biggest surprise about Steve appeared in the satellite data. The data showed that Steve comprises a fast moving stream of extremely hot particles called a sub auroral ion drift, or SAID. Scientists have studied SAIDs since the 1970s but never knew there was an accompanying visual effect. The Swarm satellite recorded information on the charged particles’ speeds and temperatures, but does not have an imager aboard.
“People have studied a lot of SAIDs, but we never knew it had a visible light. Now our cameras are sensitive enough to pick it up and people’s eyes and intellect were critical in noticing its importance,” said Donovan, a co-author of the study. Donovan led the all-sky camera network and his Calgary colleagues lead the electric field instruments on the Swarm satellite.
Steve is an important discovery because of its location in the sub auroral zone, a little-researched area at lower latitudes than where most auroras appear. For one, with this discovery, scientists now know there are unknown chemical processes taking place in the sub auroral zone that can lead to this light emission.
Second, Steve consistently appears in the presence of auroras, which usually occur at a higher latitude area called the auroral zone. That means there is something happening in near-Earth space that leads to both an aurora and Steve. Steve might be the only visual clue that exists to show a chemical or physical connection between the higher latitude auroral zone and lower latitude sub auroral zone, said MacDonald.
“Steve can help us understand how the chemical and physical processes in Earth’s upper atmosphere can sometimes have local noticeable effects in lower parts of Earth’s atmosphere,” said MacDonald. “This provides good insight on how Earth’s system works as a whole.”
The team can learn a lot about Steve with additional ground and satellite reports, but recording Steve from the ground and space simultaneously is a rare occurrence. Each Swarm satellite orbits Earth every 90 minutes and Steve only lasts up to an hour in a specific area. If the satellite misses Steve as it circles Earth, Steve will probably be gone by the time that same satellite crosses the spot again.
In the end, capturing Steve becomes a game of perseverance and probability.
“It is my hope that with our timely reporting of sightings, researchers can study the data so we can together unravel the mystery of Steve’s origin, creation, physics and sporadic nature,” said Bourassa. “This is exciting because the more I learn about it, the more questions I have.”
As for the name “Steve” given by the citizen scientists? The team is keeping it as an homage to its initial name and discoverers. But now it is STEVE, short for Strong Thermal Emission Velocity Enhancement.
Other collaborators on this work are: the University of Calgary, New Mexico Consortium, Boston University, Lancaster University, Athabasca University, Los Alamos National Laboratory and the Alberta Aurora Chasers Facebook group.
If you live in an area where you may see STEVE or an aurora, submit your pictures and reports to Aurorasaurus through aurorasaurus.org or the free iOS and Android mobile apps. To learn how to spot STEVE, click here.
There is a video with MacDonald describing the work and featuring more images,
Citizen scientists first began posting about Steve on social media several years ago. Across New Zealand, Canada, the United States, and the United Kingdom, they reported an unusual sight in the night sky: a purplish line that arced across the heavens for about an hour at a time, visible at lower latitudes than classical aurorae, mostly in the spring and fall. … “It’s similar to a contrail but doesn’t disperse,” says Notanee Bourassa, an aurora photographer in Saskatchewan province in Canada [Regina as mentioned in the news release is the capital of the province of Saskatchewan].
Traditional aurorae are often green, because oxygen atoms present in Earth’s atmosphere emit that color light when they’re bombarded by charged particles trapped in Earth’s magnetic field. They also appear as a diffuse glow—rather than a distinct line—on the northern or southern horizon. Without a scientific theory to explain the new sight, a group of citizen scientists led by aurora enthusiast Chris Ratzlaff of Canada’s Alberta province [usually referred to as Canada’s province of Alberta or simply, the province of Alberta] playfully dubbed it Steve, after a line in the 2006 children’s movie Over the Hedge.
Aurorae have been studied for decades, but people may have missed Steve because their cameras weren’t sensitive enough, says Elizabeth MacDonald, a space physicist at NASA Goddard Space Flight Center in Greenbelt, Maryland, and leader of the new research. MacDonald and her team have used data from a European satellite called Swarm-A to study Steve in its native environment, about 200 kilometers up in the atmosphere. Swarm-A’s instruments revealed that the charged particles in Steve had a temperature of about 6000°C, “impressively hot” compared with the nearby atmosphere, MacDonald says. And those ions were flowing from east to west at nearly 6 kilometers per second, …
This paper is open access. You’ll note that Notanee Bourassa is listed as an author. For more about Bourassa, there’s his Twitter feed (@DJHardwired) and his YouTube Channel. BTW, his Twitter bio notes that he’s “Recently heartbroken,” as well as a “Seasoned human male. Expert storm chaser, aurora photographer, drone flyer and on-air FM radio DJ.” Make of that what you will.
It would be nice if they had some video of people navigating with the help of this ‘smart’ paint. Perhaps one day. Meanwhile, Adele Peters in her March 7, 2018 article for Fast Company provides a vivid description of how a sight-impaired or blind person could navigate more safely and easily,
The crosswalk on a road in front of the Ohio State School for the Blind looks like one that might be found at any intersection. But the white stripes at the edges are made with “smart paint”–and if a student who is visually impaired crosses while using a cane with a new smart tip, the cane will vibrate when it touches the lines.
The paint uses rare-earth nanocrystals that can emit a unique light signature, which a sensor added to the tip of a cane can activate and then read. “If you pulse a laser or LED into these materials, they’ll pulse back at you at a very specific frequency,” says Josh Collins, chief technology officer at Intelligent Materials [sic], the company that manufactures the oxides that can be added to paint.
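In signal-processing terms, the cane tip’s job is frequency matching: excite the paint, then check whether the returned light pulses at a known signature frequency. Here is a toy sketch of that matching step (entirely hypothetical frequencies and a plain FFT; the real optical signatures and detection electronics aren’t public),

```python
import numpy as np

def classify_paint(signal, sr, signatures_hz, tol_hz=5.0):
    """Toy cane-tip detector: each 'smart paint' oxide is assumed to pulse
    back at its own characteristic frequency, so take the dominant frequency
    of the returned light signal and match it against known signatures."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), 1.0 / sr)
    dominant = freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC bin
    for name, f in signatures_hz.items():
        if abs(dominant - f) <= tol_hz:
            return name
    return None  # no paint signature recognized

# Simulated return: crosswalk paint pulsing at 440 Hz plus sensor noise
sr = 10_000
t = np.arange(sr) / sr
rng = np.random.default_rng(0)
ret = np.sin(2 * np.pi * 440 * t) + 0.1 * rng.standard_normal(sr)
label = classify_paint(ret, sr, {"crosswalk": 440.0, "entrance": 700.0})
```

Distinct signature frequencies are what would let one cane tell a crosswalk stripe from a building-entrance stripe, which fits the portal-to-portal guidance the team describes below.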
While digging down for more information, this February 12, 2018 article by Ben Levine for Government Technology Magazine was unearthed (Note: Links have been removed),
In this installment of the Innovation of the Month series (read last month’s story here), we explore the use of smart technologies to help blind and visually impaired people better navigate the world around them. A team at Ohio State University has been working on a “smart paint” application to do just that.
MetroLab’s Executive Director Ben Levine sat down with John Lannutti, professor of materials science engineering at Ohio State University; Mary Ball-Swartwout, orientation and mobility specialist at the Ohio State School for the Blind; and Josh Collins, chief technology officer at Intelligent Material to learn more.
John Lannutti (OSU): The goal of “smart paint for networked smart cities” is to assist people who are blind and visually impaired by implementing a “smart paint” technology that provides accurate location services. You might think, “Can’t GPS do that?” But, surprisingly, current GPS-based solutions actually cannot tell whether somebody is walking on the sidewalk or down the middle of the street. Meanwhile, modern urban intersections are becoming increasingly complex. That means that finding a crosswalk, aligning to cross and maintaining a consistent crossing direction while in motion can be challenging for people who are visually impaired.
And of course, crosswalks aren’t the only challenge. For example, our current mapping technologies are unable to provide the exact location of a building’s entrance. We have a technology solution to those challenges. Smart paint is created by adding exotic light-converting oxides to standard road paints. The paint is detected using a “smart cane,” a modified white cane that detects the smart paint and enables portal-to-portal guidance. The smart cane can also be used to notify vehicles — including autonomous vehicles — of a user’s presence in a crosswalk.
As part of this project, we have a whole team of educational, city and industrial partners, including:
Ohio State School for the Blind — testing and implementation of smart paint technology in Columbus involving both students and adults
Western Michigan University — implementation of smart paint technology with travelers who are blind and visually impaired to maximize orientation and mobility
Mississippi State University — the impacts of smart paint technology on mobility and employment for people who are blind and visually impaired
Columbus Smart Cities Initiative — rollout of smart paint within Columbus and the paint’s interaction with the Integrated Data Exchange (IDE), a cloud-based platform that dynamically collects user data to show technological impact
The city of Tampa, Fla. — rollout of smart paint at the Lighthouse for the Blind
The Hillsborough Area Transit Regional Authority, Hillsborough County, Fla. — integration of smart paint with existing bus lines to enable precise location determination
The American Council of the Blind — implementation of smart paint with the annual American Council of the Blind convention
MetroLab Network — smart paint implementation in city-university partnerships
Intelligent Material — manufactures and supplies the unique light-converting oxides that make the paint “smart”
Crown Technology — paint manufacturing, product evaluation and technical support
SRI International — design and manufacturing of the “smart” white cane hardware
Levine: Can you describe what this project focused on and what motivated you to address this particular challenge?
Lannutti: We have been working with Intelligent Material in integrating light-converting oxides into polymeric matrices for specific applications for several years. Intelligent Material supplies these oxides for highly specialized applications across a variety of industries, and has deep experience in filtering and processing the resulting optical outputs. They were already looking at using this technology for automotive applications when the idea to develop applications for people who are blind was introduced. We were extremely fortunate to have the Ohio State School for the Blind (OSSB) right here in Columbus and even more fortunate to have interested collaborators there who have helped us at every step of the way. They even have a room filled with previous white cane technologies; we used those to better understand what works and what doesn’t, helping refine our own product. At about this same time, the National Science Foundation released a call for Smart and Connected Communities proposals, which gave us both a goal and a “home” for this idea.
Levine: How will the tools developed in this project impact planning and the built environment?
Ball-Swartwout: One of the great things about smart paint is that it can be added to the built environment easily at little extra cost. We expect that once smart paint is widely adopted, most sighted users will not notice much difference as smart paint is not visually different from regular road paint. Some intersections might need to have more paint features that enable smart white cane-guided entry from the sidewalk into the crosswalk. Paint that tells users that they have reached their destination may become visible as horizontal stripes along modern sidewalks. These paints could be either gray or black or even invisible to sighted pedestrians, but would still be detectable by “smart” white canes to tell users that they have arrived at their destination.
Levine: Can you tell us about the new technologies that are associated with this project? Can you talk about the status quo versus your vision for the future?
Collins: Beyond the light-converting ceramics in the paint, placing a highly sensitive excitation source and detector package at the tip of a moving white cane is truly novel. Also challenging is powering this package using minimal battery weight to decrease the likelihood of wrist and upper neck fatigue.
The status quo is that the travel of citizens who are blind and visually impaired can be unpredictable. They need better technologies for routine travel and especially for travel to any new destinations. In addition, we anticipate that this technology could assist in the travel of people who have a variety of physical and cognitive impairments.
Our vision for the future of this technology is that it will be widespread and utilized constantly. Outside the U.S., Japan and Europe have integrated relatively expensive technologies into streets and sidewalks, and we see smart paint replacing that very quickly. Because the “pain” of installing smart paint is very small, we believe that grass-roots pressure will enable rapid introduction of this technology.
Levine: What was the most surprising thing you learned during this process?
Lannutti: In my mind, the most surprising thing was discovering that sound was not necessarily the best means of guiding users who are blind. This is a bias on the part of sighted individuals as we are used to beeping and buzzing noises that guide or inform us throughout our day. Pedestrians who are blind, on the other hand, need to constantly listen to aspects of their environment to successfully navigate it. For example, listening to traffic noise is extremely important to them as a means of avoiding danger. People who are blind or visually impaired cannot see but need to hear their environment. So we had to dial back our expectations regarding the utility of sound. Instead, we now focus on vibration along the white cane as a means of alerting the user.
For those interested, Levine’s article is well worth reading in its entirety.
Intelligent Material Solutions, Inc. is a privately held business headquartered in Princeton, NJ, on the SRI/Sarnoff campus, formerly RCA Labs. Our technology can be traced through scientific discoveries dating back over 50 years. We are dedicated to solving the world’s most challenging problems and, in doing so, have assembled an innovative, multidisciplinary team of leading scientists from industry and academia to ensure rapid transition from our labs to the world.
The video was published on December 6, 2017. You can find even more details at the company’s LinkedIn page.
Yet another advance toward ‘brainlike’ computing (how many times have I written this or a variation thereof in the last 10 years? See Dexter Johnson’s take on the situation at the end of this post): Northwestern University announced its latest memristor research in a February 21, 2018 news item on Nanowerk,
Computer algorithms might be performing brain-like functions, such as facial recognition and language translation, but the computers themselves have yet to operate like brains.
“Computers have separate processing and memory storage units, whereas the brain uses neurons to perform both functions,” said Northwestern University’s Mark C. Hersam. “Neural networks can achieve complicated computation with significantly lower energy consumption compared to a digital computer.”
In recent years, researchers have searched for ways to make computers more neuromorphic, or brain-like, in order to perform increasingly complicated tasks with high efficiency. Now Hersam, a Walter P. Murphy Professor of Materials Science and Engineering in Northwestern’s McCormick School of Engineering, and his team are bringing the world closer to realizing this goal.
The research team has developed a novel device called a “memtransistor,” which operates much like a neuron by performing both memory and information processing. With combined characteristics of a memristor and transistor, the memtransistor also encompasses multiple terminals that operate more similarly to a neural network.
Supported by the National Institute of Standards and Technology and the National Science Foundation, the research was published online today, February 22, in Nature. Vinod K. Sangwan and Hong-Sub Lee, postdoctoral fellows advised by Hersam, served as the paper’s co-first authors.
The memtransistor builds upon work published in 2015, in which Hersam, Sangwan, and their collaborators used single-layer molybdenum disulfide (MoS2) to create a three-terminal, gate-tunable memristor for fast, reliable digital memory storage. Memristors, short for “memory resistors,” are circuit elements that “remember” the voltage previously applied to them. Typical memristors are two-terminal electronic devices, which can only control one voltage channel. By transforming the memristor into a three-terminal device, Hersam paved the way for memristors to be used in more complex electronic circuits and systems, such as neuromorphic computing.
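The “remembering” behavior can be sketched with the classic linear-drift memristor model from the HP Labs work (mentioned again later in this post). This is a hedged illustration with made-up parameter values, not a model of the Northwestern MoS2 device:

```python
import numpy as np

# Linear-drift memristor sketch: illustrative parameters, not measured values.
R_ON, R_OFF = 100.0, 16e3   # bounding resistances (ohms)
D = 10e-9                   # film thickness (m)
MU = 1e-14                  # dopant mobility (m^2/(V*s))

def simulate(voltages, dt):
    """Step the internal state w (fraction of the film that is doped)
    through a voltage waveform. Resistance depends on the history of
    current flow -- that history is the memristor's 'memory'."""
    w, currents, states = 0.1, [], []
    for v in voltages:
        r = R_ON * w + R_OFF * (1.0 - w)   # doped and undoped regions in series
        i = v / r
        # charge flow drifts the state; clamp w to its physical range
        w = min(max(w + (MU * R_ON / D**2) * i * dt, 0.0), 1.0)
        currents.append(i)
        states.append(w)
    return np.array(currents), np.array(states)

# One cycle of a 10 Hz sine: the state drifts up on the positive half-cycle
# and partially back down on the negative one, so the device ends the cycle
# "remembering" what was applied.
t = np.linspace(0.0, 0.1, 2000)
v = np.sin(2 * np.pi * 10 * t)
currents, states = simulate(v, dt=t[1] - t[0])
```

Sweeping the voltage back and forth like this is what produces the pinched hysteresis loop usually shown as the memristor’s signature.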
To develop the memtransistor, Hersam’s team again used atomically thin MoS2 with well-defined grain boundaries, which influence the flow of current. Similar to the way fibers are arranged in wood, atoms are arranged into ordered domains – called “grains” – within a material. When a large voltage is applied, the grain boundaries facilitate atomic motion, causing a change in resistance.
“Because molybdenum disulfide is atomically thin, it is easily influenced by applied electric fields,” Hersam explained. “This property allows us to make a transistor. The memristor characteristics come from the fact that the defects in the material are relatively mobile, especially in the presence of grain boundaries.”
But unlike his previous memristor, which used individual, small flakes of MoS2, Hersam’s memtransistor makes use of a continuous film of polycrystalline MoS2 that comprises a large number of smaller flakes. This enabled the research team to scale up the device from one flake to many devices across an entire wafer.
“When the length of the device is larger than the individual grain size, you are guaranteed to have grain boundaries in every device across the wafer,” Hersam said. “Thus, we see reproducible, gate-tunable memristive responses across large arrays of devices.”
After fabricating memtransistors uniformly across an entire wafer, Hersam’s team added additional electrical contacts. Typical transistors and Hersam’s previously developed memristor each have three terminals. In their new paper, however, the team realized a seven-terminal device, in which one terminal controls the current among the other six terminals.
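As a toy abstraction (mine, not the paper’s device physics), a multi-terminal memtransistor can be pictured as a set of pairwise, non-volatile conductances that a single gate voltage modulates all at once:

```python
import itertools

class ToyMemtransistor:
    """Cartoon of a multi-terminal memtransistor: every pair of the six
    current-carrying terminals stores its own memory state, and one gate
    terminal scales all of them together. Purely illustrative."""

    def __init__(self, n_terminals=6):
        # one non-volatile state per terminal pair, like a bank of synapses
        self.state = {p: 1.0 for p in
                      itertools.combinations(range(n_terminals), 2)}

    def conductance(self, a, b, v_gate):
        """Gate voltage modulates every pairwise conductance at once."""
        return self.state[tuple(sorted((a, b)))] * max(v_gate, 0.0)

    def pulse(self, a, b, dw):
        """A programming pulse nudges one pair's stored state, which
        persists after the pulse ends (the 'mem' part)."""
        p = tuple(sorted((a, b)))
        self.state[p] = min(max(self.state[p] + dw, 0.1), 10.0)

dev = ToyMemtransistor()
dev.pulse(0, 3, +0.5)                  # strengthen one "synapse"
g = dev.conductance(0, 3, v_gate=1.0)  # strengthened pair conducts more
```

The point of the abstraction is only the topology: many independently programmable connections, one shared control terminal.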
“This is even more similar to neurons in the brain,” Hersam said, “because in the brain, we don’t usually have one neuron connected to only one other neuron. Instead, one neuron is connected to multiple other neurons to form a network. Our device structure allows multiple contacts, which is similar to the multiple synapses in neurons.”
Next, Hersam and his team are working to make the memtransistor faster and smaller. Hersam also plans to continue scaling up the device for manufacturing purposes.
“We believe that the memtransistor can be a foundational circuit element for new forms of neuromorphic computing,” he said. “However, making dozens of devices, as we have done in our paper, is different than making a billion, which is done with conventional transistor technology today. Thus far, we do not see any fundamental barriers that will prevent further scale up of our approach.”
The researchers have made this illustration available,
Caption: This is the memtransistor symbol overlaid on an artistic rendering of a hypothetical circuit layout in the shape of a brain. Credit: Hersam Research Group
From a Feb. 23, 2018 posting by Dexter Johnson on the Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website),
While this all seems promising, one of the big shortcomings in neuromorphic computing has been that it doesn’t mimic the brain in a very important way. In the brain, for every neuron there are a thousand synapses—the electrical signal sent between the neurons of the brain. This poses a problem because a transistor only has a single terminal, hardly an accommodating architecture for multiplying signals.
Now researchers at Northwestern University, led by Mark Hersam, have developed a new device that combines memristors—two-terminal non-volatile memory devices based on resistance switching—with transistors to create what Hersam and his colleagues have dubbed a “memtransistor” that performs both memory storage and information processing.
This most recent research builds on work that Hersam and his team conducted back in 2015 in which the researchers developed a three-terminal, gate-tunable memristor that operated like a kind of synapse.
While this work was recognized as mimicking the low-power computing of the human brain, critics didn’t really believe that it was acting like a neuron since it could only transmit a signal from one artificial neuron to another. This was far short of a human brain that is capable of making tens of thousands of such connections.
“Traditional memristors are two-terminal devices, whereas our memtransistors combine the non-volatility of a two-terminal memristor with the gate-tunability of a three-terminal transistor,” said Hersam to IEEE Spectrum. “Our device design accommodates additional terminals, which mimic the multiple synapses in neurons.”
Hersam believes that these unique attributes of these multi-terminal memtransistors are likely to present a range of new opportunities for non-volatile memory and neuromorphic computing.
If you have the time and the interest, Dexter’s post provides more context,
The Cajal exhibit of drawings was here in Vancouver (Canada) this last fall (2017) and I still carry the memory of that glorious experience (see my Sept. 11, 2017 posting for more about the show and associated events). It seems Cajal’s drawings drew a similar response in New York City, from a January 18, 2018 article by Roberta Smith for the New York Times,
It’s not often that you look at an exhibition with the help of the very apparatus that is its subject. But so it is with “The Beautiful Brain: The Drawings of Santiago Ramón y Cajal” at the Grey Art Gallery at New York University, one of the most unusual, ravishing exhibitions of the season.
Explorations led by local and Spanish scientists, artists, and entrepreneurs who will share their unique perspectives on particular aspects of the exhibition. (2:00 pm on select Tuesdays and Saturdays)
Tue, May 8 – Mark Harnett, Fred and Carole Middleton Career Development Professor at MIT and McGovern Institute Investigator
Sat, May 26 – Marion Boulicault, MIT Graduate Student and Neuroethics Fellow in the Center for Sensorimotor Neural Engineering
Tue, June 5 – Kelsey Allen, Graduate researcher, MIT Center for Brains, Minds, and Machines
Sat, Jun 23 – Francisco Martin-Martinez, Research Scientist in MIT’s Laboratory for Atomistic & Molecular Mechanics and President of the Spanish Foundation for Science and Technology
Jul 21 – Alex Gomez-Marin, Principal Investigator of the Behavior of Organisms Laboratory in the Instituto de Neurociencias, Spain
Tue, Jul 31 – Julie Pryor, Director of Communications at the McGovern Institute for Brain Research at MIT
Tue, Aug 28 – Satrajit Ghosh, Principal Research Scientist at the McGovern Institute for Brain Research at MIT, Assistant Professor in the Department of Otolaryngology at Harvard Medical School, and faculty member in the Speech and Hearing Biosciences and Technology program in the Harvard Division of Medical Sciences
Drop in and explore expansion microscopy in our maker-space.
Drawing of the cells of the chick cerebellum by Santiago Ramón y Cajal, from “Estructura de los centros nerviosos de las aves,” Madrid, circa 1905
Modern neuroscience, for all its complexity, can trace its roots directly to a series of pen-and-paper sketches rendered by Nobel laureate Santiago Ramón y Cajal in the late 19th and early 20th centuries.
His observations and drawings exposed the previously hidden composition of the brain, revealing neuronal cell bodies and delicate projections that connect individual neurons together into intricate networks.
As he explored the nervous systems of various organisms under his microscope, a natural question arose: What makes a human brain different from the brain of any other species?
At least part of the answer, Ramón y Cajal hypothesized, lay in a specific class of neuron—one found in a dazzling variety of shapes and patterns of connectivity, and present in higher proportions in the human brain than in the brains of other species. He dubbed them the “butterflies of the soul.”
Known as interneurons, these cells play critical roles in transmitting information between sensory and motor neurons, and, when defective, have been linked to diseases such as schizophrenia, autism and intellectual disability.
Despite more than a century of study, however, it remains unclear why interneurons are so diverse and what specific functions the different subtypes carry out.
Now, in a study published in the March 22 issue of Nature, researchers from Harvard Medical School, New York Genome Center, New York University and the Broad Institute of MIT and Harvard have detailed for the first time how interneurons emerge and diversify in the brain.
Using single-cell analysis—a technology that allows scientists to track cellular behavior one cell at a time—the team traced the lineage of interneurons from their earliest precursor states to their mature forms in mice. The researchers identified key genetic programs that determine the fate of developing interneurons, as well as when these programs are switched on or off.
The findings serve as a guide for efforts to shed light on interneuron function and may help inform new treatment strategies for disorders involving their dysfunction, the authors said.
“We knew more than 100 years ago that this huge diversity of morphologically interesting cells existed in the brain, but their specific individual roles in brain function are still largely unclear,” said co-senior author Gordon Fishell, HMS professor of neurobiology and a faculty member at the Stanley Center for Psychiatric Research at the Broad.
“Our study provides a road map for understanding how and when distinct interneuron subtypes develop, giving us unprecedented insight into the biology of these cells,” he said. “We can now investigate interneuron properties as they emerge, unlock how these important cells function and perhaps even intervene when they fail to develop correctly in neuropsychiatric disease.”
A hippocampal interneuron. Image: Biosciences Imaging Gp, Soton, Wellcome Trust via Creative Commons
Origins and Fates
In collaboration with co-senior author Rahul Satija, core faculty member of the New York Genome Center, Fishell and colleagues analyzed brain regions in developing mice known to contain precursor cells that give rise to interneurons.
Using Drop-seq, a single-cell sequencing technique created by researchers at HMS and the Broad, the team profiled gene expression in thousands of individual cells at multiple time points.
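A cartoon of the kind of analysis single-cell profiling enables (not the authors’ actual pipeline, and with synthetic data standing in for real expression measurements): rows are cells, columns are genes, and cells are grouped into putative subtypes with a bare-bones k-means.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "expression matrix": three cell populations, 50 genes each.
centers = rng.normal(0.0, 3.0, (3, 50))
cells = np.vstack([rng.normal(c, 1.0, (100, 50)) for c in centers])

def kmeans(X, k, iters=20):
    """Minimal k-means: assign each cell to its nearest centroid, then
    move each centroid to the mean of its assigned cells."""
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        labels = ((X[:, None, :] - centroids) ** 2).sum(-1).argmin(axis=1)
        # keep the old centroid if a cluster happens to empty out
        centroids = np.array([X[labels == j].mean(axis=0)
                              if np.any(labels == j) else centroids[j]
                              for j in range(k)])
    return labels

labels = kmeans(cells, 3)   # one putative "subtype" label per cell
```

Real single-cell workflows add normalization, dimensionality reduction and far more careful cluster validation, but the core idea, grouping cells by expression profile one cell at a time, is the same.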
This approach overcomes a major limitation in past research, which could analyze only the average activity of mixtures of many different cells.
In the current study, the team found that the precursor states of all interneurons had similar gene expression patterns, despite originating in three separate brain regions and giving rise to 14 or more interneuron subtypes—a number still under debate as researchers learn more about these cells.
“Mature interneuron subtypes exhibit incredible diversity. Their morphology and patterns of connectivity and activity are so different from each other, but our results show that the first steps in their maturation are remarkably similar,” said Satija, who is also an assistant professor of biology at New York University.
“They share a common developmental trajectory at the earliest stages, but the seeds of what will cause them to diverge later—a handful of genes—are present from the beginning,” Satija said.
As they profiled cells at later stages in development, the team observed the initial emergence of four interneuron “cardinal” classes, which give rise to distinct fates. Cells were committed to these fates even in the early embryo. By developing a novel computational strategy to link precursors with adult subtypes, the researchers identified individual genes that were switched on and off when cells began to diversify.
For example, they found that the gene Mef2c—mutations of which are linked to Alzheimer’s disease, schizophrenia and neurodevelopmental disorders in humans—is an early embryonic marker for a specific interneuron subtype known as Pvalb neurons. When they deleted Mef2c in animal models, Pvalb neurons failed to develop.
These early genes likely orchestrate the execution of subsequent genetic subroutines, such as ones that guide interneuron subtypes as they migrate to different locations in the brain and ones that help form unique connection patterns with other neural cell types, the authors said.
The identification of these genes and their temporal activity now provide researchers with specific targets to investigate the precise functions of interneurons, as well as how neurons diversify in general, according to the authors.
“One of the goals of this project was to address an incredibly fascinating developmental biology question, which is how individual progenitor cells decide between different neuronal fates,” Satija said. “In addition to these early markers of interneuron divergence, we found numerous additional genes that increase in expression, many dramatically, at later time points.”
The association of some of these genes with neuropsychiatric diseases promises to provide a better understanding of these disorders and the development of therapeutic strategies to treat them, a particularly important notion given the paucity of new treatments, the authors said.
Over the past 50 years, there have been no fundamentally new classes of neuropsychiatric drugs, only newer versions of old drugs, the researchers pointed out.
“Our repertoire is no better than it was in the 1970s,” Fishell said.
“Neuropsychiatric diseases likely reflect the dysfunction of very specific cell types. Our study puts forward a clear picture of what cells to look at as we work to shed light on the mechanisms that underlie these disorders,” Fishell said. “What we will find remains to be seen, but we have new, strong hypotheses that we can now test.”
As a resource for the research community, the study data and software are open-source and freely accessible online.
A gallery of the drawings of Santiago Ramón y Cajal is currently on display in New York City, and will open at the MIT Museum in Boston in May 2018.
Christian Mayer, Christoph Hafemeister and Rachel Bandler served as co-lead authors on the study.
This work was supported by the National Institutes of Health (R01 NS074972, R01 NS081297, MH071679-12, DP2-HG-009623, F30MH114462, T32GM007308, F31NS103398), the European Molecular Biology Organization, the National Science Foundation and the Simons Foundation.
Here’s a link to and a citation for the paper,
Developmental diversification of cortical inhibitory interneurons by Christian Mayer, Christoph Hafemeister, Rachel C. Bandler, Robert Machold, Renata Batista-Brito, Xavier Jaglin, Kathryn Allaway, Andrew Butler, Gord Fishell & Rahul Satija. Nature volume 555, pages 457–462 (22 March 2018) doi:10.1038/nature25999 Published online: 05 March 2018
I first stumbled onto memristors and the possibility of brain-like computing sometime in 2008 (around the time that R. Stanley Williams and his team at HP Labs first published the results of their research linking Dr. Leon Chua’s memristor theory to their attempts to shrink computer chips). In the almost 10 years since, scientists have worked hard to utilize memristors in the field of neuromorphic (brain-like) engineering/computing.
When it comes to processing power, the human brain just can’t be beat.
Packed within the squishy, football-sized organ are somewhere around 100 billion neurons. At any given moment, a single neuron can relay instructions to thousands of other neurons via synapses—the spaces between neurons, across which neurotransmitters are exchanged. There are more than 100 trillion synapses that mediate neuron signaling in the brain, strengthening some connections while pruning others, in a process that enables the brain to recognize patterns, remember facts, and carry out other learning tasks, at lightning speeds.
Researchers in the emerging field of “neuromorphic computing” have attempted to design computer chips that work like the human brain. Instead of carrying out computations based on binary, on/off signaling, like digital chips do today, the elements of a “brain on a chip” would work in an analog fashion, exchanging a gradient of signals, or “weights,” much like neurons that activate in various ways depending on the type and number of ions that flow across a synapse.
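The digital/analog contrast above can be made concrete with a toy comparison (my own illustration, not MIT’s circuit): a digital gate sees only 0/1 signals, while an analog “synapse” passes a continuously weighted, graded signal.

```python
import numpy as np

def digital_and(a, b):
    """Binary in, binary out -- the on/off signaling of a digital chip."""
    return int(bool(a) and bool(b))

def analog_neuron(inputs, weights):
    """Weighted sum passed through a smooth activation: a graded output
    whose strength depends on each synapse's continuous 'weight'."""
    return np.tanh(np.dot(weights, inputs))

x = np.array([0.2, 0.9, 0.4])     # graded input signals
w = np.array([0.5, -0.3, 0.8])    # synaptic weights, not bits
y = analog_neuron(x, w)           # a value anywhere in (-1, 1)
```

In neuromorphic hardware those weights would be physical conductances rather than stored floating-point numbers, which is exactly where the artificial synapse described below comes in.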
In this way, small neuromorphic chips could, like the brain, efficiently process millions of streams of parallel computations that are currently only possible with large banks of supercomputers. But one significant hangup on the way to such portable artificial intelligence has been the neural synapse, which has been particularly tricky to reproduce in hardware.
Now engineers at MIT [Massachusetts Institute of Technology] have designed an artificial synapse in such a way that they can precisely control the strength of an electric current flowing across it, similar to the way ions flow between neurons. The team has built a small chip with artificial synapses, made from silicon germanium. In simulations, the researchers found that the chip and its synapses could be used to recognize samples of handwriting, with 95 percent accuracy.
The design, published today [January 22, 2018] in the journal Nature Materials, is a major step toward building portable, low-power neuromorphic chips for use in pattern recognition and other learning tasks.
The research was led by Jeehwan Kim, the Class of 1947 Career Development Assistant Professor in the departments of Mechanical Engineering and Materials Science and Engineering, and a principal investigator in MIT’s Research Laboratory of Electronics and Microsystems Technology Laboratories. His co-authors are Shinhyun Choi (first author), Scott Tan (co-first author), Zefan Li, Yunjo Kim, Chanyeol Choi, and Hanwool Yeon of MIT, along with Pai-Yu Chen and Shimeng Yu of Arizona State University.
Too many paths
Most neuromorphic chip designs attempt to emulate the synaptic connection between neurons using two conductive layers separated by a “switching medium,” or synapse-like space. When a voltage is applied, ions should move in the switching medium to create conductive filaments, similarly to how the “weight” of a synapse changes.
But it’s been difficult to control the flow of ions in existing designs. Kim says that’s because most switching mediums, made of amorphous materials, have unlimited possible paths through which ions can travel — a bit like Pachinko, a mechanical arcade game that funnels small steel balls down through a series of pins and levers, which act to either divert or direct the balls out of the machine.
Like Pachinko, existing switching mediums contain multiple paths that make it difficult to predict where ions will make it through. Kim says that can create unwanted nonuniformity in a synapse’s performance.
“Once you apply some voltage to represent some data with your artificial neuron, you have to erase and be able to write it again in the exact same way,” Kim says. “But in an amorphous solid, when you write again, the ions go in different directions because there are lots of defects. This stream is changing, and it’s hard to control. That’s the biggest problem — nonuniformity of the artificial synapse.”
A perfect mismatch
Instead of using amorphous materials as an artificial synapse, Kim and his colleagues looked to single-crystalline silicon, a defect-free conducting material made from atoms arranged in a continuously ordered alignment. The team sought to create a precise, one-dimensional line defect, or dislocation, through the silicon, through which ions could predictably flow.
To do so, the researchers started with a wafer of silicon, resembling, at microscopic resolution, a chicken-wire pattern. They then grew a similar pattern of silicon germanium — a material also used commonly in transistors — on top of the silicon wafer. Silicon germanium’s lattice is slightly larger than that of silicon, and Kim found that together, the two perfectly mismatched materials can form a funnel-like dislocation, creating a single path through which ions can flow.
The researchers fabricated a neuromorphic chip consisting of artificial synapses made from silicon germanium, each synapse measuring about 25 nanometers across. They applied voltage to each synapse and found that all synapses exhibited more or less the same current, or flow of ions, with about a 4 percent variation between synapses — a much more uniform performance compared with synapses made from amorphous material.
They also tested a single synapse over multiple trials, applying the same voltage over 700 cycles, and found the synapse exhibited the same current, with just 1 percent variation from cycle to cycle.
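Figures like the 4 percent and 1 percent above are coefficients of variation. Here is how such a number is computed, with invented sample currents for illustration (not the paper’s measurements):

```python
import numpy as np

def percent_variation(samples):
    """Coefficient of variation: standard deviation over mean, in percent."""
    samples = np.asarray(samples, dtype=float)
    return 100.0 * samples.std() / samples.mean()

# Device-to-device: one (made-up) current reading per synapse, in uA.
device_currents = [1.00, 1.03, 0.97, 1.05, 0.96]
device_cv = percent_variation(device_currents)

# Cycle-to-cycle: one synapse read over 700 cycles (simulated here as
# draws from a 1%-spread normal distribution).
rng = np.random.default_rng(0)
cycle_currents = rng.normal(1.0, 0.01, 700)
cycle_cv = percent_variation(cycle_currents)
```

The lower the coefficient, the more faithfully the same programmed “weight” is reproduced on every device and every cycle.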
“This is the most uniform device we could achieve, which is the key to demonstrating artificial neural networks,” Kim says.
As a final test, Kim’s team explored how its device would perform if it were to carry out actual learning tasks — specifically, recognizing samples of handwriting, which researchers consider to be a first practical test for neuromorphic chips. Such chips would consist of “input/hidden/output neurons,” each connected to other “neurons” via filament-based artificial synapses.
Scientists believe such stacks of neural nets can be made to “learn.” For instance, when fed an input that is a handwritten ‘1,’ with an output that labels it as ‘1,’ certain output neurons will be activated by input neurons and weights from an artificial synapse. When more examples of handwritten ‘1s’ are fed into the same chip, the same output neurons may be activated when they sense similar features between different samples of the same letter, thus “learning” in a fashion similar to what the brain does.
Kim and his colleagues ran a computer simulation of an artificial neural network consisting of three sheets of neural layers connected via two layers of artificial synapses, the properties of which they based on measurements from their actual neuromorphic chip. They fed into their simulation tens of thousands of samples from a handwriting recognition dataset commonly used by neuromorphic designers, and found that their neural network hardware recognized handwritten samples 95 percent of the time, compared to the 97 percent accuracy of existing software algorithms.
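A back-of-the-envelope version of that simulated setup looks like this: three neuron layers joined by two layers of trainable “synapse” weights. The data here is a toy two-class stand-in (not the handwriting set), and the weights are ideal floating-point numbers rather than measured device conductances.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: two well-separated Gaussian classes, 200 samples each.
X = np.vstack([rng.normal(-1.0, 0.5, (200, 2)),
               rng.normal(+1.0, 0.5, (200, 2))])
y = np.array([0] * 200 + [1] * 200)

W1 = rng.normal(0, 0.5, (2, 8))    # input -> hidden synapse layer
W2 = rng.normal(0, 0.5, (8, 2))    # hidden -> output synapse layer

def forward(X):
    h = np.tanh(X @ W1)                                  # hidden activations
    logits = h @ W2
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return h, e / e.sum(axis=1, keepdims=True)           # softmax probabilities

for _ in range(300):                                     # plain gradient descent
    h, p = forward(X)
    g = p.copy()
    g[np.arange(len(y)), y] -= 1.0                       # d(cross-entropy)/d(logits)
    g /= len(y)
    gW2 = h.T @ g
    gW1 = X.T @ ((g @ W2.T) * (1.0 - h ** 2))            # backprop through tanh
    W2 -= 0.5 * gW2
    W1 -= 0.5 * gW1

accuracy = (forward(X)[1].argmax(axis=1) == y).mean()
```

In the chip-level simulation the team describes, each entry of W1 and W2 would be realized by a filament-based artificial synapse, with the measured 1–4 percent device variation perturbing the stored weights.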
The team is in the process of fabricating a working neuromorphic chip that can carry out handwriting-recognition tasks, not in simulation but in reality. Looking beyond handwriting, Kim says the team’s artificial synapse design will enable much smaller, portable neural network devices that can perform complex computations that currently are only possible with large supercomputers.
“Ultimately we want a chip as big as a fingernail to replace one big supercomputer,” Kim says. “This opens a stepping stone to produce real artificial hardware.”
This research was supported in part by the National Science Foundation.