Tag Archives: City University of Hong Kong

Electrotactile rendering device virtualizes the sense of touch

I stumbled across this November 15, 2022 news item on Nanowerk highlighting work on virtualizing the sense of touch, originally announced in October 2022,

A collaborative research team co-led by City University of Hong Kong (CityU) has developed a wearable tactile rendering system, which can mimic the sensation of touch with high spatial resolution and a rapid response rate. The team demonstrated its application potential in a braille display, adding the sense of touch in the metaverse for functions such as virtual reality shopping and gaming, and potentially facilitating the work of astronauts, deep-sea divers and others who need to wear thick gloves.

Here’s what you’ll need to wear for this virtual tactile experience,

Caption: The new wearable tactile rendering system can mimic touch sensations with high spatial resolution and a rapid response rate. Credit: Robotics X Lab and City University of Hong Kong

An October 20, 2022 City University of Hong Kong (CityU) press release (also on EurekAlert), which originated the news item, delves further into the research,

“We can hear and see our families over a long distance via phones and cameras, but we still cannot feel or hug them. We are physically isolated by space and time, especially during this long-lasting pandemic,” said Dr Yang Zhengbao, Associate Professor in the Department of Mechanical Engineering of CityU, who co-led the study. “Although there has been great progress in developing sensors that digitally capture tactile features with high resolution and high sensitivity, we still lack a system that can effectively virtualize the sense of touch, recording and playing back tactile sensations over space and time.”

In collaboration with Chinese tech giant Tencent’s Robotics X Laboratory, the team developed a novel electrotactile rendering system for displaying various tactile sensations with high spatial resolution and a rapid response rate. Their findings were published in the scientific journal Science Advances under the title “Super-resolution Wearable Electro-tactile Rendering System”.

Limitations in existing techniques

Existing techniques to reproduce tactile stimuli can be broadly classified into two categories: mechanical and electrical stimulation. By applying a localised mechanical force or vibration on the skin, mechanical actuators can elicit stable and continuous tactile sensations. However, they tend to be bulky, limiting the spatial resolution when integrated into a portable or wearable device. Electrotactile stimulators, in contrast, which evoke touch sensations in the skin at the location of the electrode by passing a local electric current through the skin, can be light and flexible while offering higher resolution and a faster response. But most of them rely on high voltage direct-current (DC) pulses (up to hundreds of volts) to penetrate the stratum corneum, the outermost layer of the skin, to stimulate the receptors and nerves, which poses a safety concern. Tactile rendering resolution also needed improvement.

The latest electro-tactile actuator developed by the team is very thin and flexible and can be easily integrated into a finger cot. This fingertip wearable device can display different tactile sensations, such as pressure, vibration, and texture roughness, in high fidelity. Instead of using DC pulses, the team developed a high-frequency alternating stimulation strategy and succeeded in lowering the operating voltage to below 30 V, ensuring the tactile rendering is safe and comfortable.

They also proposed a novel super-resolution strategy that can render tactile sensation at locations between physical electrodes, instead of only at the electrode locations. This increases the spatial resolution of their stimulators by more than three times (from 25 to 105 points), so the user perceives more realistic tactile sensations.
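
The press release does not spell out how sensations are rendered between electrodes, but a common way to create such “virtual” stimulation points is to split the stimulus amplitude across neighbouring electrodes. Below is a minimal Python sketch of that idea; the 5 x 5 electrode grid, the 2 mm pitch, the weighting scheme, and the function names are my assumptions, not the team’s published method.

```python
import numpy as np

# Hypothetical 5 x 5 electrode grid (25 physical electrodes) with a 2 mm pitch.
GRID_SHAPE = (5, 5)
PITCH_MM = 2.0

def render_point(target_xy, amplitude=1.0):
    """Approximate a tactile point between electrodes by splitting the stimulus
    amplitude among the nearest physical electrodes (an assumed interpolation
    scheme, used here only to illustrate the super-resolution idea)."""
    drive = np.zeros(GRID_SHAPE)
    xs = np.arange(GRID_SHAPE[1]) * PITCH_MM   # electrode x positions in mm
    ys = np.arange(GRID_SHAPE[0]) * PITCH_MM   # electrode y positions in mm
    for i, y in enumerate(ys):
        for j, x in enumerate(xs):
            d = np.hypot(target_xy[0] - x, target_xy[1] - y)
            if d < PITCH_MM:                   # only electrodes adjacent to the target
                drive[i, j] = amplitude * (1.0 - d / PITCH_MM)
    total = drive.sum()
    return drive / total * amplitude if total > 0 else drive

# A target midway between four electrodes is rendered by four partial drives.
print(render_point((3.0, 3.0)))
```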

Tactile stimuli with high spatial resolution

“Our new system can elicit tactile stimuli with both high spatial resolution (76 dots/cm²), similar to the density of related receptors in the human skin, and a rapid response rate (4 kHz),” said Mr Lin Weikang, a PhD student at CityU, who made and tested the device.

The team ran different tests to show various application possibilities of this new wearable electrotactile rendering system. For example, they proposed a new Braille strategy that is much easier for people with a visual impairment to learn.

The proposed strategy breaks down letters and numerical digits into individual strokes, ordered the same way they are written. By wearing the new electrotactile rendering system on a fingertip, the user can recognise the character presented by feeling the direction and the sequence of the strokes with the fingertip sensor. “This would be particularly useful for people who lose their eyesight later in life, allowing them to continue to read and write using the same alphabetic system they are used to, without the need to learn the whole Braille dot system,” said Dr Yang.
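
In software, a stroke-based letter code like the one described can be as simple as a lookup table mapping each character to an ordered list of stroke directions that the fingertip stimulator plays back in sequence. The sketch below is illustrative only; the stroke vocabulary and the example encodings are invented for the demonstration, not taken from the paper.

```python
from time import sleep

# Hypothetical stroke vocabulary: each character becomes an ordered list of
# directional strokes, played back one after another on the fingertip device.
STROKES = {
    "L": ["down", "right"],        # a vertical stroke followed by a horizontal one
    "T": ["right", "down"],
    "7": ["right", "down-left"],
}

def play_character(char, render_stroke, interval_s=0.3):
    """Send each stroke of a character to the tactile renderer in writing order."""
    for stroke in STROKES[char]:
        render_stroke(stroke)   # e.g. sweep the stimulation point in that direction
        sleep(interval_s)       # pause so the user can perceive stroke boundaries

# Example with a stand-in renderer that just prints the stroke direction.
play_character("L", render_stroke=lambda s: print("stroke:", s))
```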

Enabling touch in the metaverse

Second, the new system is well suited for VR/AR [virtual reality/augmented reality] applications and games, adding the sense of touch to the metaverse. The electrodes can be made highly flexible and scalable to cover larger areas, such as the palm. The team demonstrated that a user can virtually sense the texture of clothes in a virtual fashion shop. The user also experiences an itchy sensation in the fingertips when being licked by a VR cat. When stroking a virtual cat’s fur, the user can feel a variance in the roughness as the strokes change direction and speed.

The system can also be useful in transmitting fine tactile details through thick gloves. The team successfully integrated the thin, light electrodes of the electrotactile rendering system into flexible tactile sensors on a safety glove. The tactile sensor array captures the pressure distribution on the exterior of the glove and relays the information to the user in real time through tactile stimulation. In the experiment, the user could quickly and accurately locate a tiny steel washer just 1 mm in radius and 0.44 mm thick based on the tactile feedback from the glove with sensors and stimulators. This shows the system’s potential in enabling high-fidelity tactile perception, which is currently unavailable to astronauts, firefighters, deep-sea divers and others who need to wear thick protective suits or gloves.
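
Functionally, the glove demonstration is a real-time loop that maps the pressure image captured on the outside of the glove onto fingertip stimulation. Here is a minimal sketch of such a loop; the 8 x 8 array size, the pressure scaling, and the function names are assumptions for illustration, not the team’s implementation.

```python
import numpy as np

def pressure_to_stimulus(pressure_kpa, max_pressure_kpa=50.0, max_voltage=30.0):
    """Map a sensor pressure map (kPa) to electrode drive voltages, capped at
    the roughly 30 V operating limit mentioned in the press release."""
    norm = np.clip(pressure_kpa / max_pressure_kpa, 0.0, 1.0)
    return norm * max_voltage

def relay_loop(read_sensor_array, drive_electrodes, steps=1000):
    """Read the glove's exterior pressure distribution each cycle and replay it
    on the fingertip stimulator."""
    for _ in range(steps):
        pressure = read_sensor_array()                      # e.g. an 8 x 8 array in kPa
        drive_electrodes(pressure_to_stimulus(pressure))

# Example with dummy hardware stubs standing in for the sensor and stimulator.
relay_loop(read_sensor_array=lambda: np.random.rand(8, 8) * 20,
           drive_electrodes=lambda voltages: None,
           steps=3)
```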

“We expect our technology to benefit a broad spectrum of applications, such as information transmission, surgical training, teleoperation, and multimedia entertainment,” added Dr Yang.

Here’s a link to and a citation for the paper,

Super-resolution wearable electrotactile rendering system by Weikang Lin, Dongsheng Zhang, Wang Wei Lee, Xuelong Li, Ying Hong, Qiqi Pan, Ruirui Zhang, Guoxiang Peng, Hong Z. Tan, Zhengyou Zhang, Lei Wei, and Zhengbao Yang. Science Advances, 9 September 2022, Vol. 8, Issue 36. DOI: 10.1126/sciadv.abp8738

This paper is open access.

City University of Hong Kong (CityU) and its anti-bacterial graphene face masks

This looks like interesting work and I think the integration of visual images and embedded video in the news release (on the university website) is particularly well done. I won’t be including all the graphical information here as my focus is the text.

A Sept. 10, 2020 City University of Hong Kong (CityU) press release (also on EurekAlert) announces a greener, more effective face mask,

Face masks have become an important tool in fighting against the COVID-19 pandemic. However, improper use or disposal of masks may lead to “secondary transmission”. A research team from City University of Hong Kong (CityU) has successfully produced graphene masks with an anti-bacterial efficiency of 80%, which can be enhanced to almost 100% with exposure to sunlight for around 10 minutes. Initial tests also showed very promising results in the deactivation of two species of coronaviruses. The graphene masks are easily produced at low cost, and can help to resolve the problems of sourcing raw materials and disposing of non-biodegradable masks.

The research was conducted by Dr Ye Ruquan, Assistant Professor in CityU’s Department of Chemistry, in collaboration with other researchers. The findings were published in the scientific journal ACS Nano in a paper titled “Self-Reporting and Photothermally Enhanced Rapid Bacterial Killing on a Laser-Induced Graphene Mask”.

Commonly used surgical masks are not anti-bacterial. This may lead to the risk of secondary transmission of bacterial infection when people touch the contaminated surfaces of used masks or discard them improperly. Moreover, the melt-blown fabrics used as bacterial filters have an environmental impact because they are difficult to decompose. Therefore, scientists have been looking for alternative materials to make masks.

Converting other materials into graphene by laser

Dr Ye has been studying the use of laser-induced graphene [emphasis mine] in developing sustainable energy. While he was studying for his PhD at Rice University several years ago, the research team he worked in, led by his supervisor, discovered an easy way to produce graphene. They found that direct writing on carbon-containing polyimide films (a polymeric plastic material with high thermal stability) using a commercial CO2 infrared laser system can generate 3D porous graphene. The laser changes the structure of the raw material and hence generates graphene. That’s why it is named laser-induced graphene.

Graphene is known for its anti-bacterial properties, so as early as September 2019, before the outbreak of COVID-19, Dr Ye was already thinking about producing better-performing masks with laser-induced graphene. He then kick-started the study in collaboration with researchers from the Hong Kong University of Science and Technology (HKUST), Nankai University, and other organisations.

Excellent anti-bacterial efficiency

The research team tested their laser-induced graphene with E. coli, and it achieved a high anti-bacterial efficiency of about 82%. In comparison, the anti-bacterial efficiency of activated carbon fibre and melt-blown fabrics, both commonly used materials in masks, was only 2% and 9% respectively. Experiments also showed that over 90% of the E. coli deposited on those two materials remained alive even after 8 hours, while most of the E. coli deposited on the graphene surface were dead within the same period. Moreover, the laser-induced graphene showed a superior anti-bacterial capacity for aerosolised bacteria.

Dr Ye said that more research on the exact mechanism of graphene’s bacteria-killing property is needed, but he believed it might be related to damage to bacterial cell membranes by graphene’s sharp edges, and to dehydration induced by the hydrophobic (water-repelling) property of graphene.

Previous studies suggested that COVID-19 would lose its infectivity at high temperatures. So the team carried out experiments to test if the graphene’s photothermal effect (producing heat after absorbing light) can enhance the anti-bacterial effect. The results showed that the anti-bacterial efficiency of the graphene material could be improved to 99.998% within 10 minutes under sunlight, while activated carbon fibre and melt-blown fabrics only showed an efficiency of 67% and 85% respectively.

The team is currently working with laboratories in mainland China to test the graphene material with two species of human coronaviruses. Initial tests showed that it inactivated over 90% of the virus in five minutes and almost 100% in 10 minutes under sunlight. The team plans to conduct tests with the COVID-19 virus later.

Their next step is to further enhance the anti-virus efficiency and develop a reusable strategy for the mask. They hope to release it to the market shortly after designing an optimal structure for the mask and obtaining the certifications.

Dr Ye described the production of laser-induced graphene as a “green technique”. All carbon-containing materials, such as cellulose or paper, can be converted into graphene using this technique. The conversion can be carried out under ambient conditions, without chemicals other than the raw materials and without causing pollution, and the energy consumption is low.

“Laser-induced graphene masks are reusable. If biomaterials are used for producing graphene, it can help to resolve the problem of sourcing raw material for masks. And it can lessen the environmental impact caused by the non-biodegradable disposable masks,” he added.

Dr Ye pointed out that producing laser-induced graphene is easy. Within just one and a half minutes, an area of 100 cm² can be converted into graphene as the outer or inner layer of the mask. Depending on the raw materials used to produce the graphene, the price of a laser-induced graphene mask is expected to fall between that of a surgical mask and an N95 mask. He added that by adjusting laser power, the size of the pores in the graphene material can be modified so that the breathability is similar to that of surgical masks.

A new way to check the condition of the mask

To help users check whether a graphene mask is still in good condition after a period of use, the team fabricated a hygroelectric generator. It is powered by electricity generated from the moisture in human breath. By measuring the change in the moisture-induced voltage when the user breathes through a graphene mask, it provides an indicator of the condition of the mask. Experiment results showed that the more bacteria and atmospheric particles accumulated on the surface of the mask, the lower the resulting voltage. “The standard of how frequently a mask should be changed is better decided by professionals. Yet, the method we used may serve as a reference,” suggested Dr Ye.
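
In practice, the self-reporting idea amounts to comparing a used mask’s breath-induced voltage against its clean baseline. The sketch below shows that comparison; the 70% threshold and the function interface are my assumptions, not figures from the paper.

```python
def mask_condition(baseline_voltage_mv, measured_voltage_mv, threshold=0.7):
    """Flag a mask for replacement when the moisture-induced voltage has dropped
    below a chosen fraction of its clean baseline (the threshold is assumed)."""
    ratio = measured_voltage_mv / baseline_voltage_mv
    if ratio < threshold:
        return "consider replacing the mask"
    return "mask condition acceptable"

# Example: a mask whose breath-induced voltage fell from 120 mV to 70 mV.
print(mask_condition(baseline_voltage_mv=120.0, measured_voltage_mv=70.0))
```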

Laser-induced graphene (LIG), Rice University, and Dr. Ye were mentioned here in a May 9, 2018 posting titled: Do you want that coffee with some graphene on toast?

Back to the latest research, read the caption carefully,

Research shows that over 90% of the E. coli deposited on activated carbon fibre (fig c and d) and melt-blown fabrics (fig e and f) remained alive even after 8 hours. In contrast, most of the E. coli deposited on the graphene surface (fig a and b) were dead. (Photo source: DOI number: 10.1021/acsnano.0c05330)

Here’s a link to and a citation for the paper,

Self-Reporting and Photothermally Enhanced Rapid Bacterial Killing on a Laser-Induced Graphene Mask by Libei Huang, Siyu Xu, Zhaoyu Wang, Ke Xue, Jianjun Su, Yun Song, Sijie Chen, Chunlei Zhu, Ben Zhong Tang, and Ruquan Ye. ACS Nano 2020, 14, 9, 12045–12053. DOI: https://doi.org/10.1021/acsnano.0c05330. Publication Date: August 11, 2020. Copyright © 2020 American Chemical Society

This paper is behind a paywall.

Nanowires with fast infrared light (IR) response and more

An April 10, 2019 news item on Nanowerk points the way to improved high-speed communication with nanowires (Note: A link has been removed),

Chinese scientists have synthesized new nanowires with high carrier mobility and fast infrared light (IR) response, which could help in high-speed communication. Their findings were published in Nature Communications (“Ultra-fast photodetectors based on high-mobility indium gallium antimonide nanowires”).

Below, you will find an image illustrating the researchers’ work,

Caption: The growth mechanism and fast 1550 nm IR detection of the single-crystalline In0.28Ga0.72Sb ternary nanowires. Credit: HAN Ning

An April 10, 2019 Chinese Academy of Sciences news release (also on EurekAlert), which originated the news item, provides more detail,

Nowadays, effective optical communications use 1550 nm IR, which is received and converted into an electrical signal for computer processing. Fast light-to-electrical conversion is thus essential for high-speed communications.

According to quantum theory, 1550 nm IR has an energy of ~0.8 eV, and can only be detected by semiconductors with bandgaps lower than 0.8 eV, such as germanium (0.66 eV) and III-V compound materials such as InxGa1-xAs (0.35-1.42 eV) and InxGa1-xSb (0.17-0.73 eV). However, those materials usually contain a high density of crystal defects, which substantially degrade photoresponse performance.
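
The ~0.8 eV figure follows directly from the photon-energy relation; as a quick check, using the standard approximation hc ≈ 1240 eV·nm:

```latex
E = \frac{hc}{\lambda} \approx \frac{1240~\text{eV}\cdot\text{nm}}{1550~\text{nm}} \approx 0.80~\text{eV}
```

Only semiconductors with a bandgap below this photon energy can absorb 1550 nm light, which is why the candidate materials listed above all sit near or below 0.8 eV.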

Scientists from the Institute of Process Engineering (IPE) of the Chinese Academy of Sciences, City University of Hong Kong (CityU) and their collaborators synthesized highly crystalline ternary In0.28Ga0.72Sb nanowires to demonstrate high carrier mobility and fast IR response.

In this study, the In0.28Ga0.72Sb nanowires (bandgap 0.69 eV) showed a high responsivity to IR of 6000 A/W, with fast response and decay times of 0.038 ms and 0.053 ms, respectively, which are among the best reported so far. The fast IR response speed can be attributed to the minimized crystal defects, as also indicated by a high hole mobility of up to 200 cm²/V·s, according to Prof. Johnny C. Ho from CityU.
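
For readers unfamiliar with the figure of merit, responsivity is simply the photocurrent produced per unit of incident optical power, so the reported value implies a very large electrical signal from a tiny optical input:

```latex
R = \frac{I_{\text{ph}}}{P_{\text{opt}}}, \qquad R = 6000~\text{A/W} \;\Rightarrow\; I_{\text{ph}} \approx 6~\text{mA per }\mu\text{W of incident light}
```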

The minimized crystal defects are achieved with a “catalyst epitaxy technology” first established by Ho’s group. Briefly, the III-V compound nanowires are grown catalytically from a metal catalyst such as gold or nickel.

“These catalyst nanoparticles play a key role in nanowire growth as the nanowires are synthesized layer by layer with the atoms well aligned with those in the catalyst,” said HAN Ning, a professor at IPE and senior author of the paper.

Here’s a link to and a citation for the paper,

Ultra-fast photodetectors based on high-mobility indium gallium antimonide nanowires by Dapan Li, Changyong Lan, Arumugam Manikandan, SenPo Yip, Ziyao Zhou, Xiaoguang Liang, Lei Shu, Yu-Lun Chueh, Ning Han & Johnny C. Ho. Nature Communications, volume 10, Article number: 1664 (2019). DOI: https://doi.org/10.1038/s41467-019-09606-y. Published 10 April 2019

This paper is open access.

SIGGRAPH (Special Interest Group on Computer GRAPHics and Interactive Techniques) and their art gallery from Aug. 12 – 16, 2018 (making the world synthetic) in Vancouver (Canada)

While my main interest is the group’s temporary art gallery, I am providing a brief explanatory introduction and a couple of previews for SIGGRAPH 2018.

Introduction

For anyone unfamiliar with the Special Interest Group on Computer GRAPHics and Interactive Techniques (SIGGRAPH) and its conferences, from the SIGGRAPH Wikipedia entry (Note: Links have been removed),

Some highlights of the conference are its Animation Theater and Electronic Theater presentations, where recently created CG films are played. There is a large exhibition floor, where several hundred companies set up elaborate booths and compete for attention and recruits. Most of the companies are in the engineering, graphics, motion picture, or video game industries. There are also many booths for schools which specialize in computer graphics or interactivity.

Dozens of research papers are presented each year, and SIGGRAPH is widely considered the most prestigious forum for the publication of computer graphics research.[1] The recent paper acceptance rate for SIGGRAPH has been less than 26%.[2] The submitted papers are peer-reviewed in a single-blind process.[3] There has been some criticism about the preference of SIGGRAPH paper reviewers for novel results rather than useful incremental progress.[4][5] …

This is the third SIGGRAPH Vancouver has hosted; the others were in 2011 and 2014.  The theme for the 2018 iteration is ‘Generations’; here’s more about it from an Aug. 2, 2018 article by Terry Flores for Variety,

While its focus is firmly forward thinking, SIGGRAPH 2018, the computer graphics, animation, virtual reality, games, digital art, mixed reality, and emerging technologies conference, is also tipping its hat to the past thanks to its theme this year: Generations. The conference runs Aug. 12-16 in Vancouver, B.C.

“In the literal people sense, pioneers in the computer graphics industry are standing shoulder to shoulder with researchers, practitioners and the future of the industry — young people — mentoring them, dabbling across multiple disciplines to innovate, relate, and grow,” says SIGGRAPH 2018 conference chair Roy C. Anthony, VP of creative development and operations at software and technology firm Ventuz. “This is really what SIGGRAPH has always been about. Generations really seemed like a very appropriate way of looking back and remembering where we all came from and how far we’ve come.”

SIGGRAPH 2018 has a number of treats in store for attendees, including the debut of Disney’s first VR film, the short “Cycles”; production sessions on the making of “Blade Runner 2049,” “Game of Thrones,” “Incredibles 2” and “Avengers: Infinity War”; as well as sneak peeks of Disney’s upcoming “Ralph Breaks the Internet: Wreck-It Ralph 2” and Laika’s “Missing Link.”

That list of ‘treats’ in the last paragraph makes the conference seem more like an iteration of a ‘comic-con’ than a technology conference.

Previews

I have four items about work that will be presented at SIGGRAPH 2018. First up, something about ‘redirected walking’ from a June 18, 2018 Association for Computing Machinery news release on EurekAlert,

CHICAGO–In the burgeoning world of virtual reality (VR) technology, it remains a challenge to provide users with a realistic perception of infinite space and natural walking capabilities in the virtual environment. A team of computer scientists has introduced a new approach to address this problem by leveraging a natural human phenomenon: eye blinks.

All humans are functionally blind for about 10 percent of the time under normal circumstances due to eye blinks and saccades, a rapid movement of the eye between two points or objects. Eye blinks are a common and natural cause of so-called “change blindness,” which indicates the inability for humans to notice changes to visual scenes. Zeroing in on eye blinks and change blindness, the team has devised a novel computational system that effectively redirects the user in the virtual environment during these natural instances, all with undetectable camera movements to deliver orientation redirection.

“Previous RDW [redirected walking] techniques apply rotations continuously while the user is walking. But the amount of unnoticeable rotations is limited,” notes Eike Langbehn, lead author of the research and doctoral candidate at the University of Hamburg. “That’s why an orthogonal approach is needed–we add some additional rotations when the user is not focused on the visuals. When we learned that humans are functionally blind for some time due to blinks, we thought, ‘Why don’t we do the redirection during eye blinks?'”

Human eye blinks occur approximately 10 to 20 times per minute, about every 4 to 19 seconds. Leveraging this window of opportunity–where humans are unable to detect major motion changes while in a virtual environment–the researchers devised an approach to synchronize a computer graphics rendering system with this visual process, and introduce any useful motion changes in virtual scenes to enhance users’ overall VR experience.

The researchers’ experiments revealed that imperceptible camera rotations of 2 to 5 degrees and translations of 4 to 9 cm of the user’s viewpoint are possible during a blink without users even noticing. They tracked test participants’ eye blinks by an eye tracker in a VR head-mounted display. In a confirmatory study, the team validated that participants could not reliably detect in which of two eye blinks their viewpoint was manipulated while walking a VR curved path. The tests relied on unconscious natural eye blinking, but the researchers say redirection during blinking could be carried out consciously. Since users can consciously blink multiple times a day without much effort, eye blinks provide great potential to be used as an intentional trigger in their approach.
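
A bare-bones version of the redirection logic is: when the eye tracker reports a blink, inject a small extra camera rotation, within the roughly 2 to 5 degree budget the study found imperceptible, that nudges the user toward the desired physical path. The Python sketch below is a simplification under those assumptions and is not the authors’ published controller; the function and constant names are mine.

```python
MAX_BLINK_ROTATION_DEG = 5.0    # upper end of the imperceptible range reported
MAX_BLINK_TRANSLATION_M = 0.09  # up to ~9 cm of viewpoint translation per blink (not used below)

def redirect_on_blink(camera_yaw_deg, desired_yaw_deg, blink_detected):
    """Inject a small, bounded yaw correction only while the user is blinking."""
    if not blink_detected:
        return camera_yaw_deg
    error = desired_yaw_deg - camera_yaw_deg
    correction = max(-MAX_BLINK_ROTATION_DEG,
                     min(MAX_BLINK_ROTATION_DEG, error))
    return camera_yaw_deg + correction

# Example: the controller wants to steer the virtual camera 12 degrees;
# it applies at most 5 degrees per detected blink.
yaw = 0.0
for blink in [True, False, True, True]:
    yaw = redirect_on_blink(yaw, desired_yaw_deg=12.0, blink_detected=blink)
    print(round(yaw, 1))   # 5.0, 5.0, 10.0, 12.0
```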

The team will present their work at SIGGRAPH 2018, held 12-16 August in Vancouver, British Columbia. The annual conference and exhibition showcases the world’s leading professionals, academics, and creative minds at the forefront of computer graphics and interactive techniques.

“RDW is a big challenge since current techniques still need too much space to enable unlimited walking in VR,” notes Langbehn. “Our work might contribute to a reduction of space since we found out that unnoticeable rotations of up to five degrees are possible during blinks. This means we can improve the performance of RDW by approximately 50 percent.”

The team’s results could be used in combination with other VR research, such as novel steering algorithms, improved path prediction, and rotations during saccades, to name a few. Down the road, such techniques could some day enable consumer VR users to virtually walk beyond their living room.

Langbehn collaborated on the work with Frank Steinicke of University of Hamburg, Markus Lappe of University of Muenster, Gregory F. Welch of University of Central Florida, and Gerd Bruder, also of University of Central Florida. For the full paper and video, visit the team’s project page.

###

About ACM, ACM SIGGRAPH, and SIGGRAPH 2018

ACM, the Association for Computing Machinery, is the world’s largest educational and scientific computing society, uniting educators, researchers, and professionals to inspire dialogue, share resources, and address the field’s challenges. ACM SIGGRAPH is a special interest group within ACM that serves as an interdisciplinary community for members in research, technology, and applications in computer graphics and interactive techniques. SIGGRAPH is the world’s leading annual interdisciplinary educational experience showcasing the latest in computer graphics and interactive techniques. SIGGRAPH 2018, marking the 45th annual conference hosted by ACM SIGGRAPH, will take place from 12-16 August at the Vancouver Convention Centre in Vancouver, B.C.

They have provided an image illustrating what they mean (I don’t find it especially informative),

Caption: The viewing behavior of a virtual reality user, including fixations (in green) and saccades (in red). A blink fully suppresses visual perception. Credit: Eike Langbehn

Next up (2), there’s Disney Corporation’s first virtual reality (VR) short, from a July 19, 2018  Association for Computing Machinery news release on EurekAlert,

Walt Disney Animation Studios will debut its first ever virtual reality short film at SIGGRAPH 2018, and the hope is viewers will walk away feeling connected to the characters as equally as they will with the VR technology involved in making the film.

Cycles, an experimental film directed by Jeff Gipson, centers around the true meaning of creating a home and the life it holds inside its walls. The idea for the film is personal, inspired by Gipson’s childhood spending time with his grandparents and creating memories in their home, and later, having to move them to an assisted living residence.

“Every house has a story unique to the people, the characters who live there,” says Gipson. “We wanted to create a story in this single place and be able to have the viewer witness life happening around them. It is an emotionally driven film, expressing the real ups and downs, the happy and sad moments in life.”

For Cycles, Gipson also drew from his past life as an architect, having spent several years designing skate parks, and from his passion for action sports, including freestyle BMX. In Los Angeles, where Gipson lives, it is not unusual to find homes with an empty swimming pool reserved for skating or freestyle biking. Part of the pitch for Cycles came out of Gipson’s experience riding in these empty pools and being curious about the homes attached to them, the families who lived there, and the memories they made.

SIGGRAPH attendees will have the opportunity to experience Cycles at the Immersive Pavilion, a new space for this year’s conference. The Pavilion is devoted exclusively to virtual, augmented, and mixed reality and will contain: the VR Theater, a storytelling extravaganza that is part of the Computer Animation Festival; the Vrcade, a space for VR, AR, and MR games or experiences; and the well-known Village, for showcasing large-scale projects. SIGGRAPH 2018, held 12-16 August in Vancouver, British Columbia, is an annual gathering that showcases the world’s leading professionals, academics, and creative minds at the forefront of computer graphics and interactive techniques.

The production team completed Cycles in four months with about 50 collaborators as part of a professional development program at the studio. A key difference in VR filmmaking includes getting creative with how to translate a story to the VR “screen.” Pre-visualizing the narrative, for one, was a challenge. Rather than traditional storyboarding, Gipson and his team instead used a mix of Quill VR painting techniques and motion capture to “storyboard” Cycles, incorporating painters and artists to generate sculptures or 3D models of characters early on and draw scenes for the VR space. The creators also got innovative with the use of light and color saturation in scenes to help guide the user’s eyes during the film.

“What’s cool for VR is that we are really on the edge of trying to figure out what it is and how to tell stories in this new medium,” says Gipson. “In VR, you can look anywhere and really be transported to a different world, experience it from different angles, and see every detail. We want people watching to feel alive and feel emotion, and give them a true cinematic experience.”

This is Gipson’s VR directorial debut. He joined Walt Disney Animation Studios in 2013, serving as a lighting artist on Disney favorites like Frozen, Zootopia, and Moana. Of getting to direct the studio’s first VR short, he says, “VR is an amazing technology and a lot of times the technology is what is really celebrated. We hope more and more people begin to see the emotional weight of VR films, and with Cycles in particular, we hope they will feel the emotions we aimed to convey with our story.”

Apparently this is a still from the ‘short’,

Caption: Disney Animation Studios will present ‘Cycles’, its first virtual reality (VR) short, at ACM SIGGRAPH 2018. Credit: Disney Animation Studios

There’s also something (3) from Google as described in a July 26, 2018 Association for Computing Machinery news release on EurekAlert,

Google has unveiled a new virtual reality (VR) immersive experience based on a novel system that captures and renders high-quality, realistic images from the real world using light fields. Created by a team of leading researchers at Google, Welcome to Light Fields is the tech giant’s splash into the nascent arena of light fields VR experiences, an exciting corner of VR video technology gaining traction for its promise to deliver extremely high-quality imagery and experiences in the virtual world.

Google released Welcome to Light Fields earlier this year as a free app on Steam VR for HTC Vive, Oculus Rift, and Windows Mixed Reality headsets. The creators will demonstrate the VR experience at SIGGRAPH 2018, in the Immersive Pavilion, a new space for this year’s conference. The Pavilion is devoted exclusively to virtual, augmented, and mixed reality and will contain: the Vrcade, a space for VR, AR, and MR games or experiences; the VR Theater, a storytelling extravaganza that is part of the Computer Animation Festival; and the well-known Village, for showcasing large-scale projects. SIGGRAPH 2018, held 12-16 August in Vancouver, British Columbia, is an annual gathering that showcases the world’s leading professionals, academics, and creative minds at the forefront of computer graphics and interactive techniques.

Destinations in Welcome to Light Fields include NASA’s Space Shuttle Discovery, delivering to viewers an astronaut’s view inside the flight deck, which has never been open to the public; the pristine teak and mahogany interiors of the Gamble House, an architectural treasure in Pasadena, CA; and the glorious St. Stephen’s Church in Granada Hills, CA, home to a stunning wall of more than 14,000 pieces of glimmering stained glass.

“I love that light fields in VR can teleport you to exotic places in the real world, and truly make you believe you are there,” says Ryan Overbeck, software engineer at Google who co-led the project. “To me, this is magic.”

To bring this experience to life, Overbeck worked with a team that included Paul Debevec, senior staff engineer at Google, who managed the project and led the hardware piece with engineers Xueming Yu, Jay Busch, and Graham Fyffe. With Overbeck, Daniel Erickson and Daniel Evangelakos focused on the software end. The researchers designed a comprehensive system for capturing and rendering high-quality, spherical light field still images from footage captured in the real world. They developed two easy-to-use light field camera rigs, based on the GoPro Hero4 action sports camera, that efficiently capture thousands of images on the surface of a sphere. Those images were then passed through a cloud-based light-field-processing pipeline.

Among other things, explains Overbeck, “The processing pipeline uses computer vision to place the images in 3D and generate depth maps, and we use a modified version of our vp9 video codec to compress the light field data down to a manageable size.” To render a light field dataset, he notes, the team used a rendering algorithm that blends between the thousands of light field images in real-time.
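
The release only says the renderer “blends between the thousands of light field images in real-time.” One generic way to do such a blend (not necessarily Google’s method) is to weight the few captured images whose camera positions on the sphere lie closest to the requested viewing direction; everything in the sketch below, from the array shapes to the inverse-angle weighting, is illustrative.

```python
import numpy as np

def blend_light_field(view_dir, capture_dirs, images, k=4):
    """Blend the k captured images whose camera directions on the sphere are
    closest to the requested viewing direction, using inverse-angle weights.
    This is a generic light-field-style blend, not Google's actual renderer."""
    view_dir = view_dir / np.linalg.norm(view_dir)
    capture_dirs = capture_dirs / np.linalg.norm(capture_dirs, axis=1, keepdims=True)
    angles = np.arccos(np.clip(capture_dirs @ view_dir, -1.0, 1.0))
    nearest = np.argsort(angles)[:k]
    weights = 1.0 / (angles[nearest] + 1e-6)
    weights /= weights.sum()
    return np.tensordot(weights, images[nearest], axes=1)

# Example with 100 random capture directions and tiny dummy images.
rng = np.random.default_rng(0)
dirs = rng.normal(size=(100, 3))
imgs = rng.random((100, 4, 4, 3))
novel_view = blend_light_field(np.array([0.0, 0.0, 1.0]), dirs, imgs)
print(novel_view.shape)  # (4, 4, 3)
```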

The team relied on Google’s talented pool of engineers in computer vision, graphics, video compression, and machine learning to overcome the unique challenges posed in light fields technology. They also collaborated closely with the WebM team (who make the vp9 video codec) to develop the high-quality light field compression format incorporated into their system, and leaned heavily on the expertise of the Jump VR team to help pose the images and generate depth maps. (Jump is Google’s professional VR system for achieving 3D-360 video production at scale.)

Indeed, with Welcome to Light Fields, the Google team is demonstrating the potential and promise of light field VR technology, showcasing the technology’s ability to provide a truly immersive experience with an unmatched level of realism. Though light fields technology has been researched and explored in computer graphics for more than 30 years, practical systems for actually delivering high-quality light field experiences have not been possible until now.

Part of the team’s motivation behind creating this VR light field experience is to invigorate the nascent field.

“Welcome to Light Fields proves that it is now possible to make a compelling light field VR viewer that runs on consumer-grade hardware, and we hope that this knowledge will encourage others to get involved with building light field technology and media,” says Overbeck. “We understand that in order to eventually make compelling consumer products based on light fields, we need a thriving light field ecosystem. We need open light field codecs, we need artists creating beautiful light field imagery, and we need people using VR in order to engage with light fields.”

I don’t really understand why this image, which looks like something that belongs on advertising material, would be chosen to accompany a news release on a science-based distribution outlet,

Caption: A team of leading researchers at Google, will unveil the new immersive virtual reality (VR) experience “Welcome to Lightfields” at ACM SIGGRAPH 2018. Credit: Image courtesy of Google/Overbeck

Finally (4), ‘synthesizing realistic sounds’ is announced in an Aug. 6, 2018 Stanford University (US) news release (also on EurekAlert) by Taylor Kubota,

Advances in computer-generated imagery have brought vivid, realistic animations to life, but the sounds associated with what we see simulated on screen, such as two objects colliding, are often recordings. Now researchers at Stanford University have developed a system that automatically renders accurate sounds for a wide variety of animated phenomena.

“There’s been a Holy Grail in computing of being able to simulate reality for humans. We can animate scenes and render them visually with physics and computer graphics, but, as for sounds, they are usually made up,” said Doug James, professor of computer science at Stanford University. “Currently there exists no way to generate realistic synchronized sounds for complex animated content, such as splashing water or colliding objects, automatically. This fills that void.”

The researchers will present their work on this sound synthesis system as part of ACM SIGGRAPH 2018, the leading conference on computer graphics and interactive techniques. In addition to enlivening movies and virtual reality worlds, this system could also help engineering companies prototype how products would sound before being physically produced, and hopefully encourage designs that are quieter and less irritating, the researchers said.

“I’ve spent years trying to solve partial differential equations – which govern how sound propagates – by hand,” said Jui-Hsien Wang, a graduate student in James’ lab and in the Institute for Computational and Mathematical Engineering (ICME), and lead author of the paper. “This is actually a place where you don’t just solve the equation but you can actually hear it once you’ve done it. That’s really exciting to me and it’s fun.”

Predicting sound

Informed by geometry and physical motion, the system figures out the vibrations of each object and how, like a loudspeaker, those vibrations excite sound waves. It computes the pressure waves cast off by rapidly moving and vibrating surfaces but does not replicate room acoustics. So, although it does not recreate the echoes in a grand cathedral, it can resolve detailed sounds from scenarios like a crashing cymbal, an upside-down bowl spinning to a stop, a glass filling up with water or a virtual character talking into a megaphone.
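
The quantity being computed here is the acoustic pressure radiated by the vibrating surfaces. In its standard textbook form (not necessarily the exact formulation used in the Stanford system), the pressure p satisfies the linear acoustic wave equation, driven at each surface by the surface’s normal velocity v_n:

```latex
\nabla^{2} p \;-\; \frac{1}{c^{2}}\,\frac{\partial^{2} p}{\partial t^{2}} = 0,
\qquad
\left.\frac{\partial p}{\partial n}\right|_{\text{surface}} = -\rho_{0}\,\frac{\partial v_{n}}{\partial t},
```

where c is the speed of sound and ρ₀ the density of air. Solving this boundary-value problem for every animated, vibrating object is what replaces the pre-recorded sound clip.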

Most sounds associated with animations rely on pre-recorded clips, which require vast manual effort to synchronize with the action on-screen. These clips are also restricted to noises that exist – they can’t predict anything new. Other systems that produce and predict sounds as accurate as those of James and his team work only in special cases, or assume the geometry doesn’t deform very much. They also require a long pre-computation phase for each separate object.

“Ours is essentially just a render button with minimal pre-processing that treats all objects together in one acoustic wave simulation,” said Ante Qu, a graduate student in James’ lab and co-author of the paper.

The simulated sound that results from this method is highly detailed. It takes into account the sound waves produced by each object in an animation but also predicts how those waves bend, bounce or deaden based on their interactions with other objects and sound waves in the scene.

Challenges ahead

In its current form, the group’s process takes a while to create the finished product. But, now that they have proven this technique’s potential, they can focus on performance optimizations, such as implementing their method on parallel GPU hardware, that should make it drastically faster.

And, even in its current state, the results are worth the wait.

“The first water sounds we generated with the system were among the best ones we had simulated – and water is a huge challenge in computer-generated sound,” said James. “We thought we might get a little improvement, but it is dramatically better than previous approaches even right out of the box. It was really striking.”

Although the group’s work has faithfully rendered sounds of various objects spinning, falling and banging into each other, more complex objects and interactions – like the reverberating tones of a Stradivarius violin – remain difficult to model realistically. That, the group said, will have to wait for a future solution.

Timothy Langlois of Adobe Research is a co-author of this paper. This research was funded by the National Science Foundation and Adobe Research. James is also a professor, by courtesy, of music and a member of Stanford Bio-X.

Researchers Timothy Langlois, Doug L. James, Ante Qu and Jui-Hsien Wang have created a video featuring highlights of animations with sounds synthesized using the Stanford researchers’ new system.

The researchers have also provided this image,

By computing pressure waves cast off by rapidly moving and vibrating surfaces – such as a cymbal – a new sound synthesis system developed by Stanford researchers can automatically render realistic sound for computer animations. (Image credit: Timothy Langlois, Doug L. James, Ante Qu and Jui-Hsien Wang)

It does seem like we’re synthesizing the world around us, eh?

The SIGGRAPH 2018 art gallery

Here’s what SIGGRAPH had to say about its 2018 art gallery in Vancouver and the themes for the conference and the gallery (from a May 18, 2018 Association for Computing Machinery news release on globalnewswire.com, also on this 2018 SIGGRAPH webpage),

SIGGRAPH 2018, the world’s leading showcase of digital art created using computer graphics and interactive techniques, will present a special Art Gallery, entitled “Origins,” and historic Art Papers in Vancouver, B.C. The 45th SIGGRAPH conference will take place 12–16 August at the Vancouver Convention Centre. The programs will also honor the generations of creators that have come before through a special, 50th anniversary edition of the Leonardo journal. To register for the conference, visit S2018.SIGGRAPH.ORG.

The SIGGRAPH 2018 ART GALLERY is a curated exhibition, conceived as a dialogical space that enables the viewer to reflect on man’s diverse cultural values and rituals through contemporary creative practices. Building upon an exciting and eclectic selection of creative practices mediated through technologies that represent the sophistication of our times, the SIGGRAPH 2018 Art Gallery will embrace the narratives of the indigenous communities based near Vancouver and throughout Canada as a source of inspiration. The exhibition will feature contemporary media artworks, art pieces by indigenous communities, and other traces of technologically mediated Ludic practices.

Andrés Burbano, SIGGRAPH 2018 Art Gallery chair and professor at Universidad de los Andes, said, “The Art Gallery aims to articulate myth and technology, science and art, the deep past and the computational present, and will coalesce around a theme of ‘Origins.’ Media and technological creative expressions will explore principles such as the origins of the cosmos, the origins of life, the origins of human presence, the origins of the occupation of territories in the Americas, and the origins of people living in the vast territories of the Arctic.”

He continued, “The venue [in Vancouver] hopes to rekindle the original spark that ignited the collaborative spirit of the SIGGRAPH community of engineers, scientists, and artists, who came together to create the very first conference in the early 1970s.”

Highlights from the 2018 Art Gallery include:

Transformation Mask (Canada) [Technology Based]
Shawn Hunt, independent; and Microsoft Garage: Andy Klein, Robert Butterworth, Jonathan Cobb, Jeremy Kersey, Stacey Mulcahy, Brendan O’Rourke, Brent Silk, and Julia Taylor-Hell, Microsoft Vancouver

TRANSFORMATION MASK is an interactive installation that features the Microsoft HoloLens. It utilizes electronics and mechanical engineering to express a physical and digital transformation. Participants are immersed in spatial sounds and holographic visuals.

Somnium (U.S.) [Science Based]
Marko Peljhan, Danny Bazo, and Karl Yerkes, University of California, Santa Barbara

Somnium is a cybernetic installation that provides visitors with the ability to sensorily, cognitively, and emotionally contemplate and experience exoplanetary discoveries, their macro and micro dimensions, and the potential for life in our Galaxy. Some might call it a “space telescope.”

Ernest Edmonds Retrospective – Art Systems 1968-2018 (United Kingdom) [History Based]
Ernest Edmonds, De Montfort University

Celebrating one of the pioneers of computer graphics-based art since the early 1970s, this Ernest Edmonds career retrospective will showcase snapshots of Edmonds’ work as it developed over the years. With one piece from each decade, the retrospective will also demonstrate how vital the Leonardo journal has been throughout the 50-year journey.

In addition to the works above, the Art Gallery will feature pieces from notable female artists Ozge Samanci, Ruth West, and Nicole L’Hullier. For more information about the Edmonds retrospective, read THIS POST ON THE ACM SIGGRAPH BLOG.

The SIGGRAPH 2018 ART PAPERS program is designed to feature research from artists, scientists, theorists, technologists, historians, and more in one of four categories: project description, theory/criticism, methods, or history. The chosen work was selected by an international jury of scholars, artists, and immersive technology developers.

To celebrate the 50th anniversary of LEONARDO (MIT Press), and 10 years of its annual SIGGRAPH issue, SIGGRAPH 2018 is pleased to announce a special anniversary edition of the journal, which will feature the 2018 art papers. For 50 years, Leonardo has been the definitive publication for artist-academics. To learn more about the relationship between SIGGRAPH and the journal, listen to THIS EPISODE OF THE SIGGRAPH SPOTLIGHT PODCAST.

“In order to encourage a wider range of topics, we introduced a new submission type, short papers. This enabled us to accept more content than in previous years. Additionally, for the first time, we will introduce sessions that integrate the Art Gallery artist talks with Art Papers talks, promoting richer connections between these two creative communities,” said Angus Forbes, SIGGRAPH 2018 Art Papers chair and professor at University of California, Santa Cruz.

Art Papers highlights include:

Alienating the Familiar with CGI: A Recipe for Making a Full CGI Art House Animated Feature [Long]
Alex Counsell and Paul Charisse, University of Portsmouth

This paper explores the process of making and funding an art house feature film using full CGI in a marketplace where this has never been attempted. It explores cutting-edge technology and production approaches, as well as routes to successful fundraising.

Augmented Fauna and Glass Mutations: A Dialogue Between Material and Technique in Glassblowing and 3D Printing [Long]
Tobias Klein, City University of Hong Kong

The two presented artworks, “Augmented Fauna” and “Glass Mutations,” were created during an artist residence at the PILCHUCK GLASS SCHOOL. They are examples of the qualities and methods established through a synthesis between digital workflows and traditional craft processes and thus formulate the notion of digital craftsmanship.

Inhabitat: An Imaginary Ecosystem in a Children’s Science Museum [Short]
Graham Wakefield, York University, and Haru Hyunkyung Ji, OCAD University

“Inhabitat” is a mixed reality artwork in which participants become part of an imaginary ecology through three simultaneous perspectives of scale and agency; three distinct ways to see with other eyes. This imaginary world was exhibited at a children’s science museum for five months, using an interactive projection-augmented sculpture, a large screen and speaker array, and a virtual reality head-mounted display.

What’s the what?

My father used to say that and I always assumed it meant summarize the high points, if you need to, and get to the point—fast. In that spirit, I am both fascinated and mildly appalled. The virtual, mixed, and augmented reality technologies, as well as the others being featured at SIGGRAPH 2018, are wondrous in many ways but it seems we are coming ever closer to a world where we no longer interact with nature or other humans directly. (see my August 10, 2018 posting about the ‘extinction of experience’ for research that encourages more direct interaction with nature) I realize that SIGGRAPH is intended as a primarily technical experience but I think a little more content questioning these technologies and their applications (social implications) might be in order. That’s often the artist’s role but I can’t see anything in the art gallery descriptions that hints at any sort of fundamental critique.

The new knitting: electronics and batteries

Researchers from China have developed a new type of yarn for flexible electronics. A March 28, 2018 news item on Nanowerk announces the work (Note: A link has been removed),

When someone thinks about knitting, they usually don’t conjure up an image of sweaters and scarves made of yarn that can power watches and lights. But that’s just what one group is reporting in ACS Nano (“Waterproof and Tailorable Elastic Rechargeable Yarn Zinc Ion Batteries by a Cross-Linked Polyacrylamide Electrolyte”). They have developed a rechargeable yarn battery that is waterproof and flexible. It also can be cut into pieces and still work.

A March 28, 2018 American Chemical Society (ACS) news release (also on EurekAlert), which originated the news item, expands on the theme (Note: Links have been removed),

Most people are familiar with smartwatches, but for wearable electronics to progress, scientists will need to overcome the challenge of creating a device that is deformable, durable, versatile and wearable while still holding and maintaining a charge. One dimensional fiber or yarn has shown promise, since it is tiny, flexible and lightweight. Previous studies have had some success combining one-dimensional fibers with flexible Zn-MnO2 batteries, but many of these lose charge capacity and are not rechargeable. So, Chunyi Zhi and colleagues wanted to develop a rechargeable yarn zinc-ion battery that would maintain its charge capacity, while being waterproof and flexible.

The group twisted carbon nanotube fibers into a yarn, then coated one piece of yarn with zinc to form an anode, and another with manganese dioxide to form a cathode. These two pieces were then twisted like a double helix and coated with a polyacrylamide electrolyte and encased in silicone. Upon testing, the yarn zinc-ion battery was stable, had a high charge capacity and was rechargeable and waterproof. In addition, the material could be knitted and stretched. It also could be cut into several pieces, each of which could power a watch. In a proof-of-concept demonstration, eight pieces of the cut yarn battery were woven into a long piece that could power a belt containing 100 light emitting diodes (known as LEDs) and an electroluminescent panel.

The authors acknowledge funding from the National Natural Science Foundation of China and the Research Grants Council of Hong Kong Joint Research Scheme, City University of Hong Kong and the Sichuan Provincial Department of Science & Technology.

Here’s an image the researchers have used to illustrate their work,

 

Courtesy: American Chemical Society

Here’s a link to and a citation for the paper,

Waterproof and Tailorable Elastic Rechargeable Yarn Zinc Ion Batteries by a Cross-Linked Polyacrylamide Electrolyte by Hongfei Li, Zhuoxin Liu, Guojin Liang, Yang Huang, Yan Huang, Minshen Zhu, Zengxia Pe, Qi Xue, Zijie Tang, Yukun Wang, Baohua Li, and Chunyi Zhi. ACS Nano, Article ASAP DOI: 10.1021/acsnano.7b09003 Publication Date (Web): March 28, 2018

Copyright © 2018 American Chemical Society

This paper is behind a paywall.

Getting a more complete picture of aerosol particles at the nanoscale

What is in the air we breathe? In addition to the gases we learned about in school, there are particles, not just the dust particles you can see, but micro- and nanoparticles too, and scientists would like to know more about them.

An August 23, 2017 news item on Nanowerk features work which may help scientists in their quest,

They may be tiny and invisible, says Xiaoji Xu, but the aerosol particles suspended in gases play a role in cloud formation and environmental pollution and can be detrimental to human health.

Aerosol particles, which are found in haze, dust and vehicle exhaust, measure in the microns. One micron is one-millionth of a meter; a thin human hair is about 30 microns thick.

The particles, says Xu, are among the many materials whose chemical and mechanical properties cannot be fully measured until scientists develop a better method of studying materials at the microscale as well as the much smaller nanoscale (1 nm is one-billionth of a meter).

Xu, an assistant professor of chemistry, has developed such a method and utilized it to perform noninvasive chemical imaging of a variety of materials, as well as mechanical mapping with a spatial resolution of 10 nanometers.

The technique, called peak force infrared (PFIR) microscopy, combines spectroscopy and scanning probe microscopy. In addition to shedding light on aerosol particles, Xu says, PFIR will help scientists study micro- and nanoscale phenomena in a variety of inhomogeneous materials.

The lower portion of this image by Xiaoji Xu’s group shows the operational scheme of peak force infrared (PFIR) microscopy. The upper portion shows the topography of nanoscale PS-b-PMMA polymer islands on a gold substrate. (Image courtesy of Xiaoji Xu)

An August 22, 2017 Lehigh University news release by Kurt Pfitzer (also on EurekAlert), which originated the news item, explains the research in more detail (Note: A link has been removed),

“Materials in nature are rarely homogeneous,” says Xu. “Functional polymer materials often consist of nanoscale domains that have specific tasks. Cellular membranes are embedded with proteins that are nanometers in size. Nanoscale defects of materials exist that affect their mechanical and chemical properties.

“PFIR microscopy represents a fundamental breakthrough that will enable multiple innovations in areas ranging from the study of aerosol particles to the investigation of heterogeneous and biological materials,” says Xu.

Xu and his group recently reported their results in an article titled “Nanoscale simultaneous chemical and mechanical imaging via peak force infrared microscopy.” The article was published in Science Advances, a journal of the American Association for the Advancement of Science, which also publishes Science magazine.

The article’s lead author is Le Wang, a Ph.D. student at Lehigh. Coauthors include Xu and Lehigh Ph.D. students Haomin Wang and Devon S. Jakob, as well as Martin Wagner of Bruker Nano in Santa Barbara, Calif., and Yong Yan of the New Jersey Institute of Technology.

“PFIR microscopy enables reliable chemical imaging, the collection of broadband spectra, and simultaneous mechanical mapping in one simple setup with a spatial resolution of ~10 nm,” the group wrote.

“We have investigated three types of representative materials, namely, soft polymers, perovskite crystals and boron nitride nanotubes, all of which provide a strong PFIR resonance for unambiguous nanochemical identification. Many other materials should be suited as well for the multimodal characterization that PFIR microscopy has to offer.

“In summary, PFIR microscopy will provide a powerful analytical tool for explorations at the nanoscale across wide disciplines.”

Xu and Le Wang also published a recent article about the use of PFIR to study aerosols. Titled “Nanoscale spectroscopic and mechanical characterization of individual aerosol particles using peak force infrared microscopy,” the article appeared in an “Emerging Investigators” issue of Chemical Communications, a journal of the Royal Society of Chemistry. Xu was featured as one of the emerging investigators in the issue. The article was coauthored with researchers from the University of Macau and the City University of Hong Kong, both in China.

PFIR simultaneously obtains chemical and mechanical information, says Xu. It enables researchers to analyze a material at various places, and to determine its chemical compositions and mechanical properties at each of these places, at the nanoscale.

“A material is not often homogeneous,” says Xu. “Its mechanical properties can vary from one region to another. Biological systems such as cell walls are inhomogeneous, and so are materials with defects. The features of a cell wall measure about 100 nanometers in size, placing them well within range of PFIR and its capabilities.”

PFIR has several advantages over scanning near-field optical microscopy (SNOM), the current method of measuring material properties, says Xu. First, PFIR obtains a fuller infrared spectrum and a sharper image—6-nm spatial resolution—of a wider variety of materials than does SNOM. SNOM works well with inorganic materials, but does not obtain as strong an infrared signal as the Lehigh technique does from softer materials such as polymers or biological materials.

“Our technique is more robust,” says Xu. “It works better with soft materials, chemical as well as biological.”

The second advantage of PFIR is that it can perform what Xu calls point spectroscopy.

“If there is something of interest chemically on a surface,” Xu says, “I put an AFM [atomic force microscopy] probe to that location to measure the peak-force infrared response.

“It is very difficult to obtain these spectra with current scattering-type scanning near-field optical microscopy. It can be done, but it requires very expensive light sources. Our method uses a narrow-band infrared laser and costs about $100,000. The existing method uses a broadband light source and costs about $300,000.”

A third advantage, says Xu, is that PFIR obtains a mechanical as well as a chemical response from a material.

“No other spectroscopy method can do this,” says Xu. “Is a material rigid or soft? Is it inhomogeneous—is it soft in one area and rigid in another? How does the composition vary from the soft to the rigid areas? A material can be relatively rigid and have one type of chemical composition in one area, and be relatively soft with another type of composition in another area.

“Our method simultaneously obtains chemical and mechanical information. It will be useful for analyzing a material at various places and determining its compositions and mechanical properties at each of these places, at the nanoscale.”

A fourth advantage of PFIR is its size, says Xu.

“We use a table-top laser to get infrared spectra. Ours is a very compact light source, as opposed to the much larger sizes of competing light sources. Our laser is responsible for gathering information concerning chemical composition. We get mechanical information from the AFM [atomic force microscope]. We integrate the two types of measurements into one device to simultaneously obtain two channels of information.”

Although PFIR does not work with liquid samples, says Xu, it can measure the properties of dried biological samples, including cell walls and protein aggregates, achieving a 10-nm spatial resolution without staining or genetic modification.
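
To make the ‘two channels in one device’ idea a bit more concrete, here’s a minimal sketch of how co-registered chemical and mechanical maps might be compared once they’re off the instrument. This is my own illustration, not the researchers’ code; the array sizes and random data are placeholders for whatever a real PFIR setup would export.

```python
import numpy as np

# Hypothetical co-registered 256 x 256 maps of the same scan area:
# one chemical channel (IR absorption at a chosen wavenumber, arbitrary units)
# and one mechanical channel (modulus from the peak-force curve, GPa).
ir_map = np.random.rand(256, 256)
modulus_map = np.random.rand(256, 256)

# Segment comparatively rigid and soft regions from the mechanical channel,
# then compare the average chemical signal in each region.
rigid = modulus_map > np.percentile(modulus_map, 75)
soft = modulus_map < np.percentile(modulus_map, 25)

print(f"Mean IR absorption in rigid regions: {ir_map[rigid].mean():.3f}")
print(f"Mean IR absorption in soft regions:  {ir_map[soft].mean():.3f}")
```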

This looks like very exciting work.

Here are links and citations for both studies mentioned in the news release (the most recently published being cited first),

Nanoscale simultaneous chemical and mechanical imaging via peak force infrared microscopy by Le Wang, Haomin Wang, Martin Wagner, Yong Yan, Devon S. Jakob, and Xiaoji G. Xu. Science Advances 23 Jun 2017: Vol. 3, no. 6, e1700255 DOI: 10.1126/sciadv.1700255

Nanoscale spectroscopic and mechanical characterization of individual aerosol particles using peak force infrared microscopy by Le Wang, Dandan Huang, Chak K. Chan, Yong Jie Li, and Xiaoji G. Xu. Chem. Commun., 2017, 53, 7397-7400 DOI: 10.1039/C7CC02301D First published on 16 Jun 2017

The June 23, 2017 paper is open access while the June 16, 2017 paper is behind a paywall.

Pancake bounce

What impact does a droplet make on a solid surface? It’s not the first question that comes to my mind, but scientists have been studying it for over a century. From an Aug. 5, 2015 news item on Nanowerk (Note: A link has been removed),

Studies of the impact a droplet makes on solid surfaces hark back more than a century. And until now, it was generally believed that a droplet’s impact on a solid surface could always be separated into two phases: spreading and retracting. But it’s much more complex than that, as a team of researchers from City University of Hong Kong, Ariel University in Israel, and Dalian University of Technology in China report in the journal Applied Physics Letters, from AIP Publishing (“Controlling drop bouncing using surfaces with gradient features”).

An Aug. 4, 2015 American Institute of Physics news release (also on EurekAlert), which originated the news item, describes the impact in detail,

“During the spreading phase, the droplet undergoes an inertia-dominant acceleration and spreads into a ‘pancake’ shape,” explained Zuankai Wang, an associate professor within the Department of Mechanical and Biomedical Engineering at the City University of Hong Kong. “And during the retraction phase, the drop minimizes its surface energy and pulls back inward.”

Remarkably, on gold-standard superhydrophobic (a.k.a. water-repellent) surfaces such as lotus leaves, droplets jump off at the end of the retraction stage due to the minimal energy dissipation during the impact process. This is attributed to the presence of an air cushion within the rough surface.

There is, however, a classical limit on the contact time between droplets and these gold-standard superhydrophobic materials inspired by lotus leaves.

As the team previously reported in the journal Nature Physics, it’s possible to shape the droplet to bounce from the surface in a pancake shape directly at the end of the spreading stage without going through the receding process. As a result, the droplet can be shed away much faster.

“Interestingly, the contact time is constant under a wide range of impact velocities,” said Wang. “In other words: the contact time reduction is very efficient and robust, so the novel surface behaves like an elastic spring. But the real magic lies within the surface texture itself.”

To prevent the air cushion from collapsing or water from penetrating into the surface, conventional wisdom suggests using nanoscale posts with small inter-post spacings. “The smaller the inter-post spacing, the greater the impact velocity the posts can withstand,” he elaborated. “By contrast, designing a surface with macrostructures (tapered sub-millimeter post arrays with wide spacing) means that a droplet will shed from it much faster than from any previously engineered material.”

What the New Results Show

Despite this exciting progress, rationally controlling the contact time and quantitatively predicting the critical Weber number (a number used in fluid mechanics to describe the ratio of a droplet’s deforming inertial forces to its stabilizing cohesive, or surface tension, forces) for the occurrence of pancake bouncing remained elusive.

So the team experimentally demonstrated that the drop bouncing is intricately influenced by the surface morphology. “Under the same center-to-center post spacing, surfaces with a larger apex angle can give rise to more pancake bouncing, which is characterized by a significant contact time reduction, smaller critical Weber number, and a wider Weber number range,” according to co-authors Gene Whyman and Edward Bormashenko, both professors at Ariel University.

Wang and colleagues went on to develop simple harmonic spring models to theoretically reveal the dependence of timescales associated with the impinging drop and the critical Weber number for pancake bouncing on the surface morphology. “The insights gained from this work will allow us to rationally design various surfaces for many practical applications,” he added.
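
Two quantities in the passage above lend themselves to a quick back-of-the-envelope check: the Weber number (a droplet’s inertia relative to its surface tension) and the spring-like bounce timescale. The sketch below is my own illustration with placeholder values, not the authors’ model; the real contact time carries an order-one prefactor, and the critical Weber numbers reported in the paper depend on the post geometry.

```python
import math

def weber_number(density, velocity, diameter, surface_tension):
    """We = rho * v^2 * D / sigma: deforming inertia relative to surface tension."""
    return density * velocity**2 * diameter / surface_tension

def inertiocapillary_time(density, radius, surface_tension):
    """Spring-model timescale, tau ~ sqrt(rho * R^3 / sigma): the drop behaves
    like a mass on a spring whose stiffness is set by surface tension."""
    return math.sqrt(density * radius**3 / surface_tension)

# Illustrative numbers for a millimetre-scale water droplet (not from the paper)
rho, sigma = 1000.0, 0.072          # kg/m^3, N/m
d, v = 2.8e-3, 1.0                  # diameter (m), impact velocity (m/s)

print(f"Weber number: {weber_number(rho, v, d, sigma):.0f}")
print(f"Bounce timescale: {inertiocapillary_time(rho, d / 2, sigma) * 1e3:.1f} ms")
```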

The team’s novel surfaces feature a shortened contact time that prevents or slows ice formation. “Ice formation and its subsequent buildup hinder the operation of modern infrastructures–including aircraft, offshore oil platforms, air conditioning systems, wind turbines, power lines, and telecommunications equipment,” Wang said.

At supercooled temperatures (that is, when a liquid or gas has been cooled below its freezing point without solidifying), the longer a droplet remains in contact with a surface before bouncing off, the greater the chance that it freezes in place. “Our new surface structure can be used to help prevent aircraft wings and engines from icing,” he said.

This is highly desirable, because even a very light coating of snow or ice, light enough to be barely visible, is known to reduce the performance of airplanes and can even cause crashes. One such disaster occurred in 2009, when in-flight icing contributed to the crash of Air France Flight 447, flying from Rio de Janeiro to Paris, into the Atlantic Ocean, an accident that called renewed attention to the dangers of icing.

Beyond anti-icing for aircraft, “turbine blades in power stations and wind farms can also benefit from an anti-icing surface by gaining a boost in efficiency,” he added.

As you can imagine, this type of nature-inspired surface shows potential for a tremendous range of other applications as well–everything from water and oil separation to disease transmission prevention.

The next step for the team? To “develop bioinspired ‘active’ materials that are adaptive to their environments and capable of self-healing,” said Wang.

Here’s a link to and a citation for the paper,

Controlling drop bouncing using surfaces with gradient features by Yahua Liu, Gene Whyman, Edward Bormashenko, Chonglei Hao, and Zuankai Wang. Appl. Phys. Lett. 107, 051604 (2015); http://dx.doi.org/10.1063/1.4927055

This paper appears to be open access.

Finally, here’s an illustration of the pancake bounce,

Droplet hitting tapered posts shows “pancake” bouncing, characterized by lifting off the surface at the end of spreading without retraction. Credit: Z. Wang/HKU

There is also a pancake bounce video which you can view here on EurekAlert.

Call for papers: IEEE [Institute of Electrical and Electronics Engineers] 10th annual NEMS conference in 2015

The deadline for submissions is Nov. 15, 2014, and here’s more from the notice on the IEEE [Institute of Electrical and Electronics Engineers] website for the IEEE-NEMS [nano/micro engineered and molecular systems] 2015,

The 10th Annual IEEE International Conference on Nano/ Micro Engineered and Molecular Systems (IEEE-NEMS 2015)
Xi’an, China
April 7-11, 2015
http://www.ieee-nems.org/2015/

The IEEE International Conference on Nano/Micro Engineered and Molecular Systems (IEEE-NEMS) is a series of successful conferences that began in Zhuhai, China in 2006, and has been a premier IEEE annual conference series held mostly in Asia which focuses on MEMS, nanotechnology, and molecular technology. Prior conferences were held in Waikiki Beach (USA, 2014), Suzhou (China, 2013), Kyoto (Japan, 2012), Kaohsiung (Taiwan, 2011), Xiamen (China, 2010), Shenzhen (China, 2009), Hainan Island (China, 2008), Bangkok (Thailand, 2007), and Zhuhai (China, 2006). The conference typically has ~350 attendees with participants from more than 20 countries and regions world-wide.

In 2015, the conference will be held in Xi’an, one of the great ancient capitals of China. Xi’an has more than 3,100 years of history, and was known as Chang’an before the Ming dynasty. Xi’an is the starting point of the Silk Road and home to the Terracotta Army of Emperor Qin Shi Huang.

We now invite contributions describing the latest scientific and technological research results in subjects including, but not limited to:

  • Nanophotonics
  • Nanomaterials
  • Nanobiology, Nanomedicine, Nano-bio-informatics
  • Micro/Nano Fluidics, BioMEMS, and Lab-on-Chips
  • Molecular Sensors, Actuators, and Systems
  • Micro/Nano Sensors, Actuators, and Systems
  • Carbon Nanotube/Graphene/Diamond based Devices
  • Micro/Nano/Molecular Heat Transfer & Energy Conversion
  • Micro/Nano/Molecular Fabrication
  • Nanoscale Metrology
  • Micro/Nano Robotics, Assembly & Automation
  • Integration & Application of MEMS/NEMS
  • Flexible MEMS, Sensors and Printed Electronics
  • Commercialization of MEMS/NEMS/Nanotechnology
  • Nanotechnology Safety and Education

Important Dates:

Nov. 15, 2014 – Abstract/Full Paper Submission
Dec. 31, 2014 – Notification of Acceptance
Jan. 31, 2015 – Final Full Paper Submission

We hope to see you at Xi’an, China, in April 2015!

General Chair: Ning Xi, Michigan State University, USA
Program Chair: Guangyong Li, University of Pittsburgh, USA
Organizing Chair: Wen J. Li, City University of Hong Kong, Hong Kong
Local Arrangement Chair: Xiaodong Zhang, Xi’an Jiaotong University, China

The 2015 IEEE-NEMS webpage offers more general information about the conference,

The IEEE-NEMS is a key conference series sponsored by the IEEE Nanotechnology Council focusing on advanced research areas related to MEMS, nanotechnology, and molecular technology. … The conference typically has ~350 attendees with participants from more than 20 countries and regions world-wide.

Good luck!

Things falling apart: both a Nigerian novel and research at the Massachusetts Institute of Technology

First the Nigerian novel ‘Things Fall Apart‘ (from its Wikipedia entry; Note: Links have been removed),

Things Fall Apart is an English-language novel by Nigerian author Chinua Achebe published in 1958 by William Heinemann Ltd in the UK; in 1962, it was also the first work published in Heinemann’s African Writers Series. Things Fall Apart is seen as the archetypal modern African novel in English, one of the first to receive global critical acclaim. It is a staple book in schools throughout Africa and is widely read and studied in English-speaking countries around the world. The title of the novel comes from William Butler Yeats’ poem “The Second Coming”.[1]

For those unfamiliar with the Yeats poem, this is the relevant passage (from the Wikipedia entry for The Second Coming),

Turning and turning in the widening gyre
The falcon cannot hear the falconer;
Things fall apart; the centre cannot hold;
Mere anarchy is loosed upon the world,
The blood-dimmed tide is loosed, and everywhere
The ceremony of innocence is drowned;
The best lack all conviction, while the worst
Are full of passionate intensity.

The other ‘Things fall apart’ item, although it’s an investigation into ‘how things fall apart’, is mentioned in an Aug. 4, 2014 news item on Nanowerk,

Materials that are firmly bonded together with epoxy and other tough adhesives are ubiquitous in modern life — from crowns on teeth to modern composites used in construction. Yet it has proved remarkably difficult to study how these bonds fracture and fail, and how to make them more resistant to such failures.

Now researchers at MIT [Massachusetts Institute of Technology] have found a way to study these bonding failures directly, revealing the crucial role of moisture in setting the stage for failure. Their findings are published in the journal Proceedings of the National Academy of Sciences (PNAS) in a paper by MIT professors of civil and environmental engineering Oral Buyukozturk and Markus Buehler; research associate Kurt Broderick of MIT’s Microsystems Technology Laboratories; and doctoral student Denvid Lau, who has since joined the faculty at the City University of Hong Kong.

An Aug. 4, 2014 MIT news release written by David Chandler (also on EurekAlert), which originated the news item, provides an unexpectedly fascinating discussion of bonding, interfaces, and infrastructure,

“The bonding problem is a general problem that is encountered in many disciplines, especially in medicine and dentistry,” says Buyukozturk, whose research has focused on infrastructure, where such problems are also of great importance. “The interface between a base material and epoxy, for example, really controls the properties. If the interface is weak, you lose the entire system.”

“The composite may be made of a strong and durable material bonded to another strong and durable material,” Buyukozturk adds, “but where you bond them doesn’t necessarily have to be strong and durable.”

Besides dental implants and joint replacements, such bonding is also critical in construction materials such as fiber-reinforced polymers and reinforced concrete. But while such materials are widespread, understanding how they fail is not simple.

There are standard methods for testing the strength of materials and how they may fail structurally, but bonded surfaces are more difficult to model. “When we are concerned with deterioration of this interface when it is degraded by moisture, classical methods can’t handle that,” Buyukozturk says. “The way to approach it is to look at the molecular level.”

When such systems are exposed to moisture, “it initiates new molecules at the interface,” Buyukozturk says, “and that interferes with the bonding mechanism. How do you assess how weak the interface becomes when it is affected? We came up with an innovative method to assess the interface weakening as a result of exposure to environmental effects.”

The team used a combination of molecular simulations and laboratory tests in its assessment. The modeling was based on fundamental principles of molecular interactions, not on empirical data, Buyukozturk says.

In the laboratory tests, Buyukozturk and his colleagues controlled the residual stresses in a metal layer that was bonded and then forcibly removed. “We validated the method, and showed that moisture has a degrading effect,” he says.

The findings could lead to exploration of new ways to prevent moisture from reaching into the bonded layer, perhaps using better sealants. “Moisture is the No. 1 enemy,” Buyukozturk says.

“I think this is going to be an important step toward assessment of the bonding, and enable us to design more durable composites,” he adds. “It gives a quantitative knowledge of the interface” — for example, predicting that under specific conditions, a given bonded material will lose 30 percent of its strength.
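
As a rough illustration of how a fracture-energy number can be turned into a strength estimate (my own sketch, not the MIT team’s method), a Griffith-type scaling says the critical stress goes as the square root of the interface fracture energy, so a moisture-degraded interface that keeps only half its fracture energy would retain roughly 70 per cent of its strength:

```python
import math

def strength_retained(g_wet, g_dry):
    """Griffith-type scaling: critical stress ~ sqrt(fracture energy),
    so the fraction of strength retained is sqrt(G_wet / G_dry)."""
    return math.sqrt(g_wet / g_dry)

# Hypothetical fracture energies (J/m^2) for a dry and a moisture-degraded interface
loss = 1.0 - strength_retained(g_wet=0.5, g_dry=1.0)
print(f"Strength lost: {loss * 100:.0f}%")   # ~29%
```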

Interface problems are universal, Buyukozturk says, occurring in many areas besides biomedicine and construction. “They occur in mechanical devices, in aircraft, electrical equipment, in the packaging of electronic components,” he says. “We feel this will have very broad applications.”

Bonded composite materials are beginning to be widely used in airplane manufacturing; often these composites are then bonded to traditional materials, like aluminum. “We have not had enough experience to prove the durability of these composite systems is going to be there after 20 years,” Buyukozturk says.

Here’s a link to and a citation for the research paper,

A robust nanoscale experimental quantification of fracture energy in a bilayer material system by Denvid Lau, Kurt Broderick, Markus J. Buehler, and Oral Büyüköztürk. PNAS, published August 5, 2014. DOI: 10.1073/pnas.1402893111

This paper is behind a paywall.