Tag Archives: Hong Kong

SIGGRAPH (Special Interest Group on Computer GRAPHics and Interactive Techniques) and their art gallery from Aug. 12 – 16, 2018 (making the world synthetic) in Vancouver (Canada)

While my main interest is the group’s temporary art gallery, I am providing a brief explanatory introduction and a few previews for SIGGRAPH 2018.

Introduction

For anyone unfamiliar with the Special Interest Group on Computer GRAPHics and Interactive Techniques (SIGGRAPH) and its conferences, from the SIGGRAPH Wikipedia entry (Note: Links have been removed),

Some highlights of the conference are its Animation Theater and Electronic Theater presentations, where recently created CG films are played. There is a large exhibition floor, where several hundred companies set up elaborate booths and compete for attention and recruits. Most of the companies are in the engineering, graphics, motion picture, or video game industries. There are also many booths for schools which specialize in computer graphics or interactivity.

Dozens of research papers are presented each year, and SIGGRAPH is widely considered the most prestigious forum for the publication of computer graphics research.[1] The recent paper acceptance rate for SIGGRAPH has been less than 26%.[2] The submitted papers are peer-reviewed in a single-blind process.[3] There has been some criticism about the preference of SIGGRAPH paper reviewers for novel results rather than useful incremental progress.[4][5] …

This is the third SIGGRAPH Vancouver has hosted; the others were in 2011 and 2014. The theme for the 2018 iteration is ‘Generations’; here’s more about it from an Aug. 2, 2018 article by Terry Flores for Variety,

While its focus is firmly forward thinking, SIGGRAPH 2018, the computer graphics, animation, virtual reality, games, digital art, mixed reality, and emerging technologies conference, is also tipping its hat to the past thanks to its theme this year: Generations. The conference runs Aug. 12-16 in Vancouver, B.C.

“In the literal people sense, pioneers in the computer graphics industry are standing shoulder to shoulder with researchers, practitioners and the future of the industry — young people — mentoring them, dabbling across multiple disciplines to innovate, relate, and grow,” says SIGGRAPH 2018 conference chair Roy C. Anthony, VP of creative development and operations at software and technology firm Ventuz. “This is really what SIGGRAPH has always been about. Generations really seemed like a very appropriate way of looking back and remembering where we all came from and how far we’ve come.”

SIGGRAPH 2018 has a number of treats in store for attendees, including the debut of Disney’s first VR film, the short “Cycles”; production sessions on the making of “Blade Runner 2049,” “Game of Thrones,” “Incredibles 2” and “Avengers: Infinity War”; as well as sneak peeks of Disney’s upcoming “Ralph Breaks the Internet: Wreck-It Ralph 2” and Laika’s “Missing Link.”

That list of ‘treats’ in the last paragraph makes the conference seem more like an iteration of a ‘comic-con’ than a technology conference.

Previews

I have four items about work that will be presented at SIGGRAPH 2018. First up, something about ‘redirected walking’ from a June 18, 2018 Association for Computing Machinery news release on EurekAlert,

CHICAGO–In the burgeoning world of virtual reality (VR) technology, it remains a challenge to provide users with a realistic perception of infinite space and natural walking capabilities in the virtual environment. A team of computer scientists has introduced a new approach to address this problem by leveraging a natural human phenomenon: eye blinks.

All humans are functionally blind for about 10 percent of the time under normal circumstances due to eye blinks and saccades, a rapid movement of the eye between two points or objects. Eye blinks are a common and natural cause of so-called “change blindness,” which indicates the inability for humans to notice changes to visual scenes. Zeroing in on eye blinks and change blindness, the team has devised a novel computational system that effectively redirects the user in the virtual environment during these natural instances, all with undetectable camera movements to deliver orientation redirection.

“Previous RDW [redirected walking] techniques apply rotations continuously while the user is walking. But the amount of unnoticeable rotations is limited,” notes Eike Langbehn, lead author of the research and doctoral candidate at the University of Hamburg. “That’s why an orthogonal approach is needed–we add some additional rotations when the user is not focused on the visuals. When we learned that humans are functionally blind for some time due to blinks, we thought, ‘Why don’t we do the redirection during eye blinks?'”

Human eye blinks occur approximately 10 to 20 times per minute, about every 4 to 19 seconds. Leveraging this window of opportunity–where humans are unable to detect major motion changes while in a virtual environment–the researchers devised an approach to synchronize a computer graphics rendering system with this visual process, and introduce any useful motion changes in virtual scenes to enhance users’ overall VR experience.

The researchers’ experiments revealed that imperceptible camera rotations of 2 to 5 degrees and translations of 4 to 9 cm of the user’s viewpoint are possible during a blink without users even noticing. They tracked test participants’ eye blinks by an eye tracker in a VR head-mounted display. In a confirmatory study, the team validated that participants could not reliably detect in which of two eye blinks their viewpoint was manipulated while walking a VR curved path. The tests relied on unconscious natural eye blinking, but the researchers say redirection during blinking could be carried out consciously. Since users can consciously blink multiple times a day without much effort, eye blinks provide great potential to be used as an intentional trigger in their approach.
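The blink-gated redirection described above can be sketched in a few lines of code. This is an illustrative toy, not the researchers’ implementation: the function name and camera representation are my assumptions, and only the per-blink budgets (roughly 5 degrees of rotation and 9 cm of translation) come from the figures quoted above.

```python
import math

# Per-blink limits reported in the study (used here as tunable constants):
# up to ~5 degrees of imperceptible rotation and ~9 cm of translation.
MAX_ROT_DEG = 5.0
MAX_TRANS_M = 0.09

def redirect_on_blink(camera_yaw_deg, camera_pos, target_yaw_deg, target_pos):
    """Apply one redirection step during a detected blink, clamped so the
    change stays within the imperceptibility budget. Hypothetical helper."""
    # Clamp the yaw correction to the imperceptible range.
    yaw_err = target_yaw_deg - camera_yaw_deg
    yaw_step = max(-MAX_ROT_DEG, min(MAX_ROT_DEG, yaw_err))
    # Clamp the positional correction (2D ground plane, metres).
    dx = target_pos[0] - camera_pos[0]
    dz = target_pos[1] - camera_pos[1]
    dist = math.hypot(dx, dz)
    scale = 1.0 if dist <= MAX_TRANS_M else MAX_TRANS_M / dist
    new_pos = (camera_pos[0] + dx * scale, camera_pos[1] + dz * scale)
    return camera_yaw_deg + yaw_step, new_pos
```

In a real system, something like this would be triggered by the eye tracker’s blink event, accumulating small corrections blink by blink.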

The team will present their work at SIGGRAPH 2018, held 12-16 August in Vancouver, British Columbia. The annual conference and exhibition showcases the world’s leading professionals, academics, and creative minds at the forefront of computer graphics and interactive techniques.

“RDW is a big challenge since current techniques still need too much space to enable unlimited walking in VR,” notes Langbehn. “Our work might contribute to a reduction of space since we found out that unnoticeable rotations of up to five degrees are possible during blinks. This means we can improve the performance of RDW by approximately 50 percent.”

The team’s results could be used in combination with other VR research, such as novel steering algorithms, improved path prediction, and rotations during saccades, to name a few. Down the road, such techniques could some day enable consumer VR users to virtually walk beyond their living room.

Langbehn collaborated on the work with Frank Steinicke of University of Hamburg, Markus Lappe of University of Muenster, Gregory F. Welch of University of Central Florida, and Gerd Bruder, also of University of Central Florida. For the full paper and video, visit the team’s project page.

###

About ACM, ACM SIGGRAPH, and SIGGRAPH 2018

ACM, the Association for Computing Machinery, is the world’s largest educational and scientific computing society, uniting educators, researchers, and professionals to inspire dialogue, share resources, and address the field’s challenges. ACM SIGGRAPH is a special interest group within ACM that serves as an interdisciplinary community for members in research, technology, and applications in computer graphics and interactive techniques. SIGGRAPH is the world’s leading annual interdisciplinary educational experience showcasing the latest in computer graphics and interactive techniques. SIGGRAPH 2018, marking the 45th annual conference hosted by ACM SIGGRAPH, will take place from 12-16 August at the Vancouver Convention Centre in Vancouver, B.C.

They have provided an image illustrating what they mean (I don’t find it especially informative),

Caption: The viewing behavior of a virtual reality user, including fixations (in green) and saccades (in red). A blink fully suppresses visual perception. Credit: Eike Langbehn

Next up (2), there’s Disney Corporation’s first virtual reality (VR) short, from a July 19, 2018 Association for Computing Machinery news release on EurekAlert,

Walt Disney Animation Studios will debut its first ever virtual reality short film at SIGGRAPH 2018, and the hope is viewers will walk away feeling connected to the characters as equally as they will with the VR technology involved in making the film.

Cycles, an experimental film directed by Jeff Gipson, centers around the true meaning of creating a home and the life it holds inside its walls. The idea for the film is personal, inspired by Gipson’s childhood spending time with his grandparents and creating memories in their home, and later, having to move them to an assisted living residence.

“Every house has a story unique to the people, the characters who live there,” says Gipson. “We wanted to create a story in this single place and be able to have the viewer witness life happening around them. It is an emotionally driven film, expressing the real ups and downs, the happy and sad moments in life.”

For Cycles, Gipson also drew from his past life as an architect, having spent several years designing skate parks, and from his passion for action sports, including freestyle BMX. In Los Angeles, where Gipson lives, it is not unusual to find homes with an empty swimming pool reserved for skating or freestyle biking. Part of the pitch for Cycles came out of Gipson’s experience riding in these empty pools and being curious about the homes attached to them, the families who lived there, and the memories they made.

SIGGRAPH attendees will have the opportunity to experience Cycles at the Immersive Pavilion, a new space for this year’s conference. The Pavilion is devoted exclusively to virtual, augmented, and mixed reality and will contain: the VR Theater, a storytelling extravaganza that is part of the Computer Animation Festival; the Vrcade, a space for VR, AR, and MR games or experiences; and the well-known Village, for showcasing large-scale projects. SIGGRAPH 2018, held 12-16 August in Vancouver, British Columbia, is an annual gathering that showcases the world’s leading professionals, academics, and creative minds at the forefront of computer graphics and interactive techniques.

The production team completed Cycles in four months with about 50 collaborators as part of a professional development program at the studio. A key difference in VR filmmaking includes getting creative with how to translate a story to the VR “screen.” Pre-visualizing the narrative, for one, was a challenge. Rather than traditional storyboarding, Gipson and his team instead used a mix of Quill VR painting techniques and motion capture to “storyboard” Cycles, incorporating painters and artists to generate sculptures or 3D models of characters early on and draw scenes for the VR space. The creators also got innovative with the use of light and color saturation in scenes to help guide the user’s eyes during the film.

“What’s cool for VR is that we are really on the edge of trying to figure out what it is and how to tell stories in this new medium,” says Gipson. “In VR, you can look anywhere and really be transported to a different world, experience it from different angles, and see every detail. We want people watching to feel alive and feel emotion, and give them a true cinematic experience.”

This is Gipson’s VR directorial debut. He joined Walt Disney Animation Studios in 2013, serving as a lighting artist on Disney favorites like Frozen, Zootopia, and Moana. Of getting to direct the studio’s first VR short, he says, “VR is an amazing technology and a lot of times the technology is what is really celebrated. We hope more and more people begin to see the emotional weight of VR films, and with Cycles in particular, we hope they will feel the emotions we aimed to convey with our story.”

Apparently this is a still from the ‘short’,

Caption: Disney Animation Studios will present ‘Cycles’, its first virtual reality (VR) short, at ACM SIGGRAPH 2018. Credit: Disney Animation Studios

There’s also something (3) from Google as described in a July 26, 2018 Association for Computing Machinery news release on EurekAlert,

Google has unveiled a new virtual reality (VR) immersive experience based on a novel system that captures and renders high-quality, realistic images from the real world using light fields. Created by a team of leading researchers at Google, Welcome to Light Fields is the tech giant’s splash into the nascent arena of light fields VR experiences, an exciting corner of VR video technology gaining traction for its promise to deliver extremely high-quality imagery and experiences in the virtual world.

Google released Welcome to Light Fields earlier this year as a free app on Steam VR for HTC Vive, Oculus Rift, and Windows Mixed Reality headsets. The creators will demonstrate the VR experience at SIGGRAPH 2018, in the Immersive Pavilion, a new space for this year’s conference. The Pavilion is devoted exclusively to virtual, augmented, and mixed reality and will contain: the Vrcade, a space for VR, AR, and MR games or experiences; the VR Theater, a storytelling extravaganza that is part of the Computer Animation Festival; and the well-known Village, for showcasing large-scale projects. SIGGRAPH 2018, held 12-16 August in Vancouver, British Columbia, is an annual gathering that showcases the world’s leading professionals, academics, and creative minds at the forefront of computer graphics and interactive techniques.

Destinations in Welcome to Light Fields include NASA’s Space Shuttle Discovery, delivering to viewers an astronaut’s view inside the flight deck, which has never been open to the public; the pristine teak and mahogany interiors of the Gamble House, an architectural treasure in Pasadena, CA; and the glorious St. Stephen’s Church in Granada Hills, CA, home to a stunning wall of more than 14,000 pieces of glimmering stained glass.

“I love that light fields in VR can teleport you to exotic places in the real world, and truly make you believe you are there,” says Ryan Overbeck, software engineer at Google who co-led the project. “To me, this is magic.”

To bring this experience to life, Overbeck worked with a team that included Paul Debevec, senior staff engineer at Google, who managed the project and led the hardware piece with engineers Xueming Yu, Jay Busch, and Graham Fyffe. With Overbeck, Daniel Erickson and Daniel Evangelakos focused on the software end. The researchers designed a comprehensive system for capturing and rendering high-quality, spherical light field still images from footage captured in the real world. They developed two easy-to-use light field camera rigs, based on the GoPro Hero4 action sports camera, that efficiently capture thousands of images on the surface of a sphere. Those images were then passed through a cloud-based light-field-processing pipeline.
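As a rough illustration of what capturing thousands of images evenly over a sphere entails, the golden-angle (Fibonacci) spiral is a standard way to generate such a sampling pattern. The release does not describe the rig’s actual geometry, so treat this purely as a sketch of the idea.

```python
import math

def fibonacci_sphere(n):
    """Generate n roughly evenly spaced unit directions on a sphere, the
    kind of uniform spherical coverage a light-field camera rig needs."""
    golden = math.pi * (3.0 - math.sqrt(5.0))  # golden angle in radians
    pts = []
    for i in range(n):
        y = 1.0 - 2.0 * (i + 0.5) / n          # even spacing in height
        r = math.sqrt(max(0.0, 1.0 - y * y))   # circle radius at this height
        theta = golden * i                     # spiral around the vertical axis
        pts.append((r * math.cos(theta), y, r * math.sin(theta)))
    return pts
```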

Among other things, explains Overbeck, “The processing pipeline uses computer vision to place the images in 3D and generate depth maps, and we use a modified version of our vp9 video codec to compress the light field data down to a manageable size.” To render a light field dataset, he notes, the team used a rendering algorithm that blends between the thousands of light field images in real-time.
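The release does not spell out the blending algorithm, but the idea of blending between nearby captured views can be conveyed with a generic stand-in: pick the k samples whose capture directions best match the query ray and weight them by inverse angular distance. All names and the weighting scheme below are illustrative assumptions, not Google’s method.

```python
import numpy as np

def blend_light_field(sample_dirs, sample_colors, query_dir, k=4, eps=1e-6):
    """Toy light-field lookup: blend the k captured samples whose directions
    are closest to the query ray, weighted by inverse angular distance."""
    dirs = np.asarray(sample_dirs, dtype=float)
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    q = np.asarray(query_dir, dtype=float)
    q /= np.linalg.norm(q)
    ang = np.arccos(np.clip(dirs @ q, -1.0, 1.0))  # angle to each sample
    nearest = np.argsort(ang)[:k]                  # k best-matching views
    w = 1.0 / (ang[nearest] + eps)                 # inverse-distance weights
    w /= w.sum()
    return w @ np.asarray(sample_colors, dtype=float)[nearest]
```

A production renderer does this per pixel, in real time, against thousands of images; the sketch only shows the weighting logic.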

The team relied on Google’s talented pool of engineers in computer vision, graphics, video compression, and machine learning to overcome the unique challenges posed in light fields technology. They also collaborated closely with the WebM team (who make the vp9 video codec) to develop the high-quality light field compression format incorporated into their system, and leaned heavily on the expertise of the Jump VR team to help pose the images and generate depth maps. (Jump is Google’s professional VR system for achieving 3D-360 video production at scale.)

Indeed, with Welcome to Light Fields, the Google team is demonstrating the potential and promise of light field VR technology, showcasing the technology’s ability to provide a truly immersive experience with a level of unmatched realism. Though light fields technology has been researched and explored in computer graphics for more than 30 years, practical systems for delivering high-quality light field experiences have not been possible until now.

Part of the team’s motivation behind creating this VR light field experience is to invigorate the nascent field.

“Welcome to Light Fields proves that it is now possible to make a compelling light field VR viewer that runs on consumer-grade hardware, and we hope that this knowledge will encourage others to get involved with building light field technology and media,” says Overbeck. “We understand that in order to eventually make compelling consumer products based on light fields, we need a thriving light field ecosystem. We need open light field codecs, we need artists creating beautiful light field imagery, and we need people using VR in order to engage with light fields.”

I don’t really understand why this image, which looks like something that belongs in advertising material, would be chosen to accompany a news release on a science-based distribution outlet,

Caption: A team of leading researchers at Google will unveil the new immersive virtual reality (VR) experience “Welcome to Light Fields” at ACM SIGGRAPH 2018. Credit: Image courtesy of Google/Overbeck

Finally (4), ‘synthesizing realistic sounds’ is announced in an Aug. 6, 2018 Stanford University (US) news release (also on EurekAlert) by Taylor Kubota,

Advances in computer-generated imagery have brought vivid, realistic animations to life, but the sounds associated with what we see simulated on screen, such as two objects colliding, are often recordings. Now researchers at Stanford University have developed a system that automatically renders accurate sounds for a wide variety of animated phenomena.

“There’s been a Holy Grail in computing of being able to simulate reality for humans. We can animate scenes and render them visually with physics and computer graphics, but, as for sounds, they are usually made up,” said Doug James, professor of computer science at Stanford University. “Currently there exists no way to generate realistic synchronized sounds for complex animated content, such as splashing water or colliding objects, automatically. This fills that void.”

The researchers will present their work on this sound synthesis system as part of ACM SIGGRAPH 2018, the leading conference on computer graphics and interactive techniques. In addition to enlivening movies and virtual reality worlds, this system could also help engineering companies prototype how products would sound before being physically produced, and hopefully encourage designs that are quieter and less irritating, the researchers said.

“I’ve spent years trying to solve partial differential equations – which govern how sound propagates – by hand,” said Jui-Hsien Wang, a graduate student in James’ lab and in the Institute for Computational and Mathematical Engineering (ICME), and lead author of the paper. “This is actually a place where you don’t just solve the equation but you can actually hear it once you’ve done it. That’s really exciting to me and it’s fun.”

Predicting sound

Informed by geometry and physical motion, the system figures out the vibrations of each object and how, like a loudspeaker, those vibrations excite sound waves. It computes the pressure waves cast off by rapidly moving and vibrating surfaces but does not replicate room acoustics. So, although it does not recreate the echoes in a grand cathedral, it can resolve detailed sounds from scenarios like a crashing cymbal, an upside-down bowl spinning to a stop, a glass filling up with water or a virtual character talking into a megaphone.
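The pressure waves being computed here obey the acoustic wave equation. A minimal one-dimensional finite-difference time-domain (FDTD) sketch, vastly smaller in scale than the team’s 3D solver and with every parameter assumed, shows the basic update rule:

```python
import numpy as np

def simulate_wave_1d(source, c=343.0, dx=0.01, steps=200):
    """Toy 1D FDTD solver for the acoustic wave equation p_tt = c^2 * p_xx.
    'source' is the initial pressure profile; returns the field after
    'steps' time steps with rigid (reflecting) boundaries."""
    dt = dx / c  # time step at the stability (CFL) limit
    p_prev = np.array(source, dtype=float)  # field at t - dt
    p = p_prev.copy()                       # field at t (zero initial velocity)
    for _ in range(steps):
        lap = np.zeros_like(p)
        lap[1:-1] = p[:-2] - 2.0 * p[1:-1] + p[2:]  # discrete second derivative
        p_next = 2.0 * p - p_prev + (c * dt / dx) ** 2 * lap
        p_next[0] = p_next[-1] = 0.0  # clamp the ends of the domain
        p_prev, p = p, p_next
    return p
```

The real system couples a solver like this (in 3D, at far higher resolution) to the animated geometry, so that vibrating surfaces act as moving sound sources.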

Most sounds associated with animations rely on pre-recorded clips, which require vast manual effort to synchronize with the action on-screen. These clips are also restricted to noises that exist – they can’t predict anything new. Other systems that produce and predict sounds as accurate as those of James and his team work only in special cases, or assume the geometry doesn’t deform very much. They also require a long pre-computation phase for each separate object.

“Ours is essentially just a render button with minimal pre-processing that treats all objects together in one acoustic wave simulation,” said Ante Qu, a graduate student in James’ lab and co-author of the paper.

The simulated sound that results from this method is highly detailed. It takes into account the sound waves produced by each object in an animation but also predicts how those waves bend, bounce or deaden based on their interactions with other objects and sound waves in the scene.

Challenges ahead

In its current form, the group’s process takes a while to create the finished product. But, now that they have proven this technique’s potential, they can focus on performance optimizations, such as implementing their method on parallel GPU hardware, that should make it drastically faster.

And, even in its current state, the results are worth the wait.

“The first water sounds we generated with the system were among the best ones we had simulated – and water is a huge challenge in computer-generated sound,” said James. “We thought we might get a little improvement, but it is dramatically better than previous approaches even right out of the box. It was really striking.”

Although the group’s work has faithfully rendered sounds of various objects spinning, falling and banging into each other, more complex objects and interactions – like the reverberating tones of a Stradivarius violin – remain difficult to model realistically. That, the group said, will have to wait for a future solution.

Timothy Langlois of Adobe Research is a co-author of this paper. This research was funded by the National Science Foundation and Adobe Research. James is also a professor, by courtesy, of music and a member of Stanford Bio-X.

Researchers Timothy Langlois, Doug L. James, Ante Qu and Jui-Hsien Wang have created a video featuring highlights of animations with sounds synthesized using the Stanford researchers’ new system.

The researchers have also provided this image,

By computing pressure waves cast off by rapidly moving and vibrating surfaces – such as a cymbal – a new sound synthesis system developed by Stanford researchers can automatically render realistic sound for computer animations. (Image credit: Timothy Langlois, Doug L. James, Ante Qu and Jui-Hsien Wang)

It does seem like we’re synthesizing the world around us, eh?

The SIGGRAPH 2018 art gallery

Here’s what SIGGRAPH had to say about its 2018 art gallery in Vancouver and the themes for the conference and the gallery (from a May 18, 2018 Association for Computing Machinery news release on globalnewswire.com, also on this 2018 SIGGRAPH webpage),

SIGGRAPH 2018, the world’s leading showcase of digital art created using computer graphics and interactive techniques, will present a special Art Gallery, entitled “Origins,” and historic Art Papers in Vancouver, B.C. The 45th SIGGRAPH conference will take place 12–16 August at the Vancouver Convention Centre. The programs will also honor the generations of creators that have come before through a special, 50th anniversary edition of the Leonardo journal. To register for the conference, visit S2018.SIGGRAPH.ORG.

The SIGGRAPH 2018 ART GALLERY is a curated exhibition, conceived as a dialogical space that enables the viewer to reflect on man’s diverse cultural values and rituals through contemporary creative practices. Building upon an exciting and eclectic selection of creative practices mediated through technologies that represent the sophistication of our times, the SIGGRAPH 2018 Art Gallery will embrace the narratives of the indigenous communities based near Vancouver and throughout Canada as a source of inspiration. The exhibition will feature contemporary media artworks, art pieces by indigenous communities, and other traces of technologically mediated ludic practices.

Andrés Burbano, SIGGRAPH 2018 Art Gallery chair and professor at Universidad de los Andes, said, “The Art Gallery aims to articulate myth and technology, science and art, the deep past and the computational present, and will coalesce around a theme of ‘Origins.’ Media and technological creative expressions will explore principles such as the origins of the cosmos, the origins of life, the origins of human presence, the origins of the occupation of territories in the Americas, and the origins of people living in the vast territories of the Arctic.”

He continued, “The venue [in Vancouver] hopes to rekindle the original spark that ignited the collaborative spirit of the SIGGRAPH community of engineers, scientists, and artists, who came together to create the very first conference in the early 1970s.”

Highlights from the 2018 Art Gallery include:

Transformation Mask (Canada) [Technology Based]
Shawn Hunt, independent; and Microsoft Garage: Andy Klein, Robert Butterworth, Jonathan Cobb, Jeremy Kersey, Stacey Mulcahy, Brendan O’Rourke, Brent Silk, and Julia Taylor-Hell, Microsoft Vancouver

TRANSFORMATION MASK is an interactive installation that features the Microsoft HoloLens. It utilizes electronics and mechanical engineering to express a physical and digital transformation. Participants are immersed in spatial sounds and holographic visuals.

Somnium (U.S.) [Science Based]
Marko Peljhan, Danny Bazo, and Karl Yerkes, University of California, Santa Barbara

Somnium is a cybernetic installation that provides visitors with the ability to sensorily, cognitively, and emotionally contemplate and experience exoplanetary discoveries, their macro and micro dimensions, and the potential for life in our Galaxy. Some might call it a “space telescope.”

Ernest Edmonds Retrospective – Art Systems 1968-2018 (United Kingdom) [History Based]
Ernest Edmonds, De Montfort University

Celebrating one of the pioneers of computer graphics-based art since the early 1970s, this Ernest Edmonds career retrospective will showcase snapshots of Edmonds’ work as it developed over the years. With one piece from each decade, the retrospective will also demonstrate how vital the Leonardo journal has been throughout the 50-year journey.

In addition to the works above, the Art Gallery will feature pieces from notable female artists Ozge Samanci, Ruth West, and Nicole L’Hullier. For more information about the Edmonds retrospective, read THIS POST ON THE ACM SIGGRAPH BLOG.

The SIGGRAPH 2018 ART PAPERS program is designed to feature research from artists, scientists, theorists, technologists, historians, and more in one of four categories: project description, theory/criticism, methods, or history. The chosen work was selected by an international jury of scholars, artists, and immersive technology developers.

To celebrate the 50th anniversary of LEONARDO (MIT Press), and 10 years of its annual SIGGRAPH issue, SIGGRAPH 2018 is pleased to announce a special anniversary edition of the journal, which will feature the 2018 art papers. For 50 years, Leonardo has been the definitive publication for artist-academics. To learn more about the relationship between SIGGRAPH and the journal, listen to THIS EPISODE OF THE SIGGRAPH SPOTLIGHT PODCAST.

“In order to encourage a wider range of topics, we introduced a new submission type, short papers. This enabled us to accept more content than in previous years. Additionally, for the first time, we will introduce sessions that integrate the Art Gallery artist talks with Art Papers talks, promoting richer connections between these two creative communities,” said Angus Forbes, SIGGRAPH 2018 Art Papers chair and professor at University of California, Santa Cruz.

Art Papers highlights include:

Alienating the Familiar with CGI: A Recipe for Making a Full CGI Art House Animated Feature [Long]
Alex Counsell and Paul Charisse, University of Portsmouth

This paper explores the process of making and funding an art house feature film using full CGI in a marketplace where this has never been attempted. It explores cutting-edge technology and production approaches, as well as routes to successful fundraising.

Augmented Fauna and Glass Mutations: A Dialogue Between Material and Technique in Glassblowing and 3D Printing [Long]
Tobias Klein, City University of Hong Kong

The two presented artworks, “Augmented Fauna” and “Glass Mutations,” were created during an artist residency at the PILCHUCK GLASS SCHOOL. They are examples of the qualities and methods established through a synthesis between digital workflows and traditional craft processes and thus formulate the notion of digital craftsmanship.

Inhabitat: An Imaginary Ecosystem in a Children’s Science Museum [Short]
Graham Wakefield, York University, and Haru Hyunkyung Ji, OCAD University

“Inhabitat” is a mixed reality artwork in which participants become part of an imaginary ecology through three simultaneous perspectives of scale and agency; three distinct ways to see with other eyes. This imaginary world was exhibited at a children’s science museum for five months, using an interactive projection-augmented sculpture, a large screen and speaker array, and a virtual reality head-mounted display.

What’s the what?

My father used to say that, and I always assumed it meant: summarize the high points if you need to, and get to the point, fast. In that spirit, I am both fascinated and mildly appalled. The virtual, mixed, and augmented reality technologies, as well as the others being featured at SIGGRAPH 2018, are wondrous in many ways, but it seems we are coming ever closer to a world where we no longer interact with nature or other humans directly. (See my August 10, 2018 posting about the ‘extinction of experience’ for research that encourages more direct interaction with nature.) I realize that SIGGRAPH is intended as a primarily technical experience, but I think a little more content questioning these technologies and their applications (social implications) might be in order. That’s often the artist’s role, but I can’t see anything in the art gallery descriptions that hints at any sort of fundamental critique.

Nanofibrous fish skins for wrinkle-free skin (New Zealand’s biggest seafood company moves into skincare)

I am utterly enchanted by this venture employing fish skins and nanotechnology-based processes for a new line of skin care products and, they hope, medical applications,


For those who like text (from a May 21, 2018 Sanford media advisory),

Nanofibre magic turns fish skins into wrinkle-busting skin care

Sanford partners with kiwi nanotech experts to help develop a wrinkle-busting skincare product made from Hoki skins.

New Zealand’s biggest and oldest seafood company is moving into the future of skincare and medicine by becoming a supporting partner to West Auckland nanofibre producer Revolution Fibres, which is launching a potentially game-changing nanotech face mask.

The actiVLayr face masks use collagen extracted from fish skins as a base ingredient which is then combined with elements such as fruit extracts and hyaluronic acid to make a 100 percent natural and sustainably sourced product.

They have achieved stunning results in third-party tests, which show that the nanofibre masks can reduce wrinkles by up to 31.5%.*

Revolution Fibres CEO Iain Hosie says it is no exaggeration to say the masks could be revolutionary.

“The way actiVLayr is produced, and the unique application method of placing it onto wet skin like a mask, means ingredients are absorbed quickly and efficiently into the skin to maximise the repair and protection of the skin.”

Sanford is delighted to support the work that Revolution Fibres is doing by supplying hoki fish skins. Hoki is a sustainably caught fish and its skin has some unique properties.

Sanford’s General Manager of Innovation, Andrew Stanley, says these properties make it ideal for the actiVLayr technology. “Hoki skins are rich in collagen, which is an essential part of our bodies. But their marine collagen is unique – it has a very low melt point, so when placed on the skin, it can dissolve completely and be absorbed in a way that collagen from other animals cannot.”

Sanford’s Chief Customer Officer, Andre Gargiulo, says working with the team at Revolution Fibres is a natural fit, because both companies think about innovation and sustainability in the same way.

“We hope actiVLayr gets the global attention it deserves, and we’re delighted that our sustainably caught Hoki is part of this fantastic New Zealand product. It’s exactly what we’re all about at Sanford – making the most of the precious resources from the sea, working in a sustainable way and getting the most value out of the goodness we harvest from nature.”

Sanford’s Business Development Manager Adrian Grey says the focus on sustainability and value creation is so important for the seafood company.

“Previously we have been making use of these hoki skins, which is great, but they were being used only for fish meal or pet food products. Being able to supply and support a high tech company that is going to earn increased export revenue for New Zealand is just fantastic. And the product created is completely natural, harvested from a globally certified sustainable fishery.”

Sanford provides the hoki skins and then turns these skins into pure collagen using the science and skills of the team at Plant and Food in Nelson [New Zealand, for those of us who associate Nelson with British Columbia]. Revolution Fibres transforms the Sanford product into nanofibre using a technique called electrospinning, of which Revolution Fibres is the New Zealand pioneer.

During the electrospinning process natural ingredients known as “bioactives” (such as kiwifruit and grapes) and hyaluronic acid (an ingredient to help the skin retain moisture) are bonded to the nanofibres to create sheets of actiVLayr. When it is exposed to wet skin the nanofibres dissolve rapidly and release the bioactives deep into the skin.

The product is being launched at the China Beauty Fair in Shanghai on May 22 [2018] and will go on sale in China this month followed by Hong Kong and New Zealand later in the year.

Revolution Fibres CEO Iain Hosie says there is big demand for unique delivery systems of natural skin and beauty products such as actiVLayr in Asia, which was the key reason to launch the product in China. But his view of the future is even bigger.

“There are endless uses for actiVLayr and the one we’re most proud of is in the medical area with the ability for drug compounds or medicines to be added to the actiVLayr formula. It will enable a controlled dose to be delivered to a patient with skin lesions, burns or acne.”

Revolution Fibres is presenting at Techweek NZ as part of The Fourth Revolution event on May 25 [2018] in Christchurch which introduces high tech engineers who are building a better place.

*Testing conducted by Easy Care using VISIA Complexion Analysis

The media advisory also includes some fascinating ‘facts’,

1kg of hoki skin produces 400 square meters of nanofibre material

Nanofibres are 1/500th the width of a human hair

Revolution Fibres is the only nanofibre producer in the world to meet aerospace industry standards with its AS9100d quality assurance certification

The marine collagen found in hoki skins is unique because of its relatively low melt point, meaning it can dissolve at a lower temperature which makes it perfect for human use

Revolution Fibres is based in West Auckland and employs 12 people, of whom four have PhDs in science related to nanotechnology. There are also a number of employees with strong engineering backgrounds to complement the company’s Research & Development expertise

Sanford is New Zealand’s oldest and biggest seafood company. It was founded by Albert Sanford in Auckland in 1904

New Zealand’s hoki fishery is certified as sustainable by the London-based Marine Stewardship Council, which audits fisheries all over the world
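For the numerically inclined, a couple of the figures above can be roughly sanity-checked. This is just a sketch: the human hair width (~75 micrometres) is my assumption, since the advisory doesn’t state the value it used,

```python
# Rough sanity checks on the advisory's figures (illustrative only).
# Assumption: a typical human hair is ~75 micrometres wide; the
# advisory does not say which value it used.
hair_width_um = 75.0
fibre_width_nm = hair_width_um * 1000.0 / 500.0  # 1/500th of a hair, in nm
print(f"implied nanofibre width: {fibre_width_nm:.0f} nm")

# Coverage: 1 kg of hoki skin -> 400 square metres of nanofibre
# material, i.e. grams of skin per square metre of material.
grams_per_m2 = 1000.0 / 400.0
print(f"skin used per square metre: {grams_per_m2:.1f} g")
```

With that assumed hair width, the claimed fibres come out at roughly 150 nm, which is a plausible electrospun-fibre diameter.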

You can find Sanford here and Revolution Fibres here.

For some perspective on the business side of things, there’s a May 21, 2018 article by Nikki Mandow for newsroom.co.nz,

Revolution Fibres first started talking about the possibility of a collagen nanofibre made from hoki almost a decade ago, as part of a project with Plant & Food’s Seafood Research Centre in Nelson, Hosie [Revolution Fibres CEO Iain Hosie] said, and the company got serious about making a product in 2013.

Previously, the hoki waste skins were used for fish meal and pet food, said Sanford business development manager Adrian Grey.

“Being able to supply and support a high tech company that is going to earn increased export revenue for New Zealand is just fantastic.”

Revolution Fibres also manufactures nanofibres for a number of other uses. These include anti-dust mite pillow coverings, anti-pollution protective face masks, filters for pumps for HRV’s home ventilation systems, and reinforcing material for carbon fibre for fishing rods. The latter product is made from recycled fishing nets collected from South America.

He [Revolution Fibres CEO Iain Hosie] said the company could be profitable, but instead has chosen to continue to invest heavily in research and development.

About 75 percent of revenue comes from selling proprietary products, but increasingly Hosie said the company is working on “co-innovation” projects, where Revolution Fibres manufactures bespoke materials for outside companies.

Revolution Fibres completed its first external funding round last year, raising $1.5 million from the US, and it has just completed another round worth approximately $1 million. Hosie, one of the founders, still holds around 20 percent of the company.

He said he hopes to keep the intellectual property in New Zealand, although manufacturing of some products is likely to move closer to their markets – China and the US potentially. However, he said actiVLayr manufacture will remain in New Zealand, because that’s where the raw hoki comes from.

I wonder if we’ll see this product in Canada.

One other thing: I was curious about the claim that ” … the nanofiber masks can reduce wrinkles by up to 31.5%” and about VISIA Complexion Analysis, which is a product from Canfield Scientific, a company specializing in imaging. Here’s some of what VISIA can do (from the VISIA product page),

Percentile Scores

VISIA’s patented comparison to norms analysis uses the world’s largest skin feature database to grade your patient’s skin relative to others of the same age and skin type. Measure spots, wrinkles, texture, pores, UV spots, brown spots, red areas, and porphyrins.

Meaningful Comparisons

Compare results side by side for any combination of views, features or time points, including graphs and numerical data. Zoom and pan images in tandem for clear and easy comparisons.

And, there’s my personal favourite (although it has nothing to do with the topic of this posting),

Eyelash Analysis

Evaluates the results of lash improvement treatments with numerical assessments and graphic visualizations.
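Grading against norms of this sort is conceptually simple, whatever the proprietary details. Here’s a minimal, hypothetical sketch of a percentile-vs-norms score (my own illustration; Canfield does not publish VISIA’s actual algorithm, and the reference counts below are invented),

```python
# Hypothetical percentile-vs-norms scoring sketch (not VISIA's
# algorithm). A patient's raw feature count (e.g. wrinkle count) is
# ranked against an age/skin-type-matched reference sample.
from bisect import bisect_left

def percentile_score(value, reference_sample):
    """Percent of the matched reference sample with counts strictly
    below `value` (lower raw counts are better for wrinkles)."""
    ranked = sorted(reference_sample)
    return 100.0 * bisect_left(ranked, value) / len(ranked)

norms = [12, 15, 18, 20, 22, 25, 27, 30, 33, 40]  # invented counts
print(percentile_score(21, norms))
```

The real system presumably does this against a far larger matched database and for eight feature types at once.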

For anyone who wondered why the press release has both ‘nanofibre’ and ‘nanofiber’, it’s the difference between US and UK spelling. Perhaps the complexion analysis information came from a US company or one that uses US spellings.

A method for producing two-dimensional quasicrystals from metal organic networks

A July 13, 2016 news item on ScienceDaily highlights an advance where quasicrystals are concerned,

Unlike classical crystals, quasicrystals do not comprise periodic units, even though they do have a superordinate structure. The formation of the fascinating mosaics that they produce is barely understood. In the context of an international collaborative effort, researchers at the Technical University of Munich (TUM) have now presented a methodology that allows the production of two-dimensional quasicrystals from metal-organic networks, opening the door to the development of promising new materials.

A July 13, 2016 TUM press release (also on EurekAlert), which originated the news item, explains further,

Physicist Daniel Shechtman [emphasis mine] merely put down three question marks in his laboratory journal, when he saw the results of his latest experiment one day in 1982. He was looking at a crystalline pattern that was considered impossible at the time. According to the canonical tenet of the day, crystals always had so-called translational symmetry. They comprise a single basic unit, the so-called elemental cell, that is repeated in the exact same form in all spatial directions.

Although Shechtman’s pattern did contain global symmetry, the individual building blocks could not be mapped onto each other merely by translation. The first quasicrystal had been discovered. In spite of sometimes harsh criticism from reputable colleagues, Shechtman stood by his new concept and thus revolutionized the scientific understanding of crystals and solid bodies. In 2011 he ultimately received the Nobel Prize in Chemistry. To this day, both the basic conditions and mechanisms by which these fascinating structures are formed remain largely shrouded in mystery.

A toolbox for quasicrystals

Now a group of scientists led by Wilhelm Auwärter and Johannes Barth, both professors in the Department of Surface Physics at TU Munich, in collaboration with Hong Kong University of Science and Technology (HKUST, Prof. Nian Lin, et al) and the Spanish research institute IMDEA Nanoscience (Dr. David Écija), have developed a new basis for producing two-dimensional quasicrystals, which might bring them a good deal closer to understanding these peculiar patterns.

The TUM doctoral candidate José Ignacio Urgel made the pioneering measurements in the course of a research fellowship at HKUST. “We now have a new set of building blocks that we can use to assemble many different new quasicrystalline structures. This diversity allows us to investigate how quasicrystals are formed,” explain the TUM physicists.

The researchers were successful in linking europium – a metal atom in the lanthanide series – with organic compounds, thereby constructing a two-dimensional quasicrystal that even has the potential to be extended into a three-dimensional quasicrystal. To date, scientists have managed to produce many periodic and in part highly complex structures from metal-organic networks, but never a quasicrystal.

The researchers were also able to thoroughly elucidate the new network geometry in unparalleled resolution using a scanning tunnelling microscope. They found a mosaic of four different basic elements comprising triangles and rectangles distributed irregularly on a substrate. Some of these basic elements assembled themselves to regular dodecagons that, however, cannot be mapped onto each other through parallel translation. The result is a complex pattern, a small work of art at the atomic level with dodecagonal symmetry.

Interesting optical and magnetic properties

In their future work, the researchers are planning to vary the interactions between the metal centers and the attached compounds using computer simulation and experiments in order to understand the conditions under which two-dimensional quasicrystals form. This insight could facilitate the future development of new tailored quasicrystalline layers.

These kinds of materials hold great promise. After all, the new metal-organic quasicrystalline networks may have properties that make them interesting for a wide variety of applications. “We have discovered a new playing field on which we can not only investigate quasicrystallinity, but also create new functionalities, especially in the fields of optics and magnetism,” says Dr. David Écija of IMDEA Nanoscience.

For one, scientists could one day use the new methodology to create quasicrystalline coatings that influence photons in such a manner that they are transmitted better or that only certain wavelengths can pass through the material.

In addition, the interactions of the lanthanide building blocks in the new quasicrystals could facilitate the development of magnetic systems with very special properties, so-called “frustrated systems”. Here, the individual atoms in a crystalline grid interfere with each other in a manner that prevents grid points from achieving a minimal energy state. The result: exotic magnetic ground states that can be investigated as information stores for future quantum computers.

The researchers have made an image available,

The quasicrystalline network built up with europium atoms linked with para-quaterphenyl–dicarbonitrile on a gold surface (yellow) – Image: Carlos A. Palma / TUM

Here’s a link to and a citation for the paper,

Quasicrystallinity expressed in two-dimensional coordination networks by José I. Urgel, David Écija, Guoqing Lyu, Ran Zhang, Carlos-Andres Palma, Willi Auwärter, Nian Lin, & Johannes V. Barth. Nature Chemistry 8, 657–662 (2016) doi:10.1038/nchem.2507 Published online 16 May 2016

This paper is behind a paywall.

For anyone interested in more of the Daniel Shechtman story and how he was reviled for his discovery of quasicrystals, there’s more in my Dec. 24, 2013 posting (scroll down about 60% of the way).

2-D melting and surfacing premelting of a single particle

Scientists at the Hong Kong University of Science and Technology (HKUST) and the University of Amsterdam (in the Netherlands) have measured surface premelting with single particle resolution. From a March 15, 2016 HKUST news release on EurekAlert,

The surface of a solid often melts into a thin layer of liquid even below its melting point. Such surface premelting is prevalent in all classes of solids; for instance, two pieces of ice can fuse below 0°C because the premelted surface water becomes embedded inside the bulk at the contact point and thus freezes. Premelting facilitates crystal growth and is critical in metallurgy, geology, and meteorology, in phenomena such as glacier movement, frost heave, snowflake growth and skating. However, the causative factors of various premelting scenarios, and the effect of dimensionality on premelting, are poorly understood due to the lack of microscopic measurements.

To this end, researchers from the Hong Kong University of Science and Technology (HKUST) and the University of Amsterdam conducted research in which they were able to measure surface premelting with single-particle resolution for the first time by using novel colloidal crystals. They found that dimensionality is crucial to bulk melting and bulk solid-solid transitions, which strongly affect surface melting behaviors. To the surprise of the researchers, a crystal with free surfaces (solid-vapor interface) melted homogeneously from both surfaces and within the bulk, in contrast to the commonly assumed heterogeneous melting from surfaces. These observations pose new challenges for premelting and melting theories.

The research team was led by associate professor of physics Yilong Han and graduate student Bo Li from HKUST. HKUST graduate students Feng Wang, Di Zhou, Yi Peng, and postdoctoral researcher Ran Ni from the University of Amsterdam in the Netherlands also participated in the research.

Micrometer-sized colloidal spheres in liquid suspensions have been used as powerful model systems for the studies of phase transitions because the thermal-motion trajectories of these “big atoms” can be directly visualized under an optical microscope. “Previous studies mainly used repulsive colloids, which cannot form stable solid-vapor interfaces,” said Han. “Here, we made a novel type of colloid with temperature-sensitive attractions which can better mimic atoms, since all atoms have attractions, or otherwise they could not condense into stable solids in air. We assembled these attractive spheres into large well-tunable two-dimensional colloidal crystals with free surfaces for the first time.

“This paves the way to study surface physics using colloidal model systems. Our first project along this direction is about surface premelting, which was poorly understood before. Surprisingly, we found that it is also related to bulk melting and solid-solid transitions,” Han added.

The team found that two-dimensional (2D) monolayer crystals premelted into a thin layer of liquid with a constant thickness, an exotic phenomenon known as incomplete blocked premelting. By contrast, the surface-liquid thickness of the two- or three-layer thin-film crystal increased to infinity as the crystal approached its melting point, i.e. a conventional complete premelting. Such blocked surface premelting has been occasionally observed, e.g. in ice and germanium, but lacks theoretical explanations.

“Here, we found that the premelting of the 2D crystal was triggered by an abrupt lattice dilation because the crystal can no longer provide enough attractions to surface particles after a drop in density.” Li said. “Before the surface liquid grew thick, the bulk crystal collapsed and melted due to mechanical instability. This provides a new simple mechanism for blocked premelting. The two-layer crystals are mechanically stable because particles have more neighbors. Thus they exhibit a conventional surface melting.”

As an abrupt dilation does not change the lattice symmetry, this is an isostructural solid-solid transition, which usually occurs in metallic and multiferroic materials. The colloidal system provides the first experimental observation of isostructural solid-solid transition at the single-particle level.

The mechanical instability induced a homogenous melting from within the crystal rather than heterogeneous melting from the surface. “We observed that the 2D melting is a first-order transition with a homogeneous proliferation of grain boundaries, which confirmed the grain-boundary-mediated 2D melting theory.” said Han. “First-order 2D melting has been observed in some molecular monolayers, but the theoretically predicted grain-boundary formation has not been observed before.”

Here’s a link to and a citation for the paper,

Imaging the Homogeneous Nucleation During the Melting of Superheated Colloidal Crystals by Ziren Wang, Feng Wang, Yi Peng, Zhongyu Zheng, Yilong Han. Science 05 Oct 2012: Vol. 338, Issue 6103, pp. 87-90. DOI: 10.1126/science.1224763

This paper is behind a paywall.

A new ink for energy storage devices from the Hong Kong Polytechnic University

Energy storage is not the first thought that leaps to mind when ink is mentioned. Live and learn, eh? A Sept. 23, 2015 news item on Nanowerk describes the connection (Note: A link has been removed),

The Department of Applied Physics of The Hong Kong Polytechnic University (PolyU) has developed a simple approach to synthesize a novel environmentally friendly manganese dioxide ink by using glucose (“Aqueous Manganese Dioxide Ink for Paper-Based Capacitive Energy Storage Devices”).

The MnO2 ink could be used for the production of light, thin, flexible and high-performance energy storage devices via ordinary printing or even home printers. The capacity of the MnO2 ink supercapacitor is more than 30 times higher than that of a commercial capacitor with the same weight of active material (e.g. carbon powder), demonstrating the great potential of MnO2 ink for significantly enhancing the performance of energy storage devices, while its production cost amounts to less than HK$1.

A Sept. 23, 2015 PolyU media release, which originated the news item, expands on the theme,

MnO2 is a kind of environmentally-friendly material and it is degradable. Given the environmental compatibility and high potential capacity of MnO2, it has always been regarded as an ideal candidate for the electrode materials of energy storage devices. The conventional MnO2 electrode preparation methods suffer from high cost, complicated processes and could result in agglomeration of the MnO2 ink during the coating process, leading to the reduction of electrical conductivity. The PolyU research team has developed a simple approach to synthesize aqueous MnO2 ink. Firstly, highly crystalline carbon particles were prepared by microwave hydrothermal method, followed by a morphology transmission mechanism at room temperature. The MnO2 ink can be coated on various substrates, such as conductive paper, plastic and glass. Its thickness and weight can also be controlled for the production of light, thin, transparent and flexible energy storage devices. Substrates coated by MnO2 ink can easily be erased if required, facilitating the fabrication of electronic devices.

PolyU researchers coated the MnO2 ink on conductive A4 paper and fabricated a capacitive energy storage device with maximum energy density and power density of 4 mWh·cm⁻³ and 13 W·cm⁻³ respectively. The capacity of the MnO2 ink capacitor is more than 30 times higher than that of a commercial capacitor with the same weight of active material (e.g. carbon powder), demonstrating the great potential of MnO2 ink in significantly enhancing the performance of energy storage devices. Given the small, light, thin, flexible and high-capacity properties of the MnO2 ink energy storage device, it shows potential for wide application. For instance, in wearable devices and radio-frequency identification systems, the MnO2 ink supercapacitor could be used as the power source for flexible and “bendable” display panels, smart textiles, smart checkout tags, sensors, luggage tracking tags, etc., thereby contributing to the further development of these two areas.

The related paper was recently published in Angewandte Chemie International Edition, a leading chemistry journal. The research team will work to further improve the performance of the MnO2 ink energy storage device over the coming two years, with special focus on increasing the voltage and optimizing the structure and synthesis process of the device. In addition, further tests will be conducted to integrate the MnO2 ink energy storage device with other energy collection systems.
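To put the quoted densities in perspective, here is a back-of-envelope calculation. The device volume and the load are my assumptions, not figures from the release,

```python
# Back-of-envelope use of the quoted maximum densities (illustrative;
# the device volume and load below are assumptions, not from PolyU).
energy_density_mwh_per_cm3 = 4.0   # quoted: 4 mWh per cubic cm
power_density_w_per_cm3 = 13.0     # quoted: 13 W per cubic cm

volume_cm3 = 1.0                   # assumed 1 cm^3 device
energy_mwh = energy_density_mwh_per_cm3 * volume_cm3
peak_power_w = power_density_w_per_cm3 * volume_cm3

# Runtime at an assumed 10 mW sensor-tag load:
load_mw = 10.0
runtime_h = energy_mwh / load_mw
print(f"{energy_mwh} mWh stored; {peak_power_w} W peak; "
      f"{runtime_h:.1f} h at {load_mw} mW")
```

In other words, a cubic-centimetre device at the quoted maximum energy density could (in principle) run a 10 mW tag for under half an hour, which is why the intended uses are intermittent ones such as checkout and tracking tags.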

Here’s a link to and a citation for the paper,

Aqueous Manganese Dioxide Ink for Paper-Based Capacitive Energy Storage Devices by Jiasheng Qian, Huanyu Jin, Dr. Bolei Chen, Mei Lin, Dr. Wei Lu, Dr. Wing Man Tang, Dr. Wei Xiong, Prof. Lai Wa Helen Chan, Prof. Shu Ping Lau, and Dr. Jikang Yuan. Angewandte Chemie International Edition Volume 54, Issue 23, pages 6800–6803, June 1, 2015 DOI: 10.1002/anie.201501261 Article first published online: 17 APR 2015

© 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim

This paper is behind a paywall.

Pancake bounce

What impact does a droplet make on a solid surface? It’s not the first question that comes to my mind but scientists have been studying it for over a century. From an Aug. 5, 2015 news item on Nanowerk (Note: A link has been removed),

Studies of the impact a droplet makes on solid surfaces hark back more than a century. And until now, it was generally believed that a droplet’s impact on a solid surface could always be separated into two phases: spreading and retracting. But it’s much more complex than that, as a team of researchers from City University of Hong Kong, Ariel University in Israel, and Dalian University of Technology in China report in the journal Applied Physics Letters, from AIP Publishing (“Controlling drop bouncing using surfaces with gradient features”).

An Aug. 4, 2015 American Institute of Physics news release (also on EurekAlert), which originated the news item, describes the impact in detail,

“During the spreading phase, the droplet undergoes an inertia-dominant acceleration and spreads into a ‘pancake’ shape,” explained Zuankai Wang, an associate professor within the Department of Mechanical and Biomedical Engineering at the City University of Hong Kong. “And during the retraction phase, the drop minimizes its surface energy and pulls back inward.”

Remarkably, on gold-standard superhydrophobic (i.e. water-repellent) surfaces such as lotus leaves, droplets jump off at the end of the retraction stage due to the minimal energy dissipation during the impact process. This is attributed to the presence of an air cushion within the rough surface.

There exists, however, a classical limit in terms of the contact time between droplets and the gold standard superhydrophobic materials inspired by lotus leaves.

As the team previously reported in the journal Nature Physics, it’s possible to shape the droplet to bounce from the surface in a pancake shape directly at the end of the spreading stage without going through the receding process. As a result, the droplet can be shed away much faster.

“Interestingly, the contact time is constant under a wide range of impact velocities,” said Wang. “In other words: the contact time reduction is very efficient and robust, so the novel surface behaves like an elastic spring. But the real magic lies within the surface texture itself.”

To prevent the air cushion from collapsing or water from penetrating into the surface, conventional wisdom suggests the use of nanoscale posts with small inter-post spacings. “The smaller the inter-post spacings, the greater the impact velocity the small inter-post can withstand,” he elaborated. “By contrast, designing a surface with macrostructures–tapered sub-millimeter post arrays with a wide spacing–means that a droplet will shed from it much faster than any previously engineered materials.”

What the New Results Show

Despite exciting progress, rationally controlling the contact time and quantitatively predicting the critical Weber number (a dimensionless number used in fluid mechanics to describe the ratio between deforming inertial forces and stabilizing cohesive forces for liquids flowing through a fluid medium) for the occurrence of pancake bouncing remained elusive.

So the team experimentally demonstrated that the drop bouncing is intricately influenced by the surface morphology. “Under the same center-to-center post spacing, surfaces with a larger apex angle can give rise to more pancake bouncing, which is characterized by a significant contact time reduction, smaller critical Weber number, and a wider Weber number range,” according to co-authors Gene Whyman and Edward Bormashenko, both professors at Ariel University.

Wang and colleagues went on to develop simple harmonic spring models to theoretically reveal the dependence of timescales associated with the impinging drop and the critical Weber number for pancake bouncing on the surface morphology. “The insights gained from this work will allow us to rationally design various surfaces for many practical applications,” he added.
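For readers who haven’t met it, the Weber number mentioned above is a simple dimensionless group, We = ρv²L/σ. Here’s a minimal sketch with assumed room-temperature-water values (my numbers, not the paper’s),

```python
# Weber number: We = rho * v^2 * L / sigma, the ratio of deforming
# inertial forces to stabilizing surface-tension forces for an
# impacting droplet. Example values below are assumptions (water at
# room temperature, a 2 mm droplet hitting at 1 m/s).
def weber_number(rho, v, length, sigma):
    """rho: density (kg/m^3), v: impact speed (m/s),
    length: characteristic length, e.g. drop diameter (m),
    sigma: surface tension (N/m)."""
    return rho * v**2 * length / sigma

we = weber_number(rho=1000.0, v=1.0, length=2e-3, sigma=0.072)
print(f"We = {we:.1f}")
```

Since We scales with v², varying the impact speed sweeps a wide range of Weber numbers, which is how a “critical Weber number” for pancake bouncing can be probed experimentally.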

The team’s novel surfaces feature a shortened contact time that prevents or slows ice formation. “Ice formation and its subsequent buildup hinder the operation of modern infrastructures–including aircraft, offshore oil platforms, air conditioning systems, wind turbines, power lines, and telecommunications equipment,” Wang said.

At supercooled temperatures (when a liquid or gas is cooled below its freezing point without solidifying), the longer a droplet remains in contact with a surface before bouncing off, the greater the chance of it freezing in place. “Our new surface structure can be used to help prevent aircraft wings and engines from icing,” he said.

This is highly desirable, because a very light coating of snow or ice, light enough to be barely visible, is known to reduce the performance of airplanes and even cause crashes. One such disaster occurred in 2009 and called attention to the dangers of in-flight icing after it contributed to the crash of Air France Flight 447, flying from Rio de Janeiro to Paris, into the Atlantic Ocean.

Beyond anti-icing for aircraft, “turbine blades in power stations and wind farms can also benefit from an anti-icing surface by gaining a boost in efficiency,” he added.

As you can imagine, this type of nature-inspired surface shows potential for a tremendous range of other applications as well–everything from water and oil separation to disease transmission prevention.

The next step for the team? To “develop bioinspired ‘active’ materials that are adaptive to their environments and capable of self-healing,” said Wang.

Here’s a link to and a citation for the paper,

Controlling drop bouncing using surfaces with gradient features by Yahua Liu, Gene Whyman, Edward Bormashenko, Chonglei Hao, and Zuankai Wang. Appl. Phys. Lett. 107, 051604 (2015); http://dx.doi.org/10.1063/1.4927055

This paper appears to be open access.

Finally, here’s an illustration of the pancake bounce,

Droplet hitting tapered posts shows “pancake” bouncing, characterized by lifting off the surface at the end of spreading without retraction. Credit: Z. Wang/HKU

There is also a pancake bounce video which you can view here on EurekAlert.

Single molecule nanogold-based probe for photoacoustic imaging and SERS biosensing

As I understand it, the big deal is that A*STAR (Singapore’s Agency for Science, Technology and Research) scientists have found a way to make a single-molecule probe do the work of a two-molecule probe when imaging tumours. From a July 29, 2015 news item on Nanowerk (Note: A link has been removed),

An organic dye that can light up cancer cells for two powerful imaging techniques providing complementary diagnostic information has been developed and successfully tested in mice by A*STAR researchers (“Single Molecule with Dual Function on Nanogold: Biofunctionalized Construct for In Vivo Photoacoustic Imaging and SERS Biosensing”).

A July 29, 2015 A*STAR news release, which originated the news item, describes the currently used multimodal imaging technique and provides details about the new single molecule technique,

Imaging tumors is vitally important for cancer research, but each imaging technique has its own limitations for studying cancer in living organisms. To overcome the limitations of individual techniques, researchers typically employ a combination of various imaging methods — a practice known as multimodal imaging. In this way, they can obtain complementary information and hence a more complete picture of cancer.

Two very effective methods for imaging tumors are photoacoustic imaging and surface-enhanced Raman scattering (SERS). Photoacoustic imaging can image deep tissue with a good resolution, whereas SERS detects miniscule amounts of a target molecule. To simultaneously use both photoacoustic imaging and SERS, a probe must produce signals for both imaging modalities.

In multimodal imaging, researchers typically combine probes for each imaging modality into a single two-molecule probe. However, the teams of Malini Olivo at the A*STAR Singapore Bioimaging Consortium and Bin Liu at the A*STAR Institute of Materials Research and Engineering, along with overseas collaborator Ben Zhong Tang from the Hong Kong University of Science and Technology, adopted a different approach — they developed single-molecule probes that can be used for both photoacoustic imaging and SERS. The probes are based on organic cyanine dyes that absorb near-infrared light, which has the advantage of being able to deeply penetrate tissue, enabling tumors deep within the body to be imaged.

Once the team had verified that the probes worked for both imaging modalities, they optimized the performances of the probes by adding gold nanoparticles to them to amplify the SERS signal and by encapsulating them in the polymer polyethylene glycol to stabilize their structures.

The researchers then deployed these optimized probes in live mice. By functionalizing the probes with an antibody that recognizes a tumor cell-surface protein, they were able to use them to target tumors. The scientists found that, in photoacoustic imaging, the tumor-targeted probes produced signals that were roughly three times stronger than those of unmodified probes. Using SERS, the team was also able to monitor the concentrations of the probes in the tumor, spleen and liver in real time with a high degree of sensitivity.

U. S. Dinish, a senior scientist in Olivo’s group, recalls the team’s “surprise at the sensitivity and potential of the nanoconstruct.” He anticipates that the probe could be used to guide surgical removal of tumors.

Here’s a link to and a citation for the paper,

Single Molecule with Dual Function on Nanogold: Biofunctionalized Construct for In Vivo Photoacoustic Imaging and SERS Biosensing by U. S. Dinish, Zhegang Song, Chris Jun Hui Ho, Ghayathri Balasundaram, Amalina Binte Ebrahim Attia, Xianmao Lu, Ben Zhong Tang, Bin Liu, and Malini Olivo. Advanced Functional Materials, Volume 25, Issue 15, pages 2316–2325, April 15, 2015. DOI: 10.1002/adfm.201404341. Article first published online: March 11, 2015.

© 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim

This paper is behind a paywall.

Hong Kong, MosquitNo, and Dengue fever

The most substantive piece I’ve written on dengue fever and a nanotechnology-enabled approach to the problem was a 2013 post that explained why the fever is of such concern and also included information about a proposed therapeutic intervention by Nanoviricides. From the July 2, 2013 posting, here’s more about the magnitude of the problem,

… the WHO (World Health Organization) fact sheet no. 117,

The incidence of dengue has grown dramatically around the world in recent decades. Over 2.5 billion people – over 40% of the world’s population – are now at risk from dengue. WHO currently estimates there may be 50–100 million dengue infections worldwide every year.

Before 1970, only nine countries had experienced severe dengue epidemics. The disease is now endemic in more than 100 countries in Africa, the Americas, the Eastern Mediterranean, South-east Asia and the Western Pacific. The American, South-east Asia and the Western Pacific regions are the most seriously affected.

Cases across the Americas, South-east Asia and Western Pacific have exceeded 1.2 million cases in 2008 and over 2.3 million in 2010 (based on official data submitted by Member States). Recently the number of reported cases has continued to increase. In 2010, 1.6 million cases of dengue were reported in the Americas alone, of which 49 000 cases were severe dengue.

Not only is the number of cases increasing as the disease spreads to new areas, but explosive outbreaks are occurring. The threat of a possible outbreak of dengue fever now exists in Europe and local transmission of dengue was reported for the first time in France and Croatia in 2010 and imported cases were detected in three other European countries. A recent (2012) outbreak of dengue on Madeira islands of Portugal has resulted in over 1800 cases and imported cases were detected in five other countries in Europe apart from mainland Portugal.

An estimated 500 000 people with severe dengue require hospitalization each year, a large proportion of whom are children. About 2.5% of those affected die.

Fast-forwarding to 2015, the latest information about dengue fever features a preventative approach being taken in Hong Kong, according to a July 5, 2015 article by Timmy Sung for the South China Morning Post,

Dutch insect repellent innovator Mosquitno targets Hong Kong as dengue fever cases rise

A Dutch company says it has invented an insect repellent using nanotechnology which can keep clothes and homes mosquito-free for up to three months.

Mosquitno has been invited by a government body to begin trading in Hong Kong as the number of cases reported in the city of the deadly mosquito-borne dengue fever rises.

The new repellent does not include the active ingredient used in many insect repellents, DEET, which has question marks surrounding its safety.

Figures from the Department of Health show the number of dengue fever cases reported rose 8 per cent last year, to 112. There were 34 cases in the first five months of this year, 36 per cent more than in the same period last year. Mosquitoes are most active in the summer months.

MosquitNo does use an ingredient, IR3535, which has caused concern (from Sung’s article),

The Consumer Council has previously warned that IR3535-based mosquito repellents can break down plastic materials and certain synthetic fibres, but Wijnen [Erwin Wijnen, director of the {MosquitNo’s} brand development and global travel retailing] said the ingredient combined with nanotechnology is safe and there was no possibility it would damage clothes.

I was not able to find out more about the company’s nanotechnology solution as applied to MosquitNo,

The NANO Series is a revolutionary, innovative technology designed by scientists especially for MosquitNo. This line utilizes ground-breaking insect repellent technology in various products including wipes, textile spray, fabric softener and bracelets. This technology and our trendy applications are truly industry-changing and MosquitNo is at the leading edge!

The active component in all our awesome products within this range is IR3535.

That’s it for technical detail. At least, for now.

Dreaming of the perfect face mask?

Researchers at Hong Kong Polytechnic University have something for anyone who has ever dreamed of getting a face mask that offers protection from the finest of pollutant particles, according to a May 13, 2014 news item on phys.org,

Researchers at the Hong Kong Polytechnic University have developed a ground-breaking filter technology that guards against the finest pollutants in the air

Haze is usually composed of pollutants in the form of tiny suspended particles or fine mists/droplets emitted from vehicles, coal-burning power plants and factories. Continued exposure increases the risk of developing respiratory problems, heart diseases and lung cancer. Can we avoid the unhealthy air?

A simple face mask that can block out suspended particles has been developed by scientists from the Department of Mechanical Engineering at the Hong Kong Polytechnic University (PolyU). The project is led by Professor Wallace Woon-Fong Leung, a renowned filtration expert, who has spent his career understanding these invisible killers.

An article for Hong Kong Polytechnic University’s April 2014 issue of Technology Frontiers, which originated the news item, describes the research problem and Professor Leung’s proposed face mask in more detail,

In Hong Kong, suspended particles PM 10 and PM 2.5 are being monitored.  PM 10 refers to particles that are 10 microns (or micrometres) in size or smaller, whereas PM 2.5 measures 2.5 microns or smaller.  At the forefront of combating air pollution, Professor Leung targets ultra-fine pollutants that have yet to be picked up by air quality monitors – particles measuring 1 micron or below, which he perceives to be a more important threat to human health.

“In my view, nano-aerosols (colloid of fine solid particles or liquid droplets of sub-micron to nano-sizes), such as diesel emissions, are the most lethal for three reasons.  First, they are in their abundance by number suspended in the air.  Second, they are too small to be filtered out using current technologies.  Third, they can pass easily through our lungs and work their way into our respiratory systems, and subsequently our vascular, nervous and lymphatic systems, doing the worst kind of harm.”

However, it would be difficult to breathe through the mask if it were required to block out nano-aerosols.  To make an effective filter that is highly breathable, a new filter that provides high filtration efficiency yet low air resistance (or low pressure drop) is required.

According to Professor Leung, pollutant particles get into our body in two ways – by the airflow carrying them and by the diffusion motion of these tiny particles.  As the particles are intercepted by the fibres of the mask, they are filtered out before reaching our lungs.

Fibres from natural or synthetic materials can be made into nanofibres around 1/500 of the diameter of a hair (about 0.1 mm) through nanotechnologies.  While nanofibres increase the surface area for nano-aerosol interception, they also incur larger air resistance.  Professor Leung’s new innovation aims to divide an optimal amount of nanofibres into multiple layers separated by a permeable space, allowing plenty of room for air to pass through.

A conventional face mask can only block out about 25% of 0.3-micron nano-aerosols under standard test conditions.  Professor Leung said, “The multi-layer nanofibre mask can block out at least 80% of suspended nano-aerosols, even the ones smaller than 0.3 micron.  In the meantime, the wearer can breathe as comfortably as wearing a conventional face mask, making it superb for any outdoor occasions. Another option is to provide a nanofibre mask that has the same capture efficiency as a conventional face mask, yet is at least several times more breathable, which would be suitable for the working group.”

The new filtration technology has been well recognized.  Recently, Professor Leung and his team won a Gold Medal and a Special Merit Award from the Romanian Ministry of National Education at the 42nd International Exhibition of Inventions of Geneva, held in Switzerland.

If the breakthrough is turned into tightly fitting surgical masks, they would be just as effective against bacteria and viruses, whose sizes are under 1 micron.  “In the future, medical professionals at the frontline can have stronger protection against deadly bacteria and viruses,” added Professor Leung.
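As a quick sanity check on the dimensions quoted above (my own back-of-the-envelope arithmetic, not from the article): 1/500 of a 0.1 mm hair works out to about 200 nanometres, i.e. 0.2 micron, which is indeed well below the 1-micron nano-aerosol threshold Professor Leung targets.

```python
# Back-of-the-envelope check of the nanofibre dimensions quoted above.
hair_diameter_mm = 0.1                              # typical hair diameter cited in the article
nanofibre_diameter_mm = hair_diameter_mm / 500      # "around 1/500 of the diameter of a hair"
nanofibre_diameter_nm = nanofibre_diameter_mm * 1e6 # convert mm to nm

print(f"{nanofibre_diameter_nm:.0f} nm")  # prints "200 nm", i.e. 0.2 micron
```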

I did not find any published research about this proposed face mask, but there is a 2009 patent for a multilayer nanofiber filter (US 8523971 B2), which lists Wallace Woon-Fong Leung and Chi Ho Hung as the inventors and The Hong Kong Polytechnic University as the original assignee.  The description of the materials in the patent closely resembles the description of the face mask materials.
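For readers curious how stacking nanofibre layers can raise capture efficiency, here is a minimal sketch of the textbook series-filtration model, in which independent layers each capture a fraction of the incoming aerosol so that penetration multiplies across layers. This is my own illustration of the general principle, not the PolyU team's actual design equations (those would be in the patent); the layer efficiency of 33% is a made-up value chosen to show how a few modest layers can exceed the ~80% overall figure quoted above.

```python
# Series-filtration model (illustrative sketch): if each layer independently
# captures a fraction `eta_per_layer` of the incoming aerosol, the fraction
# that penetrates n layers is (1 - eta)^n, while pressure drop roughly adds
# per layer -- hence the appeal of many sparse layers over one dense one.

def multilayer_efficiency(eta_per_layer: float, n_layers: int) -> float:
    """Overall capture efficiency of n independent filter layers in series."""
    penetration = (1.0 - eta_per_layer) ** n_layers
    return 1.0 - penetration

# Hypothetical example: four layers, each capturing 33% of nano-aerosols,
# already reach roughly 80% overall capture.
print(round(multilayer_efficiency(0.33, 4), 2))  # prints 0.8
```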