Tag Archives: University of Central Florida

Dealing with mosquitos: a robot story and an engineered human tissue story

I have two ‘mosquito and disease’ stories, the first concerning dengue fever and the second, malaria.

Dengue fever in Taiwan

A June 8, 2023 news item on phys.org features robotic vehicles, dengue fever, and mosquitoes,

Unmanned ground vehicles can be used to identify and eliminate the breeding sources of mosquitos that carry dengue fever in urban areas, according to a new study published in PLOS Neglected Tropical Diseases by Wei-Liang Liu of the Taiwan National Mosquito-Borne Diseases Control Research Center, and colleagues.

It turns out sewers are a problem. This June 8, 2023 PLOS (Public Library of Science) news release on EurekAlert provides more context and detail,

Dengue fever is an infectious disease caused by the dengue virus and spread by several mosquito species in the genus Aedes, which also spread chikungunya, yellow fever and zika. Through the process of urbanization, sewers have become easy breeding grounds for Aedes mosquitos and most current mosquito monitoring programs struggle to monitor and analyze the density of mosquitos in these hidden areas.

In the new control effort, researchers combined a crawling robot, wire-controlled cable car and real-time monitoring system into an unmanned ground vehicle system (UGV) that can take high-resolution, real-time images of areas within sewers. From May to August 2018, the system was deployed in five administrative districts in Kaohsiung city, Taiwan, with covered roadside sewer ditches suspected to be hotspots for mosquitos. Mosquito gravitraps were placed above the sewers to monitor effects of the UGV intervention on adult mosquitos in the area.

In 20.7% of inspected sewers, the system found traces of Aedes mosquitos in stages from larvae to adult. In positive sewers, additional prevention control measures were carried out, using either insecticides or high-temperature water jets. Immediately after these interventions, the gravitrap index (GI), a measure of the adult mosquito density nearby, dropped significantly from 0.62 to 0.19.

“The widespread use of UGVs can potentially eliminate some of the breeding sources of vector mosquitoes, thereby reducing the annual prevalence of dengue fever in Kaohsiung city,” the authors say.
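For readers who like to see the arithmetic, here’s a minimal sketch of how a trap index of this kind might be computed. A caveat: the definition below (average number of adult mosquitoes captured per gravitrap) is my assumption, based on the release describing the GI as ‘a measure of the adult mosquito density nearby’; it is not taken from the paper.

```python
# Hypothetical sketch: computing a gravitrap-style index before and after
# an intervention. Assumption (mine, not the paper's): the index is the
# average number of adult mosquitoes captured per gravitrap.

def gravitrap_index(captures):
    """Average number of adult mosquitoes captured per trap."""
    return sum(captures) / len(captures)

# Made-up per-trap capture counts for 13 traps, before and after treatment.
before = [1, 0, 2, 0, 1, 0, 0, 1, 0, 1, 0, 0, 2]  # 8 captures in total
after = [0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0]   # 2 captures in total

print(f"GI before: {gravitrap_index(before):.2f}")  # 0.62
print(f"GI after:  {gravitrap_index(after):.2f}")   # 0.15
```

Whatever the exact definition, the reported drop from 0.62 to 0.19 amounts to roughly a two-thirds reduction in trapped adult mosquitoes near the treated sewers.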

Here’s a link to and a citation for the paper,

Use of unmanned ground vehicle systems in urbanized zones: A study of vector Mosquito surveillance in Kaohsiung by Yu-Xuan Chen, Chao-Ying Pan, Bo-Yu Chen, Shu-Wen Jeng, Chun-Hong Chen, Joh-Jong Huang, Chaur-Dong Chen, Wei-Liang Liu. PLOS Neglected Tropical Diseases DOI: https://doi.org/10.1371/journal.pntd.0011346 Published: June 8, 2023

This paper is open access.

Dengue on the rise

Like many diseases, dengue is one where you may not have symptoms (asymptomatic), or they’re relatively mild and can be handled at home, or you may need care in a hospital and, in some cases, it can be fatal.

The World Health Organization (WHO) notes that dengue fever cases have increased exponentially since 2000 (from the March 17, 2023 version of the WHO’s “Dengue and severe dengue” fact sheet),

Global burden

The incidence of dengue has grown dramatically around the world in recent decades, with cases reported to WHO increasing from 505 430 in 2000 to 5.2 million in 2019. A vast majority of cases are asymptomatic or mild and self-managed, and hence the actual numbers of dengue cases are under-reported. Many cases are also misdiagnosed as other febrile illnesses (1).

One modelling estimate indicates 390 million dengue virus infections per year of which 96 million manifest clinically (2). Another study on the prevalence of dengue estimates that 3.9 billion people are at risk of infection with dengue viruses.

The disease is now endemic in more than 100 countries in the WHO Regions of Africa, the Americas, the Eastern Mediterranean, South-East Asia and the Western Pacific. The Americas, South-East Asia and Western Pacific regions are the most seriously affected, with Asia representing around 70% of the global disease burden.

Dengue is spreading to new areas including Europe, [emphasis mine] and explosive outbreaks are occurring. Local transmission was reported for the first time in France and Croatia in 2010 [emphasis mine] and imported cases were detected in 3 other European countries.

The largest number of dengue cases ever reported globally was in 2019. All regions were affected, and dengue transmission was recorded in Afghanistan for the first time. The American Region reported 3.1 million cases, with more than 25 000 classified as severe. A high number of cases were reported in Bangladesh (101 000), Malaysia (131 000), the Philippines (420 000) and Vietnam (320 000) in Asia.

Dengue continues to affect Brazil, Colombia, the Cook Islands, Fiji, India, Kenya, Paraguay, Peru, the Philippines, the Reunion Islands and Vietnam as of 2021. 

There’s information from an earlier version of the fact sheet, in my July 2, 2013 posting, highlighting different aspects of the disease, e.g., “About 2.5% of those affected die.”

A July 21, 2023 United Nations press release warns that the danger from mosquitoes spreading dengue fever could increase along with the temperature,

Global warming, marked by higher average temperatures, precipitation and longer periods of drought, could prompt a record number of dengue infections worldwide, the World Health Organization (WHO) warned on Friday [July 21, 2023].

Despite the absence of mosquitoes infected with the dengue virus in Canada, the government has a Dengue fever information page. At this point, the concern is likely focused on travelers who’ve contracted the disease elsewhere. However, I am guessing that researchers are keeping a close eye on Canadian mosquitoes as these situations can change.

Malaria in Florida (US)

The researchers from the University of Central Florida (UCF) couldn’t have known when they began their project to study mosquito bites and disease that Florida would register its first malaria cases in 20 years this summer. Here’s more from a July 26, 2023 article by Stephanie Colombini for NPR ([US] National Public Radio), Note: Links have been removed,

First local transmission in U.S. in 20 years

Heath [Hannah Heath] is one of eight known people in recent months who have contracted malaria in the U.S., after being bitten by a local mosquito, rather than while traveling abroad. The cases comprise the nation’s first locally transmitted outbreak in 20 years. The last time this occurred was in 2003, when eight people tested positive for malaria in Palm Beach, Fla.

One of the eight cases is in Texas; the rest occurred in the northern part of Sarasota County.

The Florida Department of Health recorded the most recent case in its weekly arbovirus report for July 9-15 [2023].

For the past month, health officials have issued a mosquito-borne illness alert for residents in Sarasota and neighboring Manatee County. Mosquito management teams are working to suppress the population of the type of mosquito that carries malaria, Anopheles.

Sarasota Memorial Hospital has treated five of the county’s seven malaria patients, according to Dr. Manuel Gordillo, director of infection control.

“The cases that are coming in are classic malaria, you know they come in with fever, body aches, headaches, nausea, vomiting, diarrhea,” Gordillo said, explaining that his hospital usually treats just one or two patients a year who acquire malaria while traveling abroad in Central or South America, or Africa.

All the locally acquired cases were of Plasmodium vivax malaria, a strain that typically produces milder symptoms or can even be asymptomatic, according to the Centers for Disease Control and Prevention. But the strain can still cause death, and pregnant people and children are particularly vulnerable.

Malaria does not spread from human-to-human contact; a mosquito carrying the disease has to bite someone to transmit the parasites.

Workers with Sarasota County Mosquito Management Services have been especially busy since May 26 [2023], when the first local case was confirmed.

Like similar departments across Florida, the team is experienced in responding to small outbreaks of mosquito-borne illnesses such as West Nile virus or dengue. They have protocols for addressing travel-related cases of malaria as well, but have ramped up their efforts now that they have confirmation that transmission is occurring locally between mosquitoes and humans.

While organizations like the World Health Organization have cautioned climate change could lead to more global cases and deaths from malaria and other mosquito-borne diseases, experts say it’s too soon to tell if the local transmission seen these past two months has any connection to extreme heat or flooding.

“We don’t have any reason to think that climate change has contributed to these particular cases,” said Ben Beard, deputy director of the CDC’s [US Centers for Disease Control and Prevention] division of vector-borne diseases and deputy incident manager for this year’s local malaria response.

“In a more general sense though, milder winters, earlier springs, warmer, longer summers – all of those things sort of translate into mosquitoes coming out earlier, getting their replication cycles sooner, going through those cycles faster and being out longer,” he said. “And so we are concerned about the impact of climate change and environmental change in general on what we call vector-borne diseases.”

Beard co-authored a 2019 report that highlights a significant increase in diseases spread by ticks and mosquitoes in recent decades. Lyme disease and West Nile virus were among the top five most prevalent.

“In the big picture it’s a very significant concern that we have,” he said.

Engineered tissue and bloodthirsty mosquitoes

A June 8, 2023 University of Central Florida (UCF) news release (also on EurekAlert) by Eric Eraso describes the research into engineered human tissue and features a ‘bloodthirsty’ video. First, the video,

Note: A link has been removed,

A UCF research team has engineered tissue with human cells that mosquitoes love to bite and feed upon — with the goal of helping fight deadly diseases transmitted by the biting insects.

A multidisciplinary team led by College of Medicine biomedical researcher Bradley Jay Willenberg with Mollie Jewett (UCF Burnett School of Biomedical Sciences) and Andrew Dickerson (University of Tennessee) lined 3D capillary gel biomaterials with human cells to create engineered tissue and then infused it with blood. Testing showed mosquitoes readily bite and blood feed on the constructs. Scientists hope to use this new platform to study how pathogens that mosquitoes carry impact and infect human cells and tissues. Presently, researchers rely largely upon animal models and cells cultured on flat dishes for such investigations.

Further, the new system holds great promise for blood feeding mosquito species that have proven difficult to rear and maintain as colonies in the laboratory, an important practical application. The Willenberg team’s work was published Friday in the journal Insects.

Mosquitos have often been called the world’s deadliest animal, as vector-borne illnesses, including those from mosquitos, cause more than 700,000 deaths worldwide each year. Malaria, dengue, Zika virus and West Nile virus are all transmitted by mosquitos. Even for those who survive these illnesses, many are left suffering from organ failure, seizures and serious neurological impacts.

“Many people get sick with mosquito-borne illnesses every year, including in the United States. The toll of such diseases can be especially devastating for many countries around the world,” Willenberg says.

This worldwide impact of mosquito-borne disease is what drives Willenberg, whose lab employs a unique blend of biomedical engineering, biomaterials, tissue engineering, nanotechnology and vector biology to develop innovative mosquito surveillance, control and research tools. He said he hopes to adapt his new platform for application to other vectors such as ticks, which spread Lyme disease.

“We have demonstrated the initial proof-of-concept with this prototype,” he says. “I think there are many potential ways to use this technology.”

Captured on video, Willenberg observed mosquitoes enthusiastically blood feeding from the engineered tissue, much as they would from a human host. This demonstration represents the achievement of a critical milestone for the technology: ensuring the tissue constructs were appetizing to the mosquitoes.

“As one of my mentors shared with me long ago, the goal of physicians and biomedical researchers is to help reduce human suffering,” he says. “So, if we can provide something that helps us learn about mosquitoes, intervene with diseases and, in some way, keep mosquitoes away from people, I think that is a positive.”

Willenberg came up with the engineered tissue idea when he learned the National Institutes of Health (NIH) was looking for new in vitro 3D models that could help study pathogens that mosquitoes and other biting arthropods carry.

“When I read about the NIH seeking these models, it got me thinking that maybe there is a way to get the mosquitoes to bite and blood feed [on the 3D models] directly,” he says. “Then I can bring in the mosquito to do the natural delivery and create a complete vector-host-pathogen interface model to study it all together.”

As this platform is still in its early stages, Willenberg wants to incorporate additional types of cells to move the system closer to human skin. He is also developing collaborations with experts who study pathogens and work with infected vectors, and is working with mosquito control organizations to see how they can use the technology.

“I have a particular vision for this platform, and am going after it. My experience too is that other good ideas and research directions will flourish when it gets into the hands of others,” he says. “At the end of the day, the collective ideas and efforts of the various research communities propel a system like ours to its full potential. So, if we can provide them tools to enable their work, while also moving ours forward at the same time, that is really exciting.”

Willenberg received his Ph.D. in biomedical engineering from the University of Florida and continued there for his postdoctoral training and then in scientist, adjunct scientist and lecturer positions. He joined the UCF College of Medicine in 2014, where he is currently an assistant professor of medicine.

Willenberg is also a co-founder, co-owner and manager of Saisijin Biotech, LLC and has a minor ownership stake in Sustained Release Technologies, Inc. Neither entity was involved in any way with the work presented in this story. Team members may also be listed as inventors on patent/patent applications that could result in royalty payments. This technology is available for licensing. To learn more, please visit ucf.flintbox.com/technologies/44c06966-2748-4c14-87d7-fc40cbb4f2c6.

Here’s a link to and a citation for the paper,

Engineered Human Tissue as a New Platform for Mosquito Bite-Site Biology Investigations by Corey E. Seavey, Mona Doshi, Andrew P. Panarello, Michael A. Felice, Andrew K. Dickerson, Mollie W. Jewett and Bradley J. Willenberg. Insects 2023, 14(6), 514; https://doi.org/10.3390/insects14060514 Published: 2 June 2023

This paper is open access.

That final paragraph in the news release is new to me. I’ve seen news releases list companies where the researchers have financial interests but this is the first time I’ve seen one that offers a statement attempting to cover all the bases, including some future possibilities such as: “Team members may also be listed as inventors on patent/patent applications that could result in royalty payments.”

It seems pretty clear that there’s increasing concern about mosquito-borne diseases no matter where you live.

A structural colour solution for energy-saving paint (thank the butterflies)

The UCF-developed plasmonic paint uses nanoscale structural arrangement of colorless materials — aluminum and aluminum oxide — instead of pigments to create colors. Here the plasmonic paint is applied to the wings of metal butterflies, the insect that inspired the research. Credit: University of Central Florida

A March 9, 2023 news item on Nanowerk announces research into a multicolour, energy-saving coating/paint, so this is a structural colour story. Note: Links have been removed,

University of Central Florida researcher Debashis Chanda, a professor in UCF’s NanoScience Technology Center, has drawn inspiration from butterflies to create the first environmentally friendly, large-scale and multicolor alternative to pigment-based colorants, which can contribute to energy-saving efforts and help reduce global warming.

A March 8, 2023 University of Central Florida (UCF) news release (also on EurekAlert) by Katrina Cabansay, which originated the news item, provides more context and more details,

“The range of colors and hues in the natural world are astonishing — from colorful flowers, birds and butterflies to underwater creatures like fish and cephalopods,” Chanda says. “Structural color serves as the primary color-generating mechanism in several extremely vivid species where geometrical arrangement of typically two colorless materials produces all colors. On the other hand, with manmade pigment, new molecules are needed for every color present.”

Based on such bio-inspirations, Chanda’s research group innovated a plasmonic paint, which utilizes nanoscale structural arrangement of colorless materials — aluminum and aluminum oxide — instead of pigments to create colors.

While pigment colorants control light absorption based on the electronic property of the pigment material and hence every color needs a new molecule, structural colorants control the way light is reflected, scattered or absorbed based purely on the geometrical arrangement of nanostructures.

Such structural colors are environmentally friendly as they only use metals and oxides, unlike present pigment-based colors that use artificially synthesized molecules.

The researchers have combined their structural color flakes with a commercial binder to form long-lasting paints of all colors.

“Normal color fades because pigment loses its ability to absorb photons,” Chanda says. “Here, we’re not limited by that phenomenon. Once we paint something with structural color, it should stay for centuries.”

Additionally, because plasmonic paint reflects the entire infrared spectrum, less heat is absorbed by the paint, resulting in the underlying surface staying 25 to 30 degrees Fahrenheit cooler than it would be if it were covered with standard commercial paint, the researcher says.

“Over 10% of total electricity in the U.S. goes toward air conditioner usage,” Chanda says. “The temperature difference plasmonic paint promises would lead to significant energy savings. Using less electricity for cooling would also cut down carbon dioxide emissions, lessening global warming.”

Plasmonic paint is also extremely lightweight, the researcher says.

This is due to the paint’s large area-to-thickness ratio, with full coloration achieved at a paint thickness of only 150 nanometers, making it the lightest paint in the world, Chanda says.

The paint is so lightweight that only about 3 pounds of plasmonic paint could cover a Boeing 747, which normally requires more than 1,000 pounds of conventional paint, he says.
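The weight claim is easy to sanity-check with back-of-the-envelope arithmetic. In the sketch below, the painted area of a 747 (about 3,000 square metres) and the effective coating density (about 3,000 kg per cubic metre, roughly that of an aluminum/alumina mix) are my assumptions, not numbers from the news release; only the 150-nanometre thickness comes from the researchers.

```python
# Back-of-the-envelope check of the "3 pounds covers a 747" claim.
# Assumptions (mine): painted area of a Boeing 747 ~ 3,000 m²; effective
# density of the aluminum/aluminum-oxide coating ~ 3,000 kg/m³.
area_m2 = 3_000          # assumed painted surface area of a 747
thickness_m = 150e-9     # 150 nanometres, per the news release
density_kg_m3 = 3_000    # assumed effective coating density

mass_kg = area_m2 * thickness_m * density_kg_m3
print(f"{mass_kg:.2f} kg, about {mass_kg * 2.205:.1f} lb")  # 1.35 kg ≈ 3.0 lb
```

With those assumptions the numbers land almost exactly on the 3-pound figure, which suggests the claim concerns a single 150-nanometre colour coat rather than a full multi-layer paint job.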

Chanda says his interest in structural color stems from the vibrancy of butterflies.

“As a kid, I always wanted to build a butterfly,” he says. “Color draws my interest.”

Future Research

Chanda says the next steps of the project include further exploration of the paint’s energy-saving aspects to improve its viability as commercial paint.

“The conventional pigment paint is made in big facilities where they can make hundreds of gallons of paint,” he says. “At this moment, unless we go through the scale-up process, it is still expensive to produce at an academic lab.”

“We need to bring something different, like non-toxicity, cooling effect, ultralight weight, to the table that other conventional paints can’t,” Chanda says.

Licensing Opportunity

For more information about licensing this technology, please visit the Inorganic Paint Pigment for Vivid Plasmonic Color technology sheet.

Researcher’s Credentials

Chanda has joint appointments in UCF’s NanoScience Technology Center, Department of Physics and College of Optics and Photonics. He received his doctorate in photonics from the University of Toronto and worked as a postdoctoral fellow at the University of Illinois at Urbana-Champaign. He joined UCF in Fall 2012.

Here’s a link to and a citation for the paper,

Ultralight plasmonic structural color paint by Pablo Cencillo-Abad, Daniel Franklin, Pamela Mastranzo-Ortega, Javier Sanchez-Mondragon, and Debashis Chanda. Science Advances 8 Mar 2023 Vol 9, Issue 10 DOI: 10.1126/sciadv.adf7207

This paper is open access.

Here’s the researcher with one of ‘his butterflies’ (I may be reading a little too much into this but it looks like he’s uncomfortable having his photo taken but game to do it for work that he’s proud of),

Caption: Debashis Chanda, a professor in UCF’s NanoScience Technology Center, drew inspiration from butterflies to create the innovative new plasmonic paint, shown here applied to metal butterfly wings. Credit: University of Central Florida

Dynamic molecular switches for brainlike computing at the University of Limerick

Aren’t memristors proof that brainlike computing at the molecular and atomic levels is possible? It seems I have misunderstood memristors, according to this November 21, 2022 news item on ScienceDaily,

A breakthrough discovery at University of Limerick in Ireland has revealed for the first time that unconventional brain-like computing at the tiniest scale of atoms and molecules is possible.

Researchers at University of Limerick’s Bernal Institute worked with an international team of scientists to create a new type of organic material that learns from its past behaviour.

The discovery of the ‘dynamic molecular switch’ that emulate[s] synaptic behaviour is revealed in a new study in the international journal Nature Materials.

The study was led by Damien Thompson, Professor of Molecular Modelling in UL’s Department of Physics and Director of SSPC, the UL-hosted Science Foundation Ireland Research Centre for Pharmaceuticals, together with Christian Nijhuis at the Centre for Molecules and Brain-Inspired Nano Systems in University of Twente [Netherlands] and Enrique del Barco from University of Central Florida.

A November 21, 2022 University of Limerick press release (also on EurekAlert), which originated the news item, provides more technical details about the research,

Working during lockdowns, the team developed a two-nanometre thick layer of molecules, which is 50,000 times thinner than a strand of hair and remembers its history as electrons pass through it.

Professor Thompson explained that the “switching probability and the values of the on/off states continually change in the molecular material, which provides a disruptive new alternative to conventional silicon-based digital switches that can only ever be either on or off”.

The newly discovered dynamic organic switch displays all the mathematical logic functions necessary for deep learning, successfully emulating Pavlovian ‘call and response’ synaptic brain-like behaviour.

The researchers demonstrated the new material’s properties using extensive experimental characterisation and electrical measurements supported by multi-scale modelling, spanning from predictive modelling of the molecular structures at the quantum level to analytical mathematical modelling of the electrical data.

To emulate the dynamical behaviour of synapses at the molecular level, the researchers combined fast electron transfer (akin to action potentials and fast depolarization processes in biology) with slow proton coupling limited by diffusion (akin to the role of biological calcium ions or neurotransmitters).

Since the electron transfer and proton coupling steps inside the material occur at very different time scales, the transformation can emulate the plastic behaviour of synapse neuronal junctions, Pavlovian learning, and all logic gates for digital circuits, simply by changing the applied voltage and the duration of voltage pulses during the synthesis, they explained.
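I found the two-time-scale idea easier to grasp with a toy model. The sketch below is my own illustration, not the team’s published physics: a fast, probabilistic on/off response (standing in for fast electron transfer) whose switching probability is biased by a slow state variable that integrates the pulse history (standing in for diffusion-limited proton coupling).

```python
import random

# Toy two-timescale switch (my illustration, not the published model).
# 'slow' integrates the voltage-pulse history with a slow decay, like
# diffusion-limited proton coupling; the fast on/off response depends on
# the current pulse AND that accumulated history, so the switch "learns".

class DynamicSwitch:
    def __init__(self):
        self.slow = 0.0  # slowly varying internal state, capped at 1.0

    def pulse(self, voltage, duration):
        self.slow += 0.1 * voltage * duration   # each pulse nudges the state
        self.slow = min(self.slow * 0.95, 1.0)  # slow decay between pulses
        p_on = min(1.0, 0.2 + 0.8 * self.slow)  # history-dependent probability
        return random.random() < p_on           # True = switch is "on"

switch = DynamicSwitch()
# Repeated identical pulses become ever more likely to switch the device on,
# a crude analogue of Pavlovian 'call and response' conditioning:
history = [switch.pulse(voltage=1.0, duration=1.0) for _ in range(10)]
print(history)
```

The point of the toy is only that identical inputs produce different outputs depending on what came before, which is the history-dependence the researchers describe.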

“This was a great lockdown project, with Chris, Enrique and I pushing each other through Zoom meetings and gargantuan email threads to bring our teams’ combined skills in materials modelling, synthesis and characterisation to the point where we could demonstrate these new brain-like computing properties,” explained Professor Thompson.

“The community has long known that silicon technology works completely differently to how our brains work and so we used new types of electronic materials based on soft molecules to emulate brain-like computing networks.”

The researchers explained that the method can in the future be applied to dynamic molecular systems driven by other stimuli such as light and coupled to different types of dynamic covalent bond formation.

This breakthrough opens up a whole new range of adaptive and reconfigurable systems, creating new opportunities in sustainable and green chemistry, from more efficient flow chemistry production of drug products and other value-added chemicals to development of new organic materials for high density computing and memory storage in big data centres.

“This is just the start. We are already busy expanding this next generation of intelligent molecular materials, which is enabling development of sustainable alternative technologies to tackle grand challenges in energy, environment, and health,” explained Professor Thompson.

Professor Norelee Kennedy, Vice President Research at UL, said: “Our researchers are continuously finding new ways of making more effective, more sustainable materials. This latest finding is very exciting, demonstrating the reach and ambition of our international collaborations and showcasing our world-leading ability at UL to encode useful properties into organic materials.”

Here’s a link to and a citation for the paper,

Dynamic molecular switches with hysteretic negative differential conductance emulating synaptic behaviour by Yulong Wang, Qian Zhang, Hippolyte P. A. G. Astier, Cameron Nickle, Saurabh Soni, Fuad A. Alami, Alessandro Borrini, Ziyu Zhang, Christian Honnigfort, Björn Braunschweig, Andrea Leoncini, Dong-Cheng Qi, Yingmei Han, Enrique del Barco, Damien Thompson & Christian A. Nijhuis. Nature Materials volume 21, pages 1403–1411 (2022) DOI: https://doi.org/10.1038/s41563-022-01402-2 Published: 21 November 2022 Issue Date: December 2022

This paper is behind a paywall.

Branched flows of light look like trees say “explorers of experimental science” at Technion

Enhancing soap bubbles for your science explorations? It sounds like an entertaining activity you might give children for ‘painless’ science education. In this case, researchers at Technion – Israel Institute of Technology have made an exciting discovery. The following video is where I got the phrase “explorers of experimental science,”

A July 1, 2020 news item on Nanowerk announces the work (Note: A link has been removed),

A team of researchers from the Technion – Israel Institute of Technology has observed branched flow of light for the very first time. The findings are published in Nature and are featured on the cover of the July 2, 2020 issue (“Observation of branched flow of light”).

The study was carried out by Ph.D. student Anatoly (Tolik) Patsyk, in collaboration with Miguel A. Bandres, who was a postdoctoral fellow at Technion when the project started and is now an Assistant Professor at CREOL, College of Optics and Photonics, University of Central Florida. The research was led by Technion President Professor Uri Sivan and Distinguished Professor Mordechai (Moti) Segev of the Technion’s Physics and Electrical Engineering Faculties, the Solid State Institute, and the Russell Berrie Nanotechnology Institute.

A July 2, 2020 Technion press release, which originated the news item, delves further into the research,

When waves travel through landscapes that contain disturbances, they naturally scatter, often in all directions. Scattering of light is a natural phenomenon, found in many places in nature. For example, the scattering of light is the reason for the blue color of the sky. As it turns out, when the length over which disturbances vary is much larger than the wavelength, the wave scatters in an unusual fashion: it forms channels (branches) of enhanced intensity that continue to divide or branch out, as the wave propagates.  This phenomenon is known as branched flow. It was first observed in 2001 in electrons and had been suggested to be ubiquitous and occur also for all waves in nature, for example – sound waves and even ocean waves. Now, Technion researchers are bringing branched flow to the domain of light: they have made an experimental observation of the branched flow of light.
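For anyone who wants to play with the idea, here’s a small ray-optics sketch I put together from the standard textbook picture of branched flow (many initially parallel rays receiving weak, correlated kicks from smooth disorder whose correlation length is much larger than the wavelength). It’s an illustration of the phenomenon, not a simulation of the Technion experiment.

```python
import numpy as np

# Sketch: branched flow of rays through a weak, smooth random "landscape".
rng = np.random.default_rng(0)
nx, ny = 400, 200                      # propagation steps x transverse cells

# Build smooth disorder: its correlation length >> the ray spacing.
potential = rng.normal(size=(nx, ny))
for axis in (0, 1):
    for _ in range(10):                # repeated 3-point smoothing
        potential += np.roll(potential, 1, axis) + np.roll(potential, -1, axis)
        potential /= 3.0
potential *= 0.05 / potential.std()    # keep the disorder weak

y = np.linspace(20, ny - 20, 2000)     # initial transverse ray positions
vy = np.zeros_like(y)                  # rays start out parallel
density = np.zeros((nx, ny))
grad = np.gradient(potential, axis=1)  # transverse slope of the landscape

for ix in range(nx):
    idx = np.clip(y.astype(int), 0, ny - 1)
    vy += -grad[ix, idx]               # weak, correlated deflections
    y = np.clip(y + vy, 0, ny - 1)
    np.add.at(density, (ix, y.astype(int)), 1)

# 'density' now shows tree-like channels of enhanced intensity; view it
# with matplotlib, e.g. plt.imshow(density.T, origin="lower").
```

Run with different seeds, the channels rearrange but the branching pattern keeps appearing, which is consistent with the claim that the phenomenon should occur for any kind of wave in smooth disorder.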

“We always had the intention of finding something new, and we were eager to find it. It was not what we started looking for, but we kept looking and we found something far better,” says Asst. Prof. Miguel Bandres. “We are familiar with the fact that waves spread when they propagate in a homogeneous medium. But for other kinds of mediums, waves can behave in very different ways. When we have a disordered medium where the variations are not random but smooth, like a landscape of mountains and valleys, the waves will propagate in a peculiar way. They will form channels that keep dividing as the wave propagates, forming a beautiful pattern resembling the branches of a tree.” 

In their research, the team coupled a laser beam to a soap membrane, which contains random variations in membrane thickness. They discovered that when light propagates within the soap film, rather than being scattered, the light forms elongated branches, creating the branched flow phenomenon for light.

“In optics we usually work hard to make light stay focused and propagate as a collimated beam, but here the surprise is that the random structure of the soap film naturally caused the light to stay focused. It is another one of nature’s surprises,” says Tolik Patsyk. 

The ability to create branched flow in the field of optics offers new and exciting opportunities for investigating and understanding this universal wave phenomenon.

“There is nothing more exciting than discovering something new and this is the first demonstration of this phenomenon with light waves,” says Technion President Prof. Uri Sivan. “This goes to show that intriguing phenomena can also be observed in simple systems and one just has to be perceptive enough to uncover them. As such, bringing together and combining the views of researchers from different backgrounds and disciplines has led to some truly interesting insights.”

“The fact that we observe it with light waves opens remarkable new possibilities for research, starting with the fact that we can characterize the medium in which light propagates to very high precision and the fact that we can also follow those branches accurately and study their properties,” he adds.

Distinguished Prof. Moti Segev looks to the future. “I always educate my team to think beyond the horizon,” he says, “to think about something new, and at the same time – look at the experimental facts as they are, rather than try to adapt the experiments to meet some expected behavior. Here, Tolik was trying to measure something completely different and was surprised to see these light branches which he could not initially explain. He asked Miguel to join in the experiments, and together they upgraded the experiments considerably – to the level they could isolate the physics involved. That is when we started to understand what we see. It took more than a year until we understood that what we have is the strange phenomenon of “branched flow”, which at the time was never considered in the context of light waves. Now, with this observation – we can think of a plethora of new ideas. For example, using these light branches to control the fluidic flow in liquid, or to combine the soap with fluorescent material and cause the branches to become little lasers. Or to use the soap membranes as a platform for exploring fundamentals of waves, such as the transitions from ordinary scattering which is always diffusive, to branched flow, and subsequently to Anderson localization. There are many ways to continue this pioneering study. As we did many times in the past, we would like to boldly go where no one has gone before.” 

The project is now continuing in the laboratories of Profs. Segev and Sivan at Technion, and in parallel in the newly established lab of Prof. Miguel Bandres at UCF. 

Here’s a link to and a citation for the paper,

Observation of branched flow of light by Anatoly Patsyk, Uri Sivan, Mordechai Segev & Miguel A. Bandres. Nature volume 583, pages 60–65 (2020) DOI: https://doi.org/10.1038/s41586-020-2376-8 Published: 01 July 2020 Issue Date: 02 July 2020

This paper is behind a paywall.

Human-on-a-chip predicts in vivo results based on in vitro model … for the first time

If successful, the hope is that ‘human-on-a-chip’ will replace most, if not all, animal testing. This July 3, 2019 Hesperos news release (also on EurekAlert) suggests scientists are making serious gains in the drive to replace animal testing (Note: For anyone having difficulty with the terms, pharmacokinetics and pharmacodynamics, there are definitions towards the end of this posting, which may prove helpful),

Hesperos Inc., pioneers* of the “human-on-a-chip” in vitro system has announced the use of its innovative multi-organ model to successfully measure the concentration and metabolism of two known cardiotoxic small molecules over time, to accurately describe the drug behavior and toxic effects in vivo. The findings further support the potential of body-on-a-chip systems to transform the drug discovery process.

In a study published in Nature Scientific Reports, in collaboration with AstraZeneca, Hesperos described how they used a pumpless heart model and a heart:liver system to evaluate the temporal pharmacokinetic/pharmacodynamic (PKPD) relationship for terfenadine, an antihistamine that was banned due to toxic cardiac effects, as well as determine its mechanism of toxicity.

The study found there was a time-dependent, drug-induced response in the heart model. Further experiments were conducted, adding a metabolically competent liver module to the Hesperos Human-on-a-Chip® system to observe what happened when terfenadine was converted to fexofenadine. By doing so, the researchers were able to determine the driver of the pharmacodynamic (PD) effect and develop a mathematical model to predict the effect of terfenadine in preclinical species. This is the first time an in vitro human-on-a-chip system has been shown to predict in vivo outcomes, which could be used to predict clinical trial outcomes in the future.

“The ability to examine PKPD relationships in vitro would enable us to understand compound behavior prior to in vivo testing, offering significant cost and time savings,” said Dr. Shuler, President and CEO, Hesperos, Inc and Professor Emeritus, Cornell University. “We are excited about the potential of this technology to help us ensure that potential new drug candidates have a higher probability of success during the clinical trial process.”

Understanding the inter-relationship between pharmacokinetics (PK), the drug’s time course for absorption, distribution, metabolism and excretion, and PD, the biological effect of a drug, is crucial in drug discovery and development. Scientists have learned that the maximum drug effect is not always driven by the peak drug concentration. In some cases, time is a critical factor influencing drug effect, but often this concentration-effect-time relationship only comes to light during the advanced stages of the preclinical program. In addition, often the data cannot be reliably extrapolated to humans.
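For readers who want those concepts in concrete form, here’s a generic sketch of a one-compartment pharmacokinetic model feeding an Emax-style pharmacodynamic model. The structure and every parameter are my own illustrative choices, not Hesperos’s model; the sketch simply shows why the peak effect can lag the peak parent-drug concentration when a metabolite drives the response.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Generic PKPD sketch (illustrative parameters, not the Hesperos model):
# an oral dose is absorbed, the parent drug is partly converted to a
# metabolite, and the PD effect follows whichever species drives it.
ka, k_met, k_elim_parent, k_elim_met = 1.0, 0.8, 0.2, 0.3  # 1/h, assumed

def pk(t, y):
    gut, parent, metabolite = y
    return [
        -ka * gut,                                    # absorption from gut
        ka * gut - (k_met + k_elim_parent) * parent,  # parent kinetics
        k_met * parent - k_elim_met * metabolite,     # metabolite kinetics
    ]

t = np.linspace(0, 24, 200)
sol = solve_ivp(pk, (0, 24), [100.0, 0.0, 0.0], t_eval=t)  # dose = 100 units

def effect(conc, emax=1.0, ec50=20.0):
    """Simple Emax pharmacodynamic model."""
    return emax * conc / (ec50 + conc)

pd_parent = effect(sol.y[1])      # effect if the parent drives PD
pd_metabolite = effect(sol.y[2])  # effect if the metabolite drives PD
print(t[np.argmax(pd_parent)], t[np.argmax(pd_metabolite)])  # peak-effect times
```

Plotting the two effect curves against time shows the general point: when a metabolite is the true driver, the peak effect lags the peak parent concentration, exactly the kind of concentration-effect-time relationship the release describes.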

“It is costly and time consuming to discover that potential drug candidates may have poor therapeutic qualities preventing their onward progression,” said James Hickman, Chief Scientist at Hesperos and Professor at the University of Central Florida. “Being able to define this during early drug discovery will be a valuable contribution to the optimization of potential new drug candidates.”

As demonstrated with the terfenadine experiment, the PKPD modelling approach was critical for understanding both the flux of compound between compartments as well as the resulting PD response in the context of dynamic exposure profiles of both parent and metabolite, as indicated by Dr. Shuler.

In order to test the viability of their system in a real-world drug discovery setting, the Hesperos team collaborated with scientists at AstraZeneca to test one of their failed small molecules, known to have a CV [cardiovascular?] risk.

One of the main measurements used to assess the electrical properties of the heart is the QT interval, which approximates the time taken from when the cardiac ventricles start to contract to when they finish relaxing. Prolongation of the QT interval on the electrocardiogram can lead to a fatal arrhythmia known as Torsade de Pointes. Consequently, it is a mandatory requirement prior to first-in-human administration of potential new drug candidates that their ability to inhibit the hERG channel (a biomarker for QT prolongation) is investigated.

In the case of the AstraZeneca molecule, the molecule was assessed for hERG inhibition early on, and it was concluded to have a low potential to cause in vivo QT prolongation up to 100 μM. In later pre-clinical testing, the QT interval increased by 22% at a concentration of just 3 μM. Subsequent investigations found that a major metabolite was responsible. Hesperos was able to detect a clear PD effect at concentrations above 3 μM and worked to determine the mechanism of toxicity of the molecule.

The ability of these systems to assess cardiac function non-invasively in the presence of both parent molecule and metabolite over time, using multiplexed and repeat drug dosing regimes, provides an opportunity to run long-term studies for chronic administration of drugs to study their potential toxic effects.

Hesperos, Inc. is the first company spun out from the Tissue Chip Program at NCATS (National Center for Advancing Translational Sciences), which was established in 2011 to address the long timelines, steep costs and high failure rates associated with the drug development process. Hesperos currently is funded through NCATS’ Small Business Innovation Research program to undertake these studies and make tissue chip technology available as a service-based company.

“The application of tissue chip technology in drug testing can lead to advances in predicting the potential effects of candidate medicines in people,” said Danilo Tagle, Ph.D., associate director for special initiatives at NCATS.

###

About Hesperos
Hesperos, Inc. is a leader in efforts to characterize an individual’s biology with human-on-a-chip microfluidic systems. Founders Michael L. Shuler and James J. Hickman have been at the forefront of every major scientific discovery in this realm, from individual organ-on-a-chip constructs to fully functional, interconnected multi-organ systems. With a mission to revolutionize toxicology testing as well as efficacy evaluation for drug discovery, the company has created pumpless platforms with serum-free cellular mediums that allow multi-organ system communication and integrated computational PKPD modeling of live physiological responses utilizing functional readouts from neurons, cardiac, muscle, barrier tissues and neuromuscular junctions as well as responses from liver, pancreas and barrier tissues. Created from human stem cells, the fully human systems are the first in vitro solutions that accurately utilize in vitro systems to predict in vivo functions without the use of animal models, as featured in Science. More information is available at http://www.hesperosinc.com

Years ago I went to a congress focused on alternatives to animal testing (August 22, 2014 posting) and saw a video of heart cells in a petri dish (in vitro) beating in a heartlike rhythm. It was something like this,

ipscira, published on Oct 17, 2010: https://www.youtube.com/watch?v=BqzW9Jq-OVA

I found it amazing, as did the scientist who drew my attention to it. After all, it’s just a collection of heart cells. How do they start beating and keep time with each other?

Getting back to the latest research, here’s a link and a citation for the paper,

On the potential of in vitro organ-chip models to define temporal pharmacokinetic-pharmacodynamic relationships by Christopher W. McAleer, Amy Pointon, Christopher J. Long, Rocky L. Brighton, Benjamin D. Wilkin, L. Richard Bridges, Narasimham Narasimhan Sriram, Kristin Fabre, Robin McDougall, Victorine P. Muse, Jerome T. Mettetal, Abhishek Srivastava, Dominic Williams, Mark T. Schnepper, Jeff L. Roles, Michael L. Shuler, James J. Hickman & Lorna Ewart. Scientific Reports volume 9, Article number: 9619 (2019) DOI: https://doi.org/10.1038/s41598-019-45656-4 Published: 03 July 2019

This paper is open access.

I happened to look at the paper and found good definitions of pharmacokinetics and pharmacodynamics. I know it’s not for everyone but if you’ve ever been curious about the difference (from the Introduction of On the potential of in vitro organ-chip models to define temporal pharmacokinetic-pharmacodynamic relationships),

Integrative pharmacology is a discipline that builds an understanding of the inter-relationship between pharmacokinetics (PK), the drug’s time course for absorption, distribution, metabolism and excretion and pharmacodynamics (PD), the biological effect of a drug. In drug discovery, this multi-variate approach guides medicinal chemists to modify structural properties of a drug molecule to improve its chance of becoming a medicine in a process known as “lead optimization”.

*More than one person and more than one company and more than one country claims pioneer status where ‘human-on-a-chip’ is concerned.

SIGGRAPH (Special Interest Group on Computer GRAPHics and Interactive Techniques) and their art gallery from Aug. 12 – 16, 2018 (making the world synthetic) in Vancouver (Canada)

While my main interest is the group’s temporary art gallery, I am providing a brief explanatory introduction and a couple of previews for SIGGRAPH 2018.

Introduction

For anyone unfamiliar with the Special Interest Group on Computer GRAPHics and Interactive Techniques (SIGGRAPH) and its conferences, from the SIGGRAPH Wikipedia entry (Note: Links have been removed),

Some highlights of the conference are its Animation Theater and Electronic Theater presentations, where recently created CG films are played. There is a large exhibition floor, where several hundred companies set up elaborate booths and compete for attention and recruits. Most of the companies are in the engineering, graphics, motion picture, or video game industries. There are also many booths for schools which specialize in computer graphics or interactivity.

Dozens of research papers are presented each year, and SIGGRAPH is widely considered the most prestigious forum for the publication of computer graphics research.[1] The recent paper acceptance rate for SIGGRAPH has been less than 26%.[2] The submitted papers are peer-reviewed in a single-blind process.[3] There has been some criticism about the preference of SIGGRAPH paper reviewers for novel results rather than useful incremental progress.[4][5] …

This is the third SIGGRAPH Vancouver has hosted; the others were in 2011 and 2014.  The theme for the 2018 iteration is ‘Generations’; here’s more about it from an Aug. 2, 2018 article by Terry Flores for Variety,

While its focus is firmly forward thinking, SIGGRAPH 2018, the computer graphics, animation, virtual reality, games, digital art, mixed reality, and emerging technologies conference, is also tipping its hat to the past thanks to its theme this year: Generations. The conference runs Aug. 12-16 in Vancouver, B.C.

“In the literal people sense, pioneers in the computer graphics industry are standing shoulder to shoulder with researchers, practitioners and the future of the industry — young people — mentoring them, dabbling across multiple disciplines to innovate, relate, and grow,” says SIGGRAPH 2018 conference chair Roy C. Anthony, VP of creative development and operations at software and technology firm Ventuz. “This is really what SIGGRAPH has always been about. Generations really seemed like a very appropriate way of looking back and remembering where we all came from and how far we’ve come.”

SIGGRAPH 2018 has a number of treats in store for attendees, including the debut of Disney’s first VR film, the short “Cycles”; production sessions on the making of “Blade Runner 2049,” “Game of Thrones,” “Incredibles 2” and “Avengers: Infinity War”; as well as sneak peeks of Disney’s upcoming “Ralph Breaks the Internet: Wreck-It Ralph 2” and Laika’s “Missing Link.”

That list of ‘treats’ in the last paragraph makes the conference seem more like an iteration of a ‘comic-con’ than a technology conference.

Previews

I have four items about work that will be presented at SIGGRAPH 2018. First up, something about ‘redirected walking’ from a June 18, 2018 Association for Computing Machinery news release on EurekAlert,

CHICAGO–In the burgeoning world of virtual reality (VR) technology, it remains a challenge to provide users with a realistic perception of infinite space and natural walking capabilities in the virtual environment. A team of computer scientists has introduced a new approach to address this problem by leveraging a natural human phenomenon: eye blinks.

All humans are functionally blind for about 10 percent of the time under normal circumstances due to eye blinks and saccades, a rapid movement of the eye between two points or objects. Eye blinks are a common and natural cause of so-called “change blindness,” which indicates the inability for humans to notice changes to visual scenes. Zeroing in on eye blinks and change blindness, the team has devised a novel computational system that effectively redirects the user in the virtual environment during these natural instances, all with undetectable camera movements to deliver orientation redirection.

“Previous RDW [redirected walking] techniques apply rotations continuously while the user is walking. But the amount of unnoticeable rotations is limited,” notes Eike Langbehn, lead author of the research and doctoral candidate at the University of Hamburg. “That’s why an orthogonal approach is needed–we add some additional rotations when the user is not focused on the visuals. When we learned that humans are functionally blind for some time due to blinks, we thought, ‘Why don’t we do the redirection during eye blinks?'”

Human eye blinks occur approximately 10 to 20 times per minute, about every 4 to 19 seconds. Leveraging this window of opportunity–where humans are unable to detect major motion changes while in a virtual environment–the researchers devised an approach to synchronize a computer graphics rendering system with this visual process, and introduce any useful motion changes in virtual scenes to enhance users’ overall VR experience.

The researchers’ experiments revealed that imperceptible camera rotations of 2 to 5 degrees and translations of 4 to 9 cm of the user’s viewpoint are possible during a blink without users even noticing. They tracked test participants’ eye blinks by an eye tracker in a VR head-mounted display. In a confirmatory study, the team validated that participants could not reliably detect in which of two eye blinks their viewpoint was manipulated while walking a VR curved path. The tests relied on unconscious natural eye blinking, but the researchers say redirection during blinking could be carried out consciously. Since users can consciously blink multiple times a day without much effort, eye blinks provide great potential to be used as an intentional trigger in their approach.
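Here’s a minimal sketch of the core loop as I understand it from the release; the function names and control logic are my own invention, not the authors’ code. Only the thresholds come from the reported unnoticeable ranges (2 to 5 degrees of rotation and 4 to 9 cm of translation per blink).

```python
import random

# Sketch of blink-triggered redirected walking (my illustration, not the
# authors' implementation). During each detected blink, the virtual camera
# is nudged by a sub-threshold rotation and translation, slowly steering
# the walking user toward a desired heading without them noticing.

MAX_ROT_DEG = 5.0   # reported unnoticeable rotation: 2-5 degrees per blink
MAX_TRANS_M = 0.09  # reported unnoticeable translation: 4-9 cm per blink

def redirect_on_blink(camera_yaw_deg, camera_offset_m, desired_yaw_deg):
    """Apply one imperceptible correction while the user's eyes are shut."""
    error = desired_yaw_deg - camera_yaw_deg
    rotation = max(-MAX_ROT_DEG, min(MAX_ROT_DEG, error))  # clamp to limit
    translation = random.uniform(0.04, MAX_TRANS_M)        # 4-9 cm shift
    return camera_yaw_deg + rotation, camera_offset_m + translation

yaw, offset = 0.0, 0.0
for blink in range(12):            # blinks arrive ~10-20 times per minute
    yaw, offset = redirect_on_blink(yaw, offset, desired_yaw_deg=30.0)
print(f"yaw after 12 blinks: {yaw:.1f} degrees")  # converges on 30 degrees
```

At 10 to 20 blinks per minute and up to 5 degrees per blink, the technique can in principle accumulate roughly 50 to 100 degrees of unnoticed rotation per minute, which is what makes walking large virtual distances inside a small physical room plausible.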

The team will present their work at SIGGRAPH 2018, held 12-16 August in Vancouver, British Columbia. The annual conference and exhibition showcases the world’s leading professionals, academics, and creative minds at the forefront of computer graphics and interactive techniques.

“RDW is a big challenge since current techniques still need too much space to enable unlimited walking in VR,” notes Langbehn. “Our work might contribute to a reduction of space since we found out that unnoticeable rotations of up to five degrees are possible during blinks. This means we can improve the performance of RDW by approximately 50 percent.”

The team’s results could be used in combination with other VR research, such as novel steering algorithms, improved path prediction, and rotations during saccades, to name a few. Down the road, such techniques could some day enable consumer VR users to virtually walk beyond their living room.

Langbehn collaborated on the work with Frank Steinicke of University of Hamburg, Markus Lappe of University of Muenster, Gregory F. Welch of University of Central Florida, and Gerd Bruder, also of University of Central Florida. For the full paper and video, visit the team’s project page.

###

About ACM, ACM SIGGRAPH, and SIGGRAPH 2018

ACM, the Association for Computing Machinery, is the world’s largest educational and scientific computing society, uniting educators, researchers, and professionals to inspire dialogue, share resources, and address the field’s challenges. ACM SIGGRAPH is a special interest group within ACM that serves as an interdisciplinary community for members in research, technology, and applications in computer graphics and interactive techniques. SIGGRAPH is the world’s leading annual interdisciplinary educational experience showcasing the latest in computer graphics and interactive techniques. SIGGRAPH 2018, marking the 45th annual conference hosted by ACM SIGGRAPH, will take place from 12-16 August at the Vancouver Convention Centre in Vancouver, B.C.

They have provided an image illustrating what they mean (I don’t find it especially informative),

Caption: The viewing behavior of a virtual reality user, including fixations (in green) and saccades (in red). A blink fully suppresses visual perception. Credit: Eike Langbehn

Next up (2), there’s Disney Corporation’s first virtual reality (VR) short, from a July 19, 2018  Association for Computing Machinery news release on EurekAlert,

Walt Disney Animation Studios will debut its first ever virtual reality short film at SIGGRAPH 2018, and the hope is viewers will walk away feeling connected to the characters as equally as they will with the VR technology involved in making the film.

Cycles, an experimental film directed by Jeff Gipson, centers around the true meaning of creating a home and the life it holds inside its walls. The idea for the film is personal, inspired by Gipson’s childhood spending time with his grandparents and creating memories in their home, and later, having to move them to an assisted living residence.

“Every house has a story unique to the people, the characters who live there,” says Gipson. “We wanted to create a story in this single place and be able to have the viewer witness life happening around them. It is an emotionally driven film, expressing the real ups and downs, the happy and sad moments in life.”

For Cycles, Gipson also drew from his past life as an architect, having spent several years designing skate parks, and from his passion for action sports, including freestyle BMX. In Los Angeles, where Gipson lives, it is not unusual to find homes with an empty swimming pool reserved for skating or freestyle biking. Part of the pitch for Cycles came out of Gipson’s experience riding in these empty pools and being curious about the homes attached to them, the families who lived there, and the memories they made.

SIGGRAPH attendees will have the opportunity to experience Cycles at the Immersive Pavilion, a new space for this year’s conference. The Pavilion is devoted exclusively to virtual, augmented, and mixed reality and will contain: the VR Theater, a storytelling extravaganza that is part of the Computer Animation Festival; the Vrcade, a space for VR, AR, and MR games or experiences; and the well-known Village, for showcasing large-scale projects. SIGGRAPH 2018, held 12-16 August in Vancouver, British Columbia, is an annual gathering that showcases the world’s leading professionals, academics, and creative minds at the forefront of computer graphics and interactive techniques.

The production team completed Cycles in four months with about 50 collaborators as part of a professional development program at the studio. A key difference in VR filmmaking is getting creative with how to translate a story to the VR “screen.” Pre-visualizing the narrative, for one, was a challenge. Rather than traditional storyboarding, Gipson and his team used a mix of Quill VR painting techniques and motion capture to “storyboard” Cycles, incorporating painters and artists to generate sculptures or 3D models of characters early on and draw scenes for the VR space. The creators also got innovative with the use of light and color saturation in scenes to help guide the user’s eyes during the film.

“What’s cool for VR is that we are really on the edge of trying to figure out what it is and how to tell stories in this new medium,” says Gipson. “In VR, you can look anywhere and really be transported to a different world, experience it from different angles, and see every detail. We want people watching to feel alive and feel emotion, and give them a true cinematic experience.”

This is Gipson’s VR directorial debut. He joined Walt Disney Animation Studios in 2013, serving as a lighting artist on Disney favorites like Frozen, Zootopia, and Moana. Of getting to direct the studio’s first VR short, he says, “VR is an amazing technology and a lot of times the technology is what is really celebrated. We hope more and more people begin to see the emotional weight of VR films, and with Cycles in particular, we hope they will feel the emotions we aimed to convey with our story.”

Apparently this is a still from the ‘short’,

Caption: Disney Animation Studios will present ‘Cycles’ , its first virtual reality (VR) short, at ACM SIGGRAPH 2018. Credit: Disney Animation Studios

There’s also something (3) from Google as described in a July 26, 2018 Association of Computing Machinery news release on EurekAlert,

Google has unveiled a new virtual reality (VR) immersive experience based on a novel system that captures and renders high-quality, realistic images from the real world using light fields. Created by a team of leading researchers at Google, Welcome to Light Fields is the tech giant’s splash into the nascent arena of light fields VR experiences, an exciting corner of VR video technology gaining traction for its promise to deliver extremely high-quality imagery and experiences in the virtual world.

Google released Welcome to Light Fields earlier this year as a free app on Steam VR for HTC Vive, Oculus Rift, and Windows Mixed Reality headsets. The creators will demonstrate the VR experience at SIGGRAPH 2018, in the Immersive Pavilion, a new space for this year’s conference. The Pavilion is devoted exclusively to virtual, augmented, and mixed reality and will contain: the Vrcade, a space for VR, AR, and MR games or experiences; the VR Theater, a storytelling extravaganza that is part of the Computer Animation Festival; and the well-known Village, for showcasing large-scale projects. SIGGRAPH 2018, held 12-16 August in Vancouver, British Columbia, is an annual gathering that showcases the world’s leading professionals, academics, and creative minds at the forefront of computer graphics and interactive techniques.

Destinations in Welcome to Light Fields include NASA’s Space Shuttle Discovery, delivering to viewers an astronaut’s view inside the flight deck, which has never been open to the public; the pristine teak and mahogany interiors of the Gamble House, an architectural treasure in Pasadena, CA; and the glorious St. Stephen’s Church in Granada Hills, CA, home to a stunning wall of more than 14,000 pieces of glimmering stained glass.

“I love that light fields in VR can teleport you to exotic places in the real world, and truly make you believe you are there,” says Ryan Overbeck, software engineer at Google who co-led the project. “To me, this is magic.”

To bring this experience to life, Overbeck worked with a team that included Paul Debevec, senior staff engineer at Google, who managed the project and led the hardware piece with engineers Xueming Yu, Jay Busch, and Graham Fyffe. With Overbeck, Daniel Erickson and Daniel Evangelakos focused on the software end. The researchers designed a comprehensive system for capturing and rendering high-quality, spherical light field still images from footage captured in the real world. They developed two easy-to-use light field camera rigs, based on the GoPro Hero4 action sports camera, that efficiently capture thousands of images on the surface of a sphere. Those images were then passed through a cloud-based light-field-processing pipeline.

Among other things, explains Overbeck, “The processing pipeline uses computer vision to place the images in 3D and generate depth maps, and we use a modified version of our vp9 video codec to compress the light field data down to a manageable size.” To render a light field dataset, he notes, the team used a rendering algorithm that blends between the thousands of light field images in real-time.
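To give a flavor of that blending step, here is a minimal sketch, assuming a simple k-nearest-camera weighting scheme. It illustrates the general idea of blending captured views, not Google’s actual renderer; all array shapes and parameters here are assumptions for illustration.

```python
import numpy as np

def blend_light_field(camera_dirs, images, view_dir, k=4):
    """Synthesize a novel view by blending the k captured images whose
    camera directions are closest to the requested viewing direction.

    camera_dirs: (N, 3) unit vectors, one per captured image on the sphere
    images:      (N, H, W, 3) captured RGB images
    view_dir:    (3,) unit vector for the requested view
    Returns an (H, W, 3) blended image.
    """
    similarity = camera_dirs @ view_dir        # cosine of angle to each camera
    nearest = np.argsort(similarity)[-k:]      # indices of the k closest cameras
    weights = np.clip(similarity[nearest], 0.0, None)
    weights /= weights.sum()                   # normalize to a convex combination
    # Weighted average over the k nearest captured images.
    return np.tensordot(weights, images[nearest], axes=1)

# Toy example: 100 random camera directions with random 8x8 images.
rng = np.random.default_rng(0)
dirs = rng.normal(size=(100, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
imgs = rng.random((100, 8, 8, 3))
novel = blend_light_field(dirs, imgs, np.array([0.0, 0.0, 1.0]))
```

A production renderer would also warp each contributing image using the depth maps mentioned above before blending; the sketch skips that step for brevity.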

The team relied on Google’s talented pool of engineers in computer vision, graphics, video compression, and machine learning to overcome the unique challenges posed in light fields technology. They also collaborated closely with the WebM team (who make the vp9 video codec) to develop the high-quality light field compression format incorporated into their system, and leaned heavily on the expertise of the Jump VR team to help pose the images and generate depth maps. (Jump is Google’s professional VR system for achieving 3D-360 video production at scale.)

Indeed, with Welcome to Light Fields, the Google team is demonstrating the potential and promise of light field VR technology, showcasing the technology’s ability to provide a truly immersive experience with an unmatched level of realism. Though light fields technology has been researched and explored in computer graphics for more than 30 years, practical systems for actually delivering high-quality light field experiences have not been available until now.

Part of the team’s motivation behind creating this VR light field experience is to invigorate the nascent field.

“Welcome to Light Fields proves that it is now possible to make a compelling light field VR viewer that runs on consumer-grade hardware, and we hope that this knowledge will encourage others to get involved with building light field technology and media,” says Overbeck. “We understand that in order to eventually make compelling consumer products based on light fields, we need a thriving light field ecosystem. We need open light field codecs, we need artists creating beautiful light field imagery, and we need people using VR in order to engage with light fields.”

I don’t really understand why this image, which looks like something that belongs in advertising material, would be chosen to accompany a news release on a science-based distribution outlet,

Caption: A team of leading researchers at Google will unveil the new immersive virtual reality (VR) experience “Welcome to Light Fields” at ACM SIGGRAPH 2018. Credit: Image courtesy of Google/Overbeck

Finally (4), ‘synthesizing realistic sounds’ is announced in an Aug. 6, 2018 Stanford University (US) news release (also on EurekAlert) by Taylor Kubota,

Advances in computer-generated imagery have brought vivid, realistic animations to life, but the sounds associated with what we see simulated on screen, such as two objects colliding, are often recordings. Now researchers at Stanford University have developed a system that automatically renders accurate sounds for a wide variety of animated phenomena.

“There’s been a Holy Grail in computing of being able to simulate reality for humans. We can animate scenes and render them visually with physics and computer graphics, but, as for sounds, they are usually made up,” said Doug James, professor of computer science at Stanford University. “Currently there exists no way to generate realistic synchronized sounds for complex animated content, such as splashing water or colliding objects, automatically. This fills that void.”

The researchers will present their work on this sound synthesis system as part of ACM SIGGRAPH 2018, the leading conference on computer graphics and interactive techniques. In addition to enlivening movies and virtual reality worlds, this system could also help engineering companies prototype how products would sound before being physically produced, and hopefully encourage designs that are quieter and less irritating, the researchers said.

“I’ve spent years trying to solve partial differential equations – which govern how sound propagates – by hand,” said Jui-Hsien Wang, a graduate student in James’ lab and in the Institute for Computational and Mathematical Engineering (ICME), and lead author of the paper. “This is actually a place where you don’t just solve the equation but you can actually hear it once you’ve done it. That’s really exciting to me and it’s fun.”

Predicting sound

Informed by geometry and physical motion, the system figures out the vibrations of each object and how, like a loudspeaker, those vibrations excite sound waves. It computes the pressure waves cast off by rapidly moving and vibrating surfaces but does not replicate room acoustics. So, although it does not recreate the echoes in a grand cathedral, it can resolve detailed sounds from scenarios like a crashing cymbal, an upside-down bowl spinning to a stop, a glass filling up with water or a virtual character talking into a megaphone.

Most sounds associated with animations rely on pre-recorded clips, which require vast manual effort to synchronize with the action on-screen. These clips are also restricted to noises that exist – they can’t predict anything new. Other systems that produce and predict sounds as accurate as those of James and his team work only in special cases, or assume the geometry doesn’t deform very much. They also require a long pre-computation phase for each separate object.
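Those precomputed approaches are typically variants of modal sound synthesis, in which an object’s vibration modes are computed ahead of time and each mode rings as a damped sinusoid. Below is a minimal sketch of that older idea, for contrast with the Stanford team’s wave-based solver; the mode frequencies, decay rates, and amplitudes are hypothetical.

```python
import numpy as np

def modal_sound(freqs_hz, decay_rates, amps, duration=1.0, sample_rate=44100):
    """Render a struck object's sound as a sum of exponentially damped
    sinusoids, one per precomputed vibration mode."""
    t = np.arange(int(duration * sample_rate)) / sample_rate
    signal = np.zeros_like(t)
    for f, d, a in zip(freqs_hz, decay_rates, amps):
        signal += a * np.exp(-d * t) * np.sin(2 * np.pi * f * t)
    return signal / np.max(np.abs(signal))  # normalize to [-1, 1]

# Hypothetical modes, loosely bell-like; a real pipeline would extract
# them from a finite-element model of the object being struck.
audio = modal_sound(freqs_hz=[440, 1132, 2210],
                    decay_rates=[3.0, 6.0, 12.0],
                    amps=[1.0, 0.5, 0.25])
```

The per-object precomputation of those modes is exactly the cost the Stanford system avoids by simulating the acoustic wave field for the whole scene at once.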

“Ours is essentially just a render button with minimal pre-processing that treats all objects together in one acoustic wave simulation,” said Ante Qu, a graduate student in James’ lab and co-author of the paper.

The simulated sound that results from this method is highly detailed. It takes into account the sound waves produced by each object in an animation but also predicts how those waves bend, bounce or deaden based on their interactions with other objects and sound waves in the scene.

Challenges ahead

In its current form, the group’s process takes a while to create the finished product. But, now that they have proven this technique’s potential, they can focus on performance optimizations, such as implementing their method on parallel GPU hardware, that should make it drastically faster.

And, even in its current state, the results are worth the wait.

“The first water sounds we generated with the system were among the best ones we had simulated – and water is a huge challenge in computer-generated sound,” said James. “We thought we might get a little improvement, but it is dramatically better than previous approaches even right out of the box. It was really striking.”

Although the group’s work has faithfully rendered sounds of various objects spinning, falling and banging into each other, more complex objects and interactions – like the reverberating tones of a Stradivarius violin – remain difficult to model realistically. That, the group said, will have to wait for a future solution.

Timothy Langlois of Adobe Research is a co-author of this paper. This research was funded by the National Science Foundation and Adobe Research. James is also a professor, by courtesy, of music and a member of Stanford Bio-X.

Researchers Timothy Langlois, Doug L. James, Ante Qu and Jui-Hsien Wang have created a video featuring highlights of animations with sounds synthesized using their new system.

The researchers have also provided this image,

By computing pressure waves cast off by rapidly moving and vibrating surfaces – such as a cymbal – a new sound synthesis system developed by Stanford researchers can automatically render realistic sound for computer animations. (Image credit: Timothy Langlois, Doug L. James, Ante Qu and Jui-Hsien Wang)

It does seem like we’re synthesizing the world around us, eh?

The SIGGRAPH 2018 art gallery

Here’s what SIGGRAPH had to say about its 2018 art gallery in Vancouver and the themes for the conference and the gallery (from a May 18, 2018 Association for Computing Machinery news release on globalnewswire.com, also available on this 2018 SIGGRAPH webpage),

SIGGRAPH 2018, the world’s leading showcase of digital art created using computer graphics and interactive techniques, will present a special Art Gallery, entitled “Origins,” and historic Art Papers in Vancouver, B.C. The 45th SIGGRAPH conference will take place 12–16 August at the Vancouver Convention Centre. The programs will also honor the generations of creators that have come before through a special, 50th anniversary edition of the Leonardo journal. To register for the conference, visit S2018.SIGGRAPH.ORG.

The SIGGRAPH 2018 ART GALLERY is a curated exhibition, conceived as a dialogical space that enables the viewer to reflect on man’s diverse cultural values and rituals through contemporary creative practices. Building upon an exciting and eclectic selection of creative practices mediated through technologies that represent the sophistication of our times, the SIGGRAPH 2018 Art Gallery will embrace the narratives of the indigenous communities based near Vancouver and throughout Canada as a source of inspiration. The exhibition will feature contemporary media artworks, art pieces by indigenous communities, and other traces of technologically mediated ludic practices.

Andrés Burbano, SIGGRAPH 2018 Art Gallery chair and professor at Universidad de los Andes, said, “The Art Gallery aims to articulate myth and technology, science and art, the deep past and the computational present, and will coalesce around a theme of ‘Origins.’ Media and technological creative expressions will explore principles such as the origins of the cosmos, the origins of life, the origins of human presence, the origins of the occupation of territories in the Americas, and the origins of people living in the vast territories of the Arctic.”

He continued, “The venue [in Vancouver] hopes to rekindle the original spark that ignited the collaborative spirit of the SIGGRAPH community of engineers, scientists, and artists, who came together to create the very first conference in the early 1970s.”

Highlights from the 2018 Art Gallery include:

Transformation Mask (Canada) [Technology Based]
Shawn Hunt, independent; and Microsoft Garage: Andy Klein, Robert Butterworth, Jonathan Cobb, Jeremy Kersey, Stacey Mulcahy, Brendan O’Rourke, Brent Silk, and Julia Taylor-Hell, Microsoft Vancouver

TRANSFORMATION MASK is an interactive installation that features the Microsoft HoloLens. It utilizes electronics and mechanical engineering to express a physical and digital transformation. Participants are immersed in spatial sounds and holographic visuals.

Somnium (U.S.) [Science Based]
Marko Peljhan, Danny Bazo, and Karl Yerkes, University of California, Santa Barbara

Somnium is a cybernetic installation that provides visitors with the ability to sensorily, cognitively, and emotionally contemplate and experience exoplanetary discoveries, their macro and micro dimensions, and the potential for life in our Galaxy. Some might call it a “space telescope.”

Ernest Edmonds Retrospective – Art Systems 1968-2018 (United Kingdom) [History Based]
Ernest Edmonds, De Montfort University

Celebrating one of the pioneers of computer graphics-based art since the early 1970s, this Ernest Edmonds career retrospective will showcase snapshots of Edmonds’ work as it developed over the years. With one piece from each decade, the retrospective will also demonstrate how vital the Leonardo journal has been throughout the 50-year journey.

In addition to the works above, the Art Gallery will feature pieces from notable female artists Ozge Samanci, Ruth West, and Nicole L’Hullier. For more information about the Edmonds retrospective, read THIS POST ON THE ACM SIGGRAPH BLOG.

The SIGGRAPH 2018 ART PAPERS program is designed to feature research from artists, scientists, theorists, technologists, historians, and more in one of four categories: project description, theory/criticism, methods, or history. The chosen work was selected by an international jury of scholars, artists, and immersive technology developers.

To celebrate the 50th anniversary of LEONARDO (MIT Press), and 10 years of its annual SIGGRAPH issue, SIGGRAPH 2018 is pleased to announce a special anniversary edition of the journal, which will feature the 2018 art papers. For 50 years, Leonardo has been the definitive publication for artist-academics. To learn more about the relationship between SIGGRAPH and the journal, listen to THIS EPISODE OF THE SIGGRAPH SPOTLIGHT PODCAST.

“In order to encourage a wider range of topics, we introduced a new submission type, short papers. This enabled us to accept more content than in previous years. Additionally, for the first time, we will introduce sessions that integrate the Art Gallery artist talks with Art Papers talks, promoting richer connections between these two creative communities,” said Angus Forbes, SIGGRAPH 2018 Art Papers chair and professor at University of California, Santa Cruz.

Art Papers highlights include:

Alienating the Familiar with CGI: A Recipe for Making a Full CGI Art House Animated Feature [Long]
Alex Counsell and Paul Charisse, University of Portsmouth

This paper explores the process of making and funding an art house feature film using full CGI in a marketplace where this has never been attempted. It explores cutting-edge technology and production approaches, as well as routes to successful fundraising.

Augmented Fauna and Glass Mutations: A Dialogue Between Material and Technique in Glassblowing and 3D Printing [Long]
Tobias Klein, City University of Hong Kong

The two presented artworks, “Augmented Fauna” and “Glass Mutations,” were created during an artist residency at the PILCHUCK GLASS SCHOOL. They are examples of the qualities and methods established through a synthesis between digital workflows and traditional craft processes and thus formulate the notion of digital craftsmanship.

Inhabitat: An Imaginary Ecosystem in a Children’s Science Museum [Short]
Graham Wakefield, York University, and Haru Hyunkyung Ji, OCAD University

“Inhabitat” is a mixed reality artwork in which participants become part of an imaginary ecology through three simultaneous perspectives of scale and agency; three distinct ways to see with other eyes. This imaginary world was exhibited at a children’s science museum for five months, using an interactive projection-augmented sculpture, a large screen and speaker array, and a virtual reality head-mounted display.

What’s the what?

My father used to say that and I always assumed it meant summarize the high points, if you need to, and get to the point—fast. In that spirit, I am both fascinated and mildly appalled. The virtual, mixed, and augmented reality technologies, as well as the others being featured at SIGGRAPH 2018, are wondrous in many ways, but it seems we are coming ever closer to a world where we no longer interact with nature or other humans directly. (See my August 10, 2018 posting about the ‘extinction of experience’ for research that encourages more direct interaction with nature.) I realize that SIGGRAPH is intended as a primarily technical experience but I think a little more content questioning these technologies and their applications (social implications) might be in order. That’s often the artist’s role but I can’t see anything in the art gallery descriptions that hints at any sort of fundamental critique.

Faster diagnostics with nanoparticles and magnetic phenomenon discovered 170 years ago

A Jan. 19, 2017 news item on ScienceDaily announces some new research from the University of Central Florida (UCF),

A UCF researcher has combined cutting-edge nanoscience with a magnetic phenomenon discovered more than 170 years ago to create a method for speedy medical tests.

The discovery, if commercialized, could lead to faster test results for HIV, Lyme disease, syphilis, rotavirus and other infectious conditions.

“I see no reason why a variation of this technique couldn’t be in every hospital throughout the world,” said Shawn Putnam, an assistant professor in the University of Central Florida’s College of Engineering & Computer Science.

A Jan. 19, 2017 UCF news release by Mark Schlueb, which originated the news item, provides more technical detail,

At the core of the research recently published in the academic journal Small are nanoparticles – tiny particles about one-billionth of a meter in size. Putnam’s team coated nanoparticles with the antibody to BSA, or bovine serum albumin, which is commonly used as the basis of a variety of diagnostic tests.

By mixing the nanoparticles in a test solution – such as one used for a blood test – the BSA proteins preferentially bind with the antibodies that coat the nanoparticles, like a lock and key.

That reaction was already well known. But Putnam’s team came up with a novel way of measuring the quantity of proteins present. He used nanoparticles with an iron core and applied a magnetic field to the solution, causing the particles to align in a particular formation. As proteins bind to the antibody-coated particles, the rotation of the particles becomes sluggish, which is easy to detect with laser optics.

The interaction of a magnetic field and light is known as Faraday rotation, a principle discovered by scientist Michael Faraday in 1845. Putnam adapted it for biological use.

“It’s an old theory, but no one has actually applied this aspect of it,” he said.

Other antigens and their unique antibodies could be substituted for the BSA protein used in the research, allowing medical tests for a wide array of infectious diseases.

The proof of concept shows the method could be used to produce biochemical immunology test results in as little as 15 minutes, compared to several hours for ELISA, or enzyme-linked immunosorbent assay, which is currently a standard approach for biomolecule detection.
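To see why protein binding makes the particles’ rotation sluggish, note that the Stokes-Einstein-Debye relation gives a sphere’s rotational relaxation time as tau = 4*pi*eta*r^3 / (kB*T), so the response slows with the cube of the hydrodynamic radius. Here is a minimal sketch with assumed, illustrative numbers, not values from the paper:

```python
import math

def rotational_relaxation_time(radius_m, viscosity_pa_s=1.0e-3, temp_k=298.0):
    """Stokes-Einstein-Debye estimate for a sphere rotating in a fluid:
    tau = 4 * pi * eta * r^3 / (kB * T). Binding events that grow the
    hydrodynamic radius slow the rotation as the radius cubed."""
    k_b = 1.380649e-23  # Boltzmann constant, J/K
    return 4.0 * math.pi * viscosity_pa_s * radius_m**3 / (k_b * temp_k)

# Hypothetical radii: a 50 nm particle versus the same particle with a
# bound protein layer taking it to 60 nm.
tau_bare = rotational_relaxation_time(50e-9)
tau_bound = rotational_relaxation_time(60e-9)
print(f"slowdown ratio: {tau_bound / tau_bare:.2f}")  # (60/50)^3, about 1.73x
```

Even a thin bound layer produces a measurable slowdown, which is the change the laser optics pick up.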

Here’s a link to and a citation for the paper,

High-Throughput, Protein-Targeted Biomolecular Detection Using Frequency-Domain Faraday Rotation Spectroscopy by Richard J. Murdock, Shawn A. Putnam, Soumen Das, Ankur Gupta, Elyse D. Z. Chase, and Sudipta Seal. Small DOI: 10.1002/smll.201602862 Version of Record online: 16 JAN 2017

© 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim

This paper is behind a paywall.

Solar-powered clothing

This research comes from the University of Central Florida (US) and includes a pop culture reference to the movie “Back to the Future.”  From a Nov. 14, 2016 news item on phys.org,

Marty McFly’s self-lacing Nikes in Back to the Future Part II inspired a UCF scientist who has developed filaments that harvest and store the sun’s energy—and can be woven into textiles.

The breakthrough would essentially turn jackets and other clothing into wearable, solar-powered batteries that never need to be plugged in. It could one day revolutionize wearable technology, helping everyone from soldiers who now carry heavy loads of batteries to a texting-addicted teen who could charge his smartphone by simply slipping it in a pocket.

A Nov. 14, 2016 University of Central Florida news release (also on EurekAlert) by Mark Schlueb, which originated the news item, expands on the theme,

“That movie was the motivation,” Associate Professor Jayan Thomas, a nanotechnology scientist at the University of Central Florida’s NanoScience Technology Center, said of the film released in 1989. “If you can develop self-charging clothes or textiles, you can realize those cinematic fantasies – that’s the cool thing.”

Thomas already has been lauded for earlier ground-breaking research. Last year, he received an R&D 100 Award – given to the top inventions of the year worldwide – for his development of a cable that can not only transmit energy like a normal cable but also store energy like a battery. He’s also working on semi-transparent solar cells that can be applied to windows, allowing some light to pass through while also harvesting solar power.

His new work builds on that research.

“The idea came to me: We make energy-storage devices and we make solar cells in the labs. Why not combine these two devices together?” Thomas said.

Thomas, who holds joint appointments in the College of Optics & Photonics and the Department of Materials Science & Engineering, set out to do just that.

Taking it further, he envisioned technology that could enable wearable tech. His research team developed filaments in the form of copper ribbons that are thin, flexible and lightweight. The ribbons have a solar cell on one side and energy-storing layers on the other.

Though more comfortable with advanced nanotechnology, Thomas and his team then bought a small, tabletop loom. After another UCF scientist taught them to use it, they wove the ribbons into a square of yarn.

The proof-of-concept shows that the filaments could be laced throughout jackets or other outwear to harvest and store energy to power phones, personal health sensors and other tech gadgets. It’s an advancement that overcomes the main shortcoming of solar cells: The energy they produce must flow into the power grid or be stored in a battery that limits their portability.

“A major application could be with our military,” Thomas said. “When you think about our soldiers in Iraq or Afghanistan, they’re walking in the sun. Some of them are carrying more than 30 pounds of batteries on their bodies. It is hard for the military to deliver batteries to these soldiers in this hostile environment. A garment like this can harvest and store energy at the same time if sunlight is available.”

There are a host of other potential uses, including electric cars that could generate and store energy whenever they’re in the sun.

“That’s the future. What we’ve done is demonstrate that it can be made,” Thomas said. “It’s going to be very useful for the general public and the military and many other applications.”

The proof-of-concept shows that the filaments could be laced throughout jackets or other outwear to harvest and store energy to power phones, personal health sensors and other tech gadgets. It’s an advancement that overcomes the main shortcoming of solar cells: the energy they produce must flow into the power grid or be stored in a battery that limits their portability. Credit: UCF

Here’s a link to and a citation for the paper,

Wearable energy-smart ribbons for synchronous energy harvest and storage by Chao Li, Md. Monirul Islam, Julian Moore, Joseph Sleppy, Caleb Morrison, Konstantin Konstantinov, Shi Xue Dou, Chait Renduchintala, & Jayan Thomas. Nature Communications 7, Article number: 13319 (2016)  doi:10.1038/ncomms13319 Published online: 11 November 2016

This paper is open access.

Dexter Johnson in a Nov. 15, 2016 posting on his blog Nanoclast on the IEEE (Institute of Electrical and Electronics Engineers) provides context for this research and, in this excerpt, more insight from the researcher,

In a telephone interview with IEEE Spectrum, Thomas did concede that at this point, the supercapacitor was not capable of storing enough energy to replace the batteries entirely, but could be used to make a hybrid battery that would certainly reduce the load a soldier carries.

Thomas added: “By combining a few sets of ribbons (2-3 ribbons) in parallel and connecting these sets (3-4) in a series, it’s possible to provide enough power to operate a radio for 10 minutes. …
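As a back-of-the-envelope illustration of that series/parallel arithmetic: supercapacitor energy is E = 0.5*C*V^2, parallel ribbons add capacitance, and series-connected sets add voltage while dividing capacitance. The per-ribbon values in this sketch are hypothetical, not figures from the paper:

```python
def bank_energy_joules(c_ribbon_f, v_ribbon, n_parallel, n_series):
    """Energy stored in a series/parallel bank of supercapacitor ribbons.

    Parallel ribbons add capacitance; stacking sets in series adds
    voltage while dividing capacitance. E = 0.5 * C * V^2.
    """
    c_total = (c_ribbon_f * n_parallel) / n_series  # bank capacitance, farads
    v_total = v_ribbon * n_series                   # bank voltage, volts
    return 0.5 * c_total * v_total**2

# Hypothetical per-ribbon values: 0.1 F at 1 V, wired as Thomas
# describes it: 3 ribbons per parallel set, 4 sets in series.
print(bank_energy_joules(0.1, 1.0, n_parallel=3, n_series=4))  # 0.6 J
```

The useful property is that total energy scales with the total number of ribbons regardless of the wiring, while the series/parallel split sets the output voltage the attached device sees.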

For anyone interested in knowing more about how this research fits into the field of textiles that harvest energy, I recommend reading Dexter’s piece.

“Breaking Me Softly” at the nanoscale

“Breaking Me Softly” sounds like a song title but in this case the phrase has been coined to describe a new technique for controlling materials at the nanoscale, according to a June 6, 2016 news item on ScienceDaily,

A finding by a University of Central Florida researcher that unlocks a means of controlling materials at the nanoscale and opens the door to a new generation of manufacturing is featured online in the journal Nature.

Using a pair of pliers in each hand and gradually pulling taut a piece of glass fiber coated in plastic, associate professor Ayman Abouraddy found that something unexpected and never before documented occurred — the inner fiber fragmented in an orderly fashion.

“What we expected to see happen is NOT what happened,” he said. “While we thought the core material would snap into two large pieces, instead it broke into many equal-sized pieces.”

He referred to the technique in the Nature article title as “Breaking Me Softly.”

A June 6, 2016 University of Central Florida (UCF) news release (also on EurekAlert) by Barbara Abney, which originated the news item, expands on the theme,

The process of pulling fibers to force the realignment of the molecules that hold them together, known as cold drawing, has been the standard for mass production of flexible fibers like plastic and nylon for most of the last century.

Abouraddy and his team have shown that the process may also be applicable to multi-layered materials, a finding that could lead to the manufacturing of a new generation of materials with futuristic attributes.

“Advanced fibers are going to be pursuing the limits of anything a single material can endure today,” Abouraddy said.

For example, packaging together materials with optical and mechanical properties, along with sensors that could monitor such vital signs as blood pressure and heart rate, would make it possible to create clothing capable of transmitting vital data to a doctor’s office via the Internet.

The ability to control breakage in a material is critical to developing computerized processes for potential manufacturing, said Yuanli Bai, a fracture mechanics specialist in UCF’s College of Engineering and Computer Science.

Abouraddy contacted Bai, who is a co-author on the paper, about three years ago and asked him to analyze the test results on a wide variety of materials, including silicon, silk, gold and even ice.

He also contacted Robert S. Hoy, a University of South Florida physicist who specializes in the properties of materials like glass and plastic, for a better understanding of what he found.

Hoy said he had never seen the phenomenon Abouraddy was describing, but that it made great sense in retrospect.

The research takes what has traditionally been a problem in materials manufacturing and turns it into an asset, Hoy said.

“Dr. Abouraddy has found a new application of necking” – a process that occurs when cold drawing causes non-uniform strain in a material, Hoy said. “Usually you try to prevent necking, but he exploited it to do something potentially groundbreaking.”

The necking phenomenon was discovered at DuPont at the end of the 1920s and ushered in the age of textiles and garments made of synthetic fibers.

Abouraddy said that cold-drawing is what makes synthetic fibers like nylon and polyester useful. While those fibers are initially brittle, once cold-drawn, the fibers toughen up and become useful in everyday commodities.
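One classical way to see why cold drawing could leave equal-sized pieces is the Kelly-Tyson shear-lag estimate: stress transfers into the core from each fragment’s ends through the cladding, so any fragment longer than a critical length l_c = sigma_f*d/(2*tau) keeps breaking near its middle until every piece falls just under l_c. This is only the textbook picture, not the model in the Nature paper, and the numbers below are hypothetical:

```python
def critical_fragment_length_m(core_strength_pa, core_diameter_m, shear_strength_pa):
    """Kelly-Tyson shear-lag estimate: the shortest fragment that can
    still be loaded to its breaking stress through the cladding is
    l_c = sigma_f * d / (2 * tau). Longer fragments keep breaking near
    their middles, driving the pieces toward uniform lengths."""
    return core_strength_pa * core_diameter_m / (2.0 * shear_strength_pa)

# Hypothetical numbers: ~1 GPa core strength, 10 um core diameter,
# 20 MPa interfacial shear strength.
l_c = critical_fragment_length_m(1e9, 10e-6, 20e6)
print(f"critical fragment length ~ {l_c * 1e6:.0f} um")  # ~250 um
```

In this picture, tuning the drawing force and the interface properties sets l_c, which is one way to think about the “controlled fragmentation” in the paper’s title.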

Only recently have fibers made of multiple materials become possible, he said. That research will be the centerpiece of a $317-million U.S. Department of Defense program focused on smart fibers that Abouraddy and UCF will assist with. The Revolutionary Fibers and Textiles Manufacturing Innovation Institute (RFT-MII), led by the Massachusetts Institute of Technology, will incorporate research findings published in the Nature paper, Abouraddy said.

The implications for manufacturing of the smart materials of the future are vast.

By controlling the mechanical force used to pull the fiber, and therefore controlling the breakage patterns, materials can be developed with customized properties, allowing them to interact with each other and with external forces such as the sun (for harvesting energy) and the internet in customizable ways.

A co-author on the paper, Ali P. Gordon, an associate professor in the Department of Mechanical & Aerospace Engineering and director of UCF’s Mechanics of Materials Research Group, said that the finding is significant because it shows that by carefully controlling the loading condition imparted to the fiber, materials can be developed with tailored performance attributes.

“Processing-structure-property relationships need to be strategically characterized for complex material systems. By combining experiments, microscopy, and computational mechanics, the physical mechanisms of the fragmentation process were more deeply understood,” Gordon said.

Abouraddy teamed up with seven UCF scientists from the College of Optics & Photonics and the College of Engineering & Computer Science (CECS) to write the paper. Additional authors include one researcher each from the Massachusetts Institute of Technology, Nanyang Technological University in Singapore and the University of South Florida.

Here’s a link to and a citation for the paper,

Controlled fragmentation of multimaterial fibres and films via polymer cold-drawing by Soroush Shabahang, Guangming Tao, Joshua J. Kaufman, Yangyang Qiao, Lei Wei, Thomas Bouchenot, Ali P. Gordon, Yoel Fink, Yuanli Bai, Robert S. Hoy & Ayman F. Abouraddy. Nature (2016) doi:10.1038/nature17980 Published online  06 June 2016

This paper is behind a paywall.

$1.4B for US National Nanotechnology Initiative (NNI) in 2017 budget

According to an April 1, 2016 news item on Nanowerk, the US National Nanotechnology Initiative (NNI) has released its 2017 budget supplement,

The President’s Budget for Fiscal Year 2017 provides $1.4 billion for the National Nanotechnology Initiative (NNI), affirming the important role that nanotechnology continues to play in the Administration’s innovation agenda. Cumulatively totaling nearly $24 billion since the inception of the NNI in 2001, the President’s 2017 Budget supports nanoscale science, engineering, and technology R&D at 11 agencies.

Another 9 agencies have nanotechnology-related mission interests or regulatory responsibilities.

An April 1, 2016 NNI news release, which originated the news item, affirms the Obama administration’s commitment to the NNI and notes the supplement serves as an annual report amongst other functions,

Throughout its two terms, the Obama Administration has maintained strong fiscal support for the NNI and has implemented new programs and activities to engage the broader nanotechnology community to support the NNI’s vision that the ability to understand and control matter at the nanoscale will lead to new innovations that will improve our quality of life and benefit society.

This Budget Supplement documents progress of these participating agencies in addressing the goals and objectives of the NNI. It also serves as the Annual Report for the NNI called for under the provisions of the 21st Century Nanotechnology Research and Development Act of 2003 (Public Law 108-153, 15 USC §7501). The report also addresses the requirement for Department of Defense reporting on its nanotechnology investments, per 10 USC §2358.

For additional details and to view the full document, visit www.nano.gov/2017BudgetSupplement.

I don’t seem to have posted about the 2016 NNI budget allotment, but 2017’s $1.4B represents a drop of $100M from 2015’s $1.5B allotment.

The 2017 NNI budget supplement describes the NNI’s main focus,

Over the past year, the NNI participating agencies, the White House Office of Science and Technology Policy (OSTP), and the National Nanotechnology Coordination Office (NNCO) have been charting the future directions of the NNI, including putting greater focus on promoting commercialization and increasing education and outreach efforts to the broader nanotechnology community. As part of this effort, and in keeping with recommendations from the 2014 review of the NNI by the President’s Council of Advisors on Science and Technology, the NNI has been working to establish Nanotechnology-Inspired Grand Challenges, ambitious but achievable goals that will harness nanotechnology to solve National or global problems and that have the potential to capture the public’s imagination. Based upon inputs from NNI agencies and the broader community, the first Nanotechnology-Inspired Grand Challenge (for future computing) was announced by OSTP on October 20, 2015, calling for a collaborative effort to “create a new type of computer that can proactively interpret and learn from data, solve unfamiliar problems using what it has learned, and operate with the energy efficiency of the human brain.” This Grand Challenge has generated broad interest within the nanotechnology community—not only NNI agencies, but also industry, technical societies, and private foundations—and planning is underway to address how the agencies and the community will work together to achieve this goal. Topics for additional Nanotechnology-Inspired Grand Challenges are under review.

Interestingly, it also offers an explanation of the images on its cover (Note: Links have been removed),

[Image: cover of the 2017 NNI Budget Supplement]

About the cover

Each year’s National Nanotechnology Initiative Supplement to the President’s Budget features cover images illustrating recent developments in nanotechnology stemming from NNI activities that have the potential to make major contributions to National priorities. The text below explains the significance of each of the featured images on this year’s cover.

[Image: close-up of the front cover of the 2017 NNI Budget Supplement]

Front cover featured images (above): Images illustrating three novel nanomedicine applications. Center: microneedle array for glucose-responsive insulin delivery imaged using fluorescence microscopy. This “smart insulin patch” is based on painless microneedles loaded with hypoxia-sensitive vesicles ~100 nm in diameter that release insulin in response to high glucose levels. Dr. Zhen Gu and colleagues at the University of North Carolina (UNC) at Chapel Hill and North Carolina State University have demonstrated that this patch effectively regulates the blood glucose of type 1 diabetic mice with faster response than current pH-sensitive formulations. The inset image on the lower right shows the structure of the nanovesicles; each microneedle contains more than 100 million of these vesicles. The research was supported by the American Diabetes Association, the State of North Carolina, the National Institutes of Health (NIH), and the National Science Foundation (NSF). Left: colorized rendering of a candidate universal flu vaccine nanoparticle. The vaccine molecule, developed at the NIH Vaccine Research Center, displays only the conserved part of the viral spike and stimulates the production of antibodies to fight against the ever-changing flu virus. The vaccine is engineered from a ~13 nm ferritin core (blue) combined with a 7 nm influenza antigen (green). Image credit: NIH National Institute of Allergy and Infectious Diseases (NIAID). Right: colorized scanning electron micrograph of Ebola virus particles on an infected VERO E6 cell. Blue represents individual Ebola virus particles. The image was produced by John Bernbaum and Jiro Wada at NIAID. When the Ebola outbreak struck in 2014, the Food and Drug Administration authorized emergency use of lateral flow immunoassays for Ebola detection that use gold nanoparticles for visual interpretation of the tests.

[Image: close-up of the back cover of the 2017 NNI Budget Supplement]

Back cover featured images (above): Images illustrating examples of NNI educational outreach activities. Center: Comic from the NSF/NNI competition Generation Nano: Small Science Superheroes. Illustration by Amina Khan, NSF. Left of Center: Polymer Nanocone Array (biomimetic of antimicrobial insect surface) by Kyle Nowlin, UNC-Greensboro, winner from the first cycle of the NNI’s student image contest, EnvisioNano. Right of Center: Gelatin Nanoparticles in Brain (nasal delivery of stroke medication to the brain) by Elizabeth Sawicki, University of Illinois at Urbana-Champaign, winner from the second cycle of EnvisioNano. Outside right: still photo from the video Chlorination-less (water treatment method using reusable nanodiamond powder) by Abelardo Colon and Jennifer Gill, University of Puerto Rico at Rio Piedras, the winning video from the NNI’s Student Video Contest. Outside left: Society of Emerging NanoTechnologies (SENT) student group at the University of Central Florida, one of the initial nodes in the developing U.S. Nano and Emerging Technologies Student Network; photo by Alexis Vilaboy.