Tag Archives: University of Central Florida

Human-on-a-chip predicts in vivo results based on in vitro model … for the first time

If successful, the hope is that ‘human-on-a-chip’ systems will replace most, if not all, animal testing. This July 3, 2019 Hesperos news release (also on EurekAlert) suggests scientists are making serious gains in the drive to replace animal testing (Note: For anyone having difficulty with the terms pharmacokinetics and pharmacodynamics, there are definitions towards the end of this posting, which may prove helpful),

Hesperos Inc., pioneers* of the “human-on-a-chip” in vitro system, has announced the use of its innovative multi-organ model to successfully measure the concentration and metabolism of two known cardiotoxic small molecules over time, and to accurately describe the drugs’ behavior and toxic effects in vivo. The findings further support the potential of body-on-a-chip systems to transform the drug discovery process.

In a study published in Scientific Reports, in collaboration with AstraZeneca, Hesperos described how they used a pumpless heart model and a heart:liver system to evaluate the temporal pharmacokinetic/pharmacodynamic (PKPD) relationship for terfenadine, an antihistamine that was banned due to toxic cardiac effects, as well as determine its mechanism of toxicity.

The study found there was a time-dependent, drug-induced response in the heart model. Further experiments were conducted, adding a metabolically competent liver module to the Hesperos Human-on-a-Chip® system to observe what happened when terfenadine was converted to fexofenadine. By doing so, the researchers were able to determine the driver of the pharmacodynamic (PD) effect and develop a mathematical model to predict the effect of terfenadine in preclinical species. This is the first time an in vitro human-on-a-chip system has been shown to predict in vivo outcomes, which could be used to predict clinical trial outcomes in the future.

“The ability to examine PKPD relationships in vitro would enable us to understand compound behavior prior to in vivo testing, offering significant cost and time savings,” said Dr. Michael Shuler, President and CEO of Hesperos, Inc. and Professor Emeritus at Cornell University. “We are excited about the potential of this technology to help us ensure that potential new drug candidates have a higher probability of success during the clinical trial process.”

Understanding the inter-relationship between pharmacokinetics (PK), the drug’s time course for absorption, distribution, metabolism and excretion, and PD, the biological effect of a drug, is crucial in drug discovery and development. Scientists have learned that the maximum drug effect is not always driven by the peak drug concentration. In some cases, time is a critical factor influencing drug effect, but often this concentration-effect-time relationship only comes to light during the advanced stages of the preclinical program. In addition, often the data cannot be reliably extrapolated to humans.
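To make those PK definitions concrete, here is a minimal one-compartment PK sketch in Python. It is my own illustration, not Hesperos’ or AstraZeneca’s model, and every parameter value is invented; it simply shows how a concentration-time curve arises and why peak effect need not coincide with peak concentration,

```python
import numpy as np

# Minimal one-compartment PK sketch: a single dose with first-order
# absorption (ka) and elimination (ke). All values are hypothetical,
# chosen only to illustrate a concentration-time curve.
dose_mg = 60.0      # administered dose (mg)
V_d = 40.0          # volume of distribution (L)
ka, ke = 1.0, 0.3   # absorption / elimination rate constants (1/h)

t = np.linspace(0, 24, 97)  # hours
# Bateman equation for a single first-order absorbed dose
C = (dose_mg / V_d) * (ka / (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

print(f"Peak concentration {C.max():.2f} mg/L at ~{t[np.argmax(C)]:.1f} h")
# A PD model would then map C(t) to an effect, e.g. the Emax model
# E = Emax * C / (EC50 + C), making the time dependence of effect explicit.
```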

“It is costly and time consuming to discover that potential drug candidates may have poor therapeutic qualities preventing their onward progression,” said James Hickman, Chief Scientist at Hesperos and Professor at the University of Central Florida. “Being able to define this during early drug discovery will be a valuable contribution to the optimization of potential new drug candidates.”

As demonstrated with the terfenadine experiment, the PKPD modelling approach was critical for understanding both the flux of compound between compartments as well as the resulting PD response in the context of dynamic exposure profiles of both parent and metabolite, as indicated by Dr. Shuler.

In order to test the viability of their system in a real-world drug discovery setting, the Hesperos team collaborated with scientists at AstraZeneca to test one of their failed small molecules, known to have a CV [cardiovascular] risk.

One of the main measurements used to assess the electrical properties of the heart is the QT interval, which approximates the time taken from when the cardiac ventricles start to contract to when they finish relaxing. Prolongation of the QT interval on the electrocardiogram can lead to a fatal arrhythmia known as Torsade de Pointes. Consequently, it is a mandatory requirement prior to first-in-human administration of potential new drug candidates that their ability to inhibit the hERG channel (a biomarker for QT prolongation) is investigated.
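As an aside (this correction is not mentioned in the news release but is standard cardiology practice), the raw QT interval shortens as heart rate rises, so reported values are usually rate-corrected. A common textbook correction is Bazett’s formula,

```latex
QT_c = \frac{QT}{\sqrt{RR}}
```

where RR is the time between successive heartbeats in seconds.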

In the case of the AstraZeneca molecule, the molecule was assessed for hERG inhibition early on, and it was concluded to have a low potential to cause in vivo QT prolongation up to 100 μM. In later pre-clinical testing, the QT interval increased by 22% at a concentration of just 3 μM. Subsequent investigations found that a major metabolite was responsible. Hesperos was able to detect a clear PD effect at concentrations above 3 μM and worked to determine the mechanism of toxicity of the molecule.

The ability of these systems to assess cardiac function non-invasively in the presence of both parent molecule and metabolite over time, using multiplexed and repeat drug dosing regimes, provides an opportunity to run long-term studies for chronic administration of drugs to study their potential toxic effects.

Hesperos, Inc. is the first company spun out from the Tissue Chip Program at NCATS (National Center for Advancing Translational Sciences), which was established in 2011 to address the long timelines, steep costs and high failure rates associated with the drug development process. Hesperos currently is funded through NCATS’ Small Business Innovation Research program to undertake these studies and make tissue chip technology available through a service-based company.

“The application of tissue chip technology in drug testing can lead to advances in predicting the potential effects of candidate medicines in people,” said Danilo Tagle, Ph.D., associate director for special initiatives at NCATS.

###

About Hesperos
Hesperos, Inc. is a leader in efforts to characterize an individual’s biology with human-on-a-chip microfluidic systems. Founders Michael L. Shuler and James J. Hickman have been at the forefront of every major scientific discovery in this realm, from individual organ-on-a-chip constructs to fully functional, interconnected multi-organ systems. With a mission to revolutionize toxicology testing as well as efficacy evaluation for drug discovery, the company has created pumpless platforms with serum-free cellular media that allow multi-organ system communication and integrated computational PKPD modeling of live physiological responses, utilizing functional readouts from neurons, cardiac and muscle cells, barrier tissues and neuromuscular junctions, as well as responses from the liver and pancreas. Created from human stem cells, the fully human systems are the first in vitro solutions that accurately predict in vivo functions without the use of animal models, as featured in Science. More information is available at http://www.hesperosinc.com

Years ago I went to a congress focused on alternatives to animal testing (August 22, 2014 posting) and saw a video of heart cells in a petri dish (in vitro) beating in a heartlike rhythm. It was something like this,

ipscira, published on Oct 17, 2010: https://www.youtube.com/watch?v=BqzW9Jq-OVA

I found it amazing, as did the scientist who drew my attention to it. After all, it’s just a collection of heart cells. How do they start beating and keep time with each other?

Getting back to the latest research, here’s a link and a citation for the paper,

On the potential of in vitro organ-chip models to define temporal pharmacokinetic-pharmacodynamic relationships by Christopher W. McAleer, Amy Pointon, Christopher J. Long, Rocky L. Brighton, Benjamin D. Wilkin, L. Richard Bridges, Narasimhan Sriram, Kristin Fabre, Robin McDougall, Victorine P. Muse, Jerome T. Mettetal, Abhishek Srivastava, Dominic Williams, Mark T. Schnepper, Jeff L. Roles, Michael L. Shuler, James J. Hickman & Lorna Ewart. Scientific Reports volume 9, Article number: 9619 (2019) DOI: https://doi.org/10.1038/s41598-019-45656-4 Published: 03 July 2019

This paper is open access.

I happened to look at the paper and found good definitions of pharmacokinetics and pharmacodynamics. I know it’s not for everyone but if you’ve ever been curious about the difference (from the Introduction of On the potential of in vitro organ-chip models to define temporal pharmacokinetic-pharmacodynamic relationships),

Integrative pharmacology is a discipline that builds an understanding of the inter-relationship between pharmacokinetics (PK), the drug’s time course for absorption, distribution, metabolism and excretion and pharmacodynamics (PD), the biological effect of a drug. In drug discovery, this multi-variate approach guides medicinal chemists to modify structural properties of a drug molecule to improve its chance of becoming a medicine in a process known as “lead optimization”.

*More than one person and more than one company and more than one country claims pioneer status where ‘human-on-a-chip’ is concerned.

SIGGRAPH (Special Interest Group on Computer GRAPHics and Interactive Techniques) and their art gallery from Aug. 12 – 16, 2018 (making the world synthetic) in Vancouver (Canada)

While my main interest is the group’s temporary art gallery, I am providing a brief explanatory introduction and a few previews for SIGGRAPH 2018.

Introduction

For anyone unfamiliar with the Special Interest Group on Computer GRAPHics and Interactive Techniques (SIGGRAPH) and its conferences, from the SIGGRAPH Wikipedia entry (Note: Links have been removed),

Some highlights of the conference are its Animation Theater and Electronic Theater presentations, where recently created CG films are played. There is a large exhibition floor, where several hundred companies set up elaborate booths and compete for attention and recruits. Most of the companies are in the engineering, graphics, motion picture, or video game industries. There are also many booths for schools which specialize in computer graphics or interactivity.

Dozens of research papers are presented each year, and SIGGRAPH is widely considered the most prestigious forum for the publication of computer graphics research.[1] The recent paper acceptance rate for SIGGRAPH has been less than 26%.[2] The submitted papers are peer-reviewed in a single-blind process.[3] There has been some criticism about the preference of SIGGRAPH paper reviewers for novel results rather than useful incremental progress.[4][5] …

This is the third SIGGRAPH conference Vancouver has hosted; the others were in 2011 and 2014. The theme for the 2018 iteration is ‘Generations’; here’s more about it from an Aug. 2, 2018 article by Terry Flores for Variety,

While its focus is firmly forward thinking, SIGGRAPH 2018, the computer graphics, animation, virtual reality, games, digital art, mixed reality, and emerging technologies conference, is also tipping its hat to the past thanks to its theme this year: Generations. The conference runs Aug. 12-16 in Vancouver, B.C.

“In the literal people sense, pioneers in the computer graphics industry are standing shoulder to shoulder with researchers, practitioners and the future of the industry — young people — mentoring them, dabbling across multiple disciplines to innovate, relate, and grow,” says SIGGRAPH 2018 conference chair Roy C. Anthony, VP of creative development and operations at software and technology firm Ventuz. “This is really what SIGGRAPH has always been about. Generations really seemed like a very appropriate way of looking back and remembering where we all came from and how far we’ve come.”

SIGGRAPH 2018 has a number of treats in store for attendees, including the debut of Disney’s first VR film, the short “Cycles”; production sessions on the making of “Blade Runner 2049,” “Game of Thrones,” “Incredibles 2” and “Avengers: Infinity War”; as well as sneak peeks of Disney’s upcoming “Ralph Breaks the Internet: Wreck-It Ralph 2” and Laika’s “Missing Link.”

That list of ‘treats’ in the last paragraph makes the conference seem more like an iteration of a ‘comic-con’ than a technology conference.

Previews

I have four items about work that will be presented at SIGGRAPH 2018. First up, something about ‘redirected walking’ from a June 18, 2018 Association for Computing Machinery news release on EurekAlert,

CHICAGO–In the burgeoning world of virtual reality (VR) technology, it remains a challenge to provide users with a realistic perception of infinite space and natural walking capabilities in the virtual environment. A team of computer scientists has introduced a new approach to address this problem by leveraging a natural human phenomenon: eye blinks.

All humans are functionally blind for about 10 percent of the time under normal circumstances due to eye blinks and saccades, rapid movements of the eye between two points or objects. Eye blinks are a common and natural cause of so-called “change blindness,” the inability of humans to notice changes to visual scenes. Zeroing in on eye blinks and change blindness, the team has devised a novel computational system that effectively redirects the user in the virtual environment during these natural instances, all with undetectable camera movements to deliver orientation redirection.

“Previous RDW [redirected walking] techniques apply rotations continuously while the user is walking. But the amount of unnoticeable rotations is limited,” notes Eike Langbehn, lead author of the research and doctoral candidate at the University of Hamburg. “That’s why an orthogonal approach is needed–we add some additional rotations when the user is not focused on the visuals. When we learned that humans are functionally blind for some time due to blinks, we thought, ‘Why don’t we do the redirection during eye blinks?'”

Human eye blinks occur approximately 10 to 20 times per minute, roughly once every 3 to 6 seconds. Leveraging this window of opportunity–where humans are unable to detect major motion changes while in a virtual environment–the researchers devised an approach to synchronize a computer graphics rendering system with this visual process, and introduce useful motion changes in virtual scenes to enhance users’ overall VR experience.

The researchers’ experiments revealed that imperceptible camera rotations of 2 to 5 degrees and translations of 4 to 9 cm of the user’s viewpoint are possible during a blink without users even noticing. They tracked test participants’ eye blinks with an eye tracker in a VR head-mounted display. In a confirmatory study, the team validated that participants could not reliably detect in which of two eye blinks their viewpoint was manipulated while walking a VR curved path. The tests relied on unconscious natural eye blinking, but the researchers say redirection during blinking could also be triggered consciously. Since users can consciously blink multiple times a day without much effort, eye blinks provide great potential to be used as an intentional trigger in their approach.
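Here is a toy Python sketch of the blink-triggered redirection idea. It is my own illustration, not the authors’ code; the clamping thresholds are simply the imperceptibility figures quoted above,

```python
# Toy sketch of blink-triggered redirected walking (illustration only).
# During a detected blink, inject a small camera rotation/translation,
# clamped to the thresholds reported above (~2-5 degrees, ~4-9 cm).
MAX_ROT_DEG = 5.0
MAX_TRANS_M = 0.09

def redirect_during_blink(yaw_deg, desired_rot_deg, desired_trans_m, blinking):
    """Return (new_yaw_deg, extra_translation_m) for this frame."""
    if not blinking:
        return yaw_deg, 0.0  # no extra redirection while eyes are open
    rot = max(-MAX_ROT_DEG, min(MAX_ROT_DEG, desired_rot_deg))
    trans = max(-MAX_TRANS_M, min(MAX_TRANS_M, desired_trans_m))
    return yaw_deg + rot, trans

# Example: steering a user whose target heading is 40 degrees away;
# only 5 degrees of that can be applied during this one blink.
print(redirect_during_blink(0.0, 40.0, 0.02, blinking=True))  # (5.0, 0.02)
```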

The team will present their work at SIGGRAPH 2018, held 12-16 August in Vancouver, British Columbia. The annual conference and exhibition showcases the world’s leading professionals, academics, and creative minds at the forefront of computer graphics and interactive techniques.

“RDW is a big challenge since current techniques still need too much space to enable unlimited walking in VR,” notes Langbehn. “Our work might contribute to a reduction of space since we found out that unnoticeable rotations of up to five degrees are possible during blinks. This means we can improve the performance of RDW by approximately 50 percent.”

The team’s results could be used in combination with other VR research, such as novel steering algorithms, improved path prediction, and rotations during saccades, to name a few. Down the road, such techniques could some day enable consumer VR users to virtually walk beyond their living room.

Langbehn collaborated on the work with Frank Steinicke of the University of Hamburg, Markus Lappe of the University of Muenster, and Gregory F. Welch and Gerd Bruder, both of the University of Central Florida. For the full paper and video, visit the team’s project page.

###

About ACM, ACM SIGGRAPH, and SIGGRAPH 2018

ACM, the Association for Computing Machinery, is the world’s largest educational and scientific computing society, uniting educators, researchers, and professionals to inspire dialogue, share resources, and address the field’s challenges. ACM SIGGRAPH is a special interest group within ACM that serves as an interdisciplinary community for members in research, technology, and applications in computer graphics and interactive techniques. SIGGRAPH is the world’s leading annual interdisciplinary educational experience showcasing the latest in computer graphics and interactive techniques. SIGGRAPH 2018, marking the 45th annual conference hosted by ACM SIGGRAPH, will take place from 12-16 August at the Vancouver Convention Centre in Vancouver, B.C.

They have provided an image illustrating what they mean (I don’t find it especially informative),

Caption: The viewing behavior of a virtual reality user, including fixations (in green) and saccades (in red). A blink fully suppresses visual perception. Credit: Eike Langbehn

Next up (2), there’s Disney Corporation’s first virtual reality (VR) short, from a July 19, 2018  Association for Computing Machinery news release on EurekAlert,

Walt Disney Animation Studios will debut its first ever virtual reality short film at SIGGRAPH 2018, and the hope is viewers will walk away feeling as connected to the characters as they are to the VR technology involved in making the film.

Cycles, an experimental film directed by Jeff Gipson, centers around the true meaning of creating a home and the life it holds inside its walls. The idea for the film is personal, inspired by Gipson’s childhood spending time with his grandparents and creating memories in their home, and later, having to move them to an assisted living residence.

“Every house has a story unique to the people, the characters who live there,” says Gipson. “We wanted to create a story in this single place and be able to have the viewer witness life happening around them. It is an emotionally driven film, expressing the real ups and downs, the happy and sad moments in life.”

For Cycles, Gipson also drew from his past life as an architect, having spent several years designing skate parks, and from his passion for action sports, including freestyle BMX. In Los Angeles, where Gipson lives, it is not unusual to find homes with an empty swimming pool reserved for skating or freestyle biking. Part of the pitch for Cycles came out of Gipson’s experience riding in these empty pools and being curious about the homes attached to them, the families who lived there, and the memories they made.

SIGGRAPH attendees will have the opportunity to experience Cycles at the Immersive Pavilion, a new space for this year’s conference. The Pavilion is devoted exclusively to virtual, augmented, and mixed reality and will contain: the VR Theater, a storytelling extravaganza that is part of the Computer Animation Festival; the Vrcade, a space for VR, AR, and MR games or experiences; and the well-known Village, for showcasing large-scale projects. SIGGRAPH 2018, held 12-16 August in Vancouver, British Columbia, is an annual gathering that showcases the world’s leading professionals, academics, and creative minds at the forefront of computer graphics and interactive techniques.

The production team completed Cycles in four months with about 50 collaborators as part of a professional development program at the studio. A key difference in VR filmmaking is the need to get creative in translating a story to the VR “screen.” Pre-visualizing the narrative, for one, was a challenge. Rather than traditional storyboarding, Gipson and his team instead used a mix of Quill VR painting techniques and motion capture to “storyboard” Cycles, incorporating painters and artists to generate sculptures or 3D models of characters early on and draw scenes for the VR space. The creators also got innovative with the use of light and color saturation in scenes to help guide the user’s eyes during the film.

“What’s cool for VR is that we are really on the edge of trying to figure out what it is and how to tell stories in this new medium,” says Gipson. “In VR, you can look anywhere and really be transported to a different world, experience it from different angles, and see every detail. We want people watching to feel alive and feel emotion, and give them a true cinematic experience.”

This is Gipson’s VR directorial debut. He joined Walt Disney Animation Studios in 2013, serving as a lighting artist on Disney favorites like Frozen, Zootopia, and Moana. Of getting to direct the studio’s first VR short, he says, “VR is an amazing technology and a lot of times the technology is what is really celebrated. We hope more and more people begin to see the emotional weight of VR films, and with Cycles in particular, we hope they will feel the emotions we aimed to convey with our story.”

Apparently this is a still from the ‘short’,

Caption: Disney Animation Studios will present ‘Cycles’, its first virtual reality (VR) short, at ACM SIGGRAPH 2018. Credit: Disney Animation Studios

There’s also something (3) from Google, as described in a July 26, 2018 Association for Computing Machinery news release on EurekAlert,

Google has unveiled a new virtual reality (VR) immersive experience based on a novel system that captures and renders high-quality, realistic images from the real world using light fields. Created by a team of leading researchers at Google, Welcome to Light Fields is the tech giant’s splash into the nascent arena of light fields VR experiences, an exciting corner of VR video technology gaining traction for its promise to deliver extremely high-quality imagery and experiences in the virtual world.

Google released Welcome to Light Fields earlier this year as a free app on Steam VR for HTC Vive, Oculus Rift, and Windows Mixed Reality headsets. The creators will demonstrate the VR experience at SIGGRAPH 2018, in the Immersive Pavilion, a new space for this year’s conference. The Pavilion is devoted exclusively to virtual, augmented, and mixed reality and will contain: the Vrcade, a space for VR, AR, and MR games or experiences; the VR Theater, a storytelling extravaganza that is part of the Computer Animation Festival; and the well-known Village, for showcasing large-scale projects. SIGGRAPH 2018, held 12-16 August in Vancouver, British Columbia, is an annual gathering that showcases the world’s leading professionals, academics, and creative minds at the forefront of computer graphics and interactive techniques.

Destinations in Welcome to Light Fields include NASA’s Space Shuttle Discovery, delivering to viewers an astronaut’s view inside the flight deck, which has never been open to the public; the pristine teak and mahogany interiors of the Gamble House, an architectural treasure in Pasadena, CA; and the glorious St. Stephen’s Church in Granada Hills, CA, home to a stunning wall of more than 14,000 pieces of glimmering stained glass.

“I love that light fields in VR can teleport you to exotic places in the real world, and truly make you believe you are there,” says Ryan Overbeck, software engineer at Google who co-led the project. “To me, this is magic.”

To bring this experience to life, Overbeck worked with a team that included Paul Debevec, senior staff engineer at Google, who managed the project and led the hardware piece with engineers Xueming Yu, Jay Busch, and Graham Fyffe. With Overbeck, Daniel Erickson and Daniel Evangelakos focused on the software end. The researchers designed a comprehensive system for capturing and rendering high-quality, spherical light field still images from footage captured in the real world. They developed two easy-to-use light field camera rigs, based on the GoPro Hero4 action sports camera, that efficiently capture thousands of images on the surface of a sphere. Those images were then passed through a cloud-based light-field-processing pipeline.

Among other things, explains Overbeck, “The processing pipeline uses computer vision to place the images in 3D and generate depth maps, and we use a modified version of our vp9 video codec to compress the light field data down to a manageable size.” To render a light field dataset, he notes, the team used a rendering algorithm that blends between the thousands of light field images in real-time.
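For the curious, here is a crude Python sketch of the view-blending idea. It is my own simplification under stated assumptions; the actual Google renderer is far more sophisticated and reconstructs individual rays using the depth maps mentioned above rather than blending whole images,

```python
import numpy as np

def blend_nearest_views(target_dir, cam_dirs, images, k=4, eps=1e-6):
    """Blend the k captured views nearest to a target viewing direction.

    target_dir: (3,) unit vector; cam_dirs: (N, 3) unit vectors on the
    capture sphere; images: (N, H, W, 3) array of captured photos.
    """
    ang = np.arccos(np.clip(cam_dirs @ target_dir, -1.0, 1.0))  # angular distances
    nearest = np.argsort(ang)[:k]                               # k closest cameras
    w = 1.0 / (ang[nearest] + eps)                              # closer views dominate
    w /= w.sum()
    return np.tensordot(w, images[nearest], axes=1)             # weighted average image
```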

The team relied on Google’s talented pool of engineers in computer vision, graphics, video compression, and machine learning to overcome the unique challenges posed in light fields technology. They also collaborated closely with the WebM team (who make the vp9 video codec) to develop the high-quality light field compression format incorporated into their system, and leaned heavily on the expertise of the Jump VR team to help pose the images and generate depth maps. (Jump is Google’s professional VR system for achieving 3D-360 video production at scale.)

Indeed, with Welcome to Light Fields, the Google team is demonstrating the potential and promise of light field VR technology, showcasing the technology’s ability to provide a truly immersive experience with a level of unmatched realism. Though light field technology has been researched and explored in computer graphics for more than 30 years, practical systems for actually delivering high-quality light field experiences have not yet been possible.

Part of the team’s motivation behind creating this VR light field experience is to invigorate the nascent field.

“Welcome to Light Fields proves that it is now possible to make a compelling light field VR viewer that runs on consumer-grade hardware, and we hope that this knowledge will encourage others to get involved with building light field technology and media,” says Overbeck. “We understand that in order to eventually make compelling consumer products based on light fields, we need a thriving light field ecosystem. We need open light field codecs, we need artists creating beautiful light field imagery, and we need people using VR in order to engage with light fields.”

I don’t really understand why this image, which looks like something that belongs in advertising material, would be chosen to accompany a news release on a science-based distribution outlet,

Caption: A team of leading researchers at Google will unveil the new immersive virtual reality (VR) experience “Welcome to Light Fields” at ACM SIGGRAPH 2018. Credit: Image courtesy of Google/Overbeck

Finally (4), ‘synthesizing realistic sounds’ is announced in an Aug. 6, 2018 Stanford University (US) news release (also on EurekAlert) by Taylor Kubota,

Advances in computer-generated imagery have brought vivid, realistic animations to life, but the sounds associated with what we see simulated on screen, such as two objects colliding, are often recordings. Now researchers at Stanford University have developed a system that automatically renders accurate sounds for a wide variety of animated phenomena.

“There’s been a Holy Grail in computing of being able to simulate reality for humans. We can animate scenes and render them visually with physics and computer graphics, but, as for sounds, they are usually made up,” said Doug James, professor of computer science at Stanford University. “Currently there exists no way to generate realistic synchronized sounds for complex animated content, such as splashing water or colliding objects, automatically. This fills that void.”

The researchers will present their work on this sound synthesis system as part of ACM SIGGRAPH 2018, the leading conference on computer graphics and interactive techniques. In addition to enlivening movies and virtual reality worlds, this system could also help engineering companies prototype how products would sound before being physically produced, and hopefully encourage designs that are quieter and less irritating, the researchers said.

“I’ve spent years trying to solve partial differential equations – which govern how sound propagates – by hand,” said Jui-Hsien Wang, a graduate student in James’ lab and in the Institute for Computational and Mathematical Engineering (ICME), and lead author of the paper. “This is actually a place where you don’t just solve the equation but you can actually hear it once you’ve done it. That’s really exciting to me and it’s fun.”

Predicting sound

Informed by geometry and physical motion, the system figures out the vibrations of each object and how, like a loudspeaker, those vibrations excite sound waves. It computes the pressure waves cast off by rapidly moving and vibrating surfaces but does not replicate room acoustics. So, although it does not recreate the echoes in a grand cathedral, it can resolve detailed sounds from scenarios like a crashing cymbal, an upside-down bowl spinning to a stop, a glass filling up with water or a virtual character talking into a megaphone.

Most sounds associated with animations rely on pre-recorded clips, which require vast manual effort to synchronize with the action on-screen. These clips are also restricted to noises that exist – they can’t predict anything new. Other systems that produce and predict sounds as accurate as those of James and his team work only in special cases, or assume the geometry doesn’t deform very much. They also require a long pre-computation phase for each separate object.

“Ours is essentially just a render button with minimal pre-processing that treats all objects together in one acoustic wave simulation,” said Ante Qu, a graduate student in James’ lab and co-author of the paper.

The simulated sound that results from this method is highly detailed. It takes into account the sound waves produced by each object in an animation but also predicts how those waves bend, bounce or deaden based on their interactions with other objects and sound waves in the scene.
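The physics underneath is textbook acoustics (the release does not spell it out, but this is the standard model such wave solvers target): the pressure field p propagates according to the linear acoustic wave equation,

```latex
\frac{\partial^2 p}{\partial t^2} = c^2 \nabla^2 p
```

where c is the speed of sound, with the animated, vibrating surfaces supplying the moving boundary conditions that drive the simulation.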

Challenges ahead

In its current form, the group’s process takes a while to create the finished product. But, now that they have proven this technique’s potential, they can focus on performance optimizations, such as implementing their method on parallel GPU hardware, that should make it drastically faster.

And, even in its current state, the results are worth the wait.

“The first water sounds we generated with the system were among the best ones we had simulated – and water is a huge challenge in computer-generated sound,” said James. “We thought we might get a little improvement, but it is dramatically better than previous approaches even right out of the box. It was really striking.”

Although the group’s work has faithfully rendered sounds of various objects spinning, falling and banging into each other, more complex objects and interactions – like the reverberating tones of a Stradivarius violin – remain difficult to model realistically. That, the group said, will have to wait for a future solution.

Timothy Langlois of Adobe Research is a co-author of this paper. This research was funded by the National Science Foundation and Adobe Research. James is also a professor, by courtesy, of music and a member of Stanford Bio-X.

Researchers Timothy Langlois, Doug L. James, Ante Qu and Jui-Hsien Wang have created a video featuring highlights of animations with sounds synthesized using the Stanford researchers’ new system.

The researchers have also provided this image,

By computing pressure waves cast off by rapidly moving and vibrating surfaces – such as a cymbal – a new sound synthesis system developed by Stanford researchers can automatically render realistic sound for computer animations. (Image credit: Timothy Langlois, Doug L. James, Ante Qu and Jui-Hsien Wang)

It does seem like we’re synthesizing the world around us, eh?

The SIGGRAPH 2018 art gallery

Here’s what SIGGRAPH had to say about its 2018 art gallery in Vancouver and the themes for the conference and the gallery (from a May 18, 2018 Association for Computing Machinery news release on GlobeNewswire, also available on this 2018 SIGGRAPH webpage),

SIGGRAPH 2018, the world’s leading showcase of digital art created using computer graphics and interactive techniques, will present a special Art Gallery, entitled “Origins,” and historic Art Papers in Vancouver, B.C. The 45th SIGGRAPH conference will take place 12–16 August at the Vancouver Convention Centre. The programs will also honor the generations of creators that have come before through a special, 50th anniversary edition of the Leonardo journal. To register for the conference, visit S2018.SIGGRAPH.ORG.

The SIGGRAPH 2018 ART GALLERY is a curated exhibition, conceived as a dialogical space that enables the viewer to reflect on man’s diverse cultural values and rituals through contemporary creative practices. Building upon an exciting and eclectic selection of creative practices mediated through technologies that represent the sophistication of our times, the SIGGRAPH 2018 Art Gallery will embrace the narratives of the indigenous communities based near Vancouver and throughout Canada as a source of inspiration. The exhibition will feature contemporary media artworks, art pieces by indigenous communities, and other traces of technologically mediated ludic practices.

Andrés Burbano, SIGGRAPH 2018 Art Gallery chair and professor at Universidad de los Andes, said, “The Art Gallery aims to articulate myth and technology, science and art, the deep past and the computational present, and will coalesce around a theme of ‘Origins.’ Media and technological creative expressions will explore principles such as the origins of the cosmos, the origins of life, the origins of human presence, the origins of the occupation of territories in the Americas, and the origins of people living in the vast territories of the Arctic.”

He continued, “The venue [in Vancouver] hopes to rekindle the original spark that ignited the collaborative spirit of the SIGGRAPH community of engineers, scientists, and artists, who came together to create the very first conference in the early 1970s.”

Highlights from the 2018 Art Gallery include:

Transformation Mask (Canada) [Technology Based]
Shawn Hunt, independent; and Microsoft Garage: Andy Klein, Robert Butterworth, Jonathan Cobb, Jeremy Kersey, Stacey Mulcahy, Brendan O’Rourke, Brent Silk, and Julia Taylor-Hell, Microsoft Vancouver

TRANSFORMATION MASK is an interactive installation that features the Microsoft HoloLens. It utilizes electronics and mechanical engineering to express a physical and digital transformation. Participants are immersed in spatial sounds and holographic visuals.

Somnium (U.S.) [Science Based]
Marko Peljhan, Danny Bazo, and Karl Yerkes, University of California, Santa Barbara

Somnium is a cybernetic installation that provides visitors with the ability to sensorily, cognitively, and emotionally contemplate and experience exoplanetary discoveries, their macro and micro dimensions, and the potential for life in our Galaxy. Some might call it a “space telescope.”

Ernest Edmonds Retrospective – Art Systems 1968-2018 (United Kingdom) [History Based]
Ernest Edmonds, De Montfort University

Celebrating one of the pioneers of computer graphics-based art since the early 1970s, this Ernest Edmonds career retrospective will showcase snapshots of Edmonds’ work as it developed over the years. With one piece from each decade, the retrospective will also demonstrate how vital the Leonardo journal has been throughout the 50-year journey.

In addition to the works above, the Art Gallery will feature pieces from notable female artists Ozge Samanci, Ruth West, and Nicole L’Huillier. For more information about the Edmonds retrospective, read THIS POST ON THE ACM SIGGRAPH BLOG.

The SIGGRAPH 2018 ART PAPERS program is designed to feature research from artists, scientists, theorists, technologists, historians, and more in one of four categories: project description, theory/criticism, methods, or history. The chosen work was selected by an international jury of scholars, artists, and immersive technology developers.

To celebrate the 50th anniversary of LEONARDO (MIT Press), and 10 years of its annual SIGGRAPH issue, SIGGRAPH 2018 is pleased to announce a special anniversary edition of the journal, which will feature the 2018 art papers. For 50 years, Leonardo has been the definitive publication for artist-academics. To learn more about the relationship between SIGGRAPH and the journal, listen to THIS EPISODE OF THE SIGGRAPH SPOTLIGHT PODCAST.

“In order to encourage a wider range of topics, we introduced a new submission type, short papers. This enabled us to accept more content than in previous years. Additionally, for the first time, we will introduce sessions that integrate the Art Gallery artist talks with Art Papers talks, promoting richer connections between these two creative communities,” said Angus Forbes, SIGGRAPH 2018 Art Papers chair and professor at University of California, Santa Cruz.

Art Papers highlights include:

Alienating the Familiar with CGI: A Recipe for Making a Full CGI Art House Animated Feature [Long]
Alex Counsell and Paul Charisse, University of Portsmouth

This paper explores the process of making and funding an art house feature film using full CGI in a marketplace where this has never been attempted. It explores cutting-edge technology and production approaches, as well as routes to successful fundraising.

Augmented Fauna and Glass Mutations: A Dialogue Between Material and Technique in Glassblowing and 3D Printing [Long]
Tobias Klein, City University of Hong Kong

The two presented artworks, “Augmented Fauna” and “Glass Mutations,” were created during an artist residence at the PILCHUCK GLASS SCHOOL. They are examples of the qualities and methods established through a synthesis between digital workflows and traditional craft processes and thus formulate the notion of digital craftsmanship.

Inhabitat: An Imaginary Ecosystem in a Children’s Science Museum [Short]
Graham Wakefield, York University, and Haru Hyunkyung Ji, OCAD University

“Inhabitat” is a mixed reality artwork in which participants become part of an imaginary ecology through three simultaneous perspectives of scale and agency; three distinct ways to see with other eyes. This imaginary world was exhibited at a children’s science museum for five months, using an interactive projection-augmented sculpture, a large screen and speaker array, and a virtual reality head-mounted display.

What’s the what?

My father used to say that and I always assumed it meant summarize the high points, if you need to, and get to the point—fast. In that spirit, I am both fascinated and mildly appalled. The virtual, mixed, and augmented reality technologies, as well as the others being featured at SIGGRAPH 2018, are wondrous in many ways, but it seems we are coming ever closer to a world where we no longer interact with nature or other humans directly. (See my August 10, 2018 posting about the ‘extinction of experience’ for research that encourages more direct interaction with nature.) I realize that SIGGRAPH is intended as a primarily technical experience but I think a little more content questioning these technologies and their applications (social implications) might be in order. That’s often the artist’s role but I can’t see anything in the art gallery descriptions that hints at any sort of fundamental critique.

Faster diagnostics with nanoparticles and a magnetic phenomenon discovered 170 years ago

A Jan. 19, 2017 news item on ScienceDaily announces some new research from the University of Central Florida (UCF),

A UCF researcher has combined cutting-edge nanoscience with a magnetic phenomenon discovered more than 170 years ago to create a method for speedy medical tests.

The discovery, if commercialized, could lead to faster test results for HIV, Lyme disease, syphilis, rotavirus and other infectious conditions.

“I see no reason why a variation of this technique couldn’t be in every hospital throughout the world,” said Shawn Putnam, an assistant professor in the University of Central Florida’s College of Engineering & Computer Science.

A Jan. 19, 2017 UCF news release by Mark Schlueb, which originated the news item,  provides more technical detail,

At the core of the research recently published in the academic journal Small are nanoparticles – particles measured in billionths of a meter. Putnam’s team coated the nanoparticles with the antibody to BSA, or bovine serum albumin, which is commonly used as the basis of a variety of diagnostic tests.

By mixing the nanoparticles in a test solution – such as one used for a blood test – the BSA proteins preferentially bind with the antibodies that coat the nanoparticles, like a lock and key.

That reaction was already well known. But Putnam’s team came up with a novel way of measuring the quantity of proteins present. He used nanoparticles with an iron core and applied a magnetic field to the solution, causing the particles to align in a particular formation. As proteins bind to the antibody-coated particles, the rotation of the particles becomes sluggish, which is easy to detect with laser optics.

The interaction of a magnetic field and light is known as Faraday rotation, a principle discovered by scientist Michael Faraday in 1845. Putnam adapted it for biological use.
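For reference, the classical Faraday rotation relation (textbook physics, not specific to this paper) gives the angle through which the light’s polarization plane rotates,

```latex
\beta = V B d
```

where V is the material’s Verdet constant, B is the magnetic flux density along the propagation direction, and d is the path length. In the scheme described above, protein binding makes the nanoparticles’ rotation sluggish, which alters the optical signal the laser system reads out.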

“It’s an old theory, but no one has actually applied this aspect of it,” he said.

Other antigens and their unique antibodies could be substituted for the BSA protein used in the research, allowing medical tests for a wide array of infectious diseases.

The proof of concept shows the method could be used to produce biochemical immunology test results in as little as 15 minutes, compared to several hours for ELISA, or enzyme-linked immunosorbent assay, which is currently a standard approach for biomolecule detection.

Here’s a link to and a citation for the paper,

High-Throughput, Protein-Targeted Biomolecular Detection Using Frequency-Domain Faraday Rotation Spectroscopy by Richard J. Murdock, Shawn A. Putnam, Soumen Das, Ankur Gupta, Elyse D. Z. Chase, and Sudipta Seal. Small DOI: 10.1002/smll.201602862 Version of Record online: 16 JAN 2017

© 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim

This paper is behind a paywall.

Solar-powered clothing

This research comes from the University of Central Florida (US) and includes a pop culture reference to the movie “Back to the Future.”  From a Nov. 14, 2016 news item on phys.org,

Marty McFly’s self-lacing Nikes in Back to the Future Part II inspired a UCF scientist who has developed filaments that harvest and store the sun’s energy—and can be woven into textiles.

The breakthrough would essentially turn jackets and other clothing into wearable, solar-powered batteries that never need to be plugged in. It could one day revolutionize wearable technology, helping everyone from soldiers who now carry heavy loads of batteries to a texting-addicted teen who could charge his smartphone by simply slipping it in a pocket.

A Nov. 14, 2016 University of Central Florida news release (also on EurekAlert) by Mark Schlueb, which originated the news item, expands on the theme,

“That movie was the motivation,” Associate Professor Jayan Thomas, a nanotechnology scientist at the University of Central Florida’s NanoScience Technology Center, said of the film released in 1989. “If you can develop self-charging clothes or textiles, you can realize those cinematic fantasies – that’s the cool thing.”

Thomas already has been lauded for earlier ground-breaking research. Last year, he received an R&D 100 Award – given to the top inventions of the year worldwide – for his development of a cable that can not only transmit energy like a normal cable but also store energy like a battery. He’s also working on semi-transparent solar cells that can be applied to windows, allowing some light to pass through while also harvesting solar power.

His new work builds on that research.

“The idea came to me: We make energy-storage devices and we make solar cells in the labs. Why not combine these two devices together?” Thomas said.

Thomas, who holds joint appointments in the College of Optics & Photonics and the Department of Materials Science & Engineering, set out to do just that.

Taking it further, he envisioned technology that could enable wearable tech. His research team developed filaments in the form of copper ribbons that are thin, flexible and lightweight. The ribbons have a solar cell on one side and energy-storing layers on the other.

Though more comfortable with advanced nanotechnology, Thomas and his team then bought a small, tabletop loom. After another UCF scientist taught them to use it, they wove the ribbons into a square of yarn.

The proof-of-concept shows that the filaments could be laced throughout jackets or other outerwear to harvest and store energy to power phones, personal health sensors and other tech gadgets. It’s an advancement that overcomes the main shortcoming of solar cells: the energy they produce must flow into the power grid or be stored in a battery that limits their portability.

“A major application could be with our military,” Thomas said. “When you think about our soldiers in Iraq or Afghanistan, they’re walking in the sun. Some of them are carrying more than 30 pounds of batteries on their bodies. It is hard for the military to deliver batteries to these soldiers in this hostile environment. A garment like this can harvest and store energy at the same time if sunlight is available.”

There are a host of other potential uses, including electric cars that could generate and store energy whenever they’re in the sun.

“That’s the future. What we’ve done is demonstrate that it can be made,” Thomas said. “It’s going to be very useful for the general public and the military and many other applications.”


Caption: The proof-of-concept shows that the filaments could be laced throughout jackets or other outerwear to harvest and store energy to power phones, personal health sensors and other tech gadgets. It’s an advancement that overcomes the main shortcoming of solar cells: the energy they produce must flow into the power grid or be stored in a battery that limits their portability. Credit: UCF

Here’s a link to and a citation for the paper,

Wearable energy-smart ribbons for synchronous energy harvest and storage by Chao Li, Md. Monirul Islam, Julian Moore, Joseph Sleppy, Caleb Morrison, Konstantin Konstantinov, Shi Xue Dou, Chait Renduchintala, & Jayan Thomas. Nature Communications 7, Article number: 13319 (2016)  doi:10.1038/ncomms13319 Published online: 11 November 2016

This paper is open access.

Dexter Johnson in a Nov. 15, 2016 posting on his blog Nanoclast on the IEEE (Institute of Electrical and Electronics Engineers) provides context for this research and, in this excerpt, more insight from the researcher,

In a telephone interview with IEEE Spectrum, Thomas did concede that at this point, the supercapacitor was not capable of storing enough energy to replace the batteries entirely, but could be used to make a hybrid battery that would certainly reduce the load a soldier carries.

Thomas added: “By combining a few sets of ribbons (2-3 ribbons) in parallel and connecting these sets (3-4) in a series, it’s possible to provide enough power to operate a radio for 10 minutes. …
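A back-of-envelope Python sketch of the series/parallel arithmetic behind that claim (all per-ribbon values are hypothetical; the quote gives none),

```python
# Parallel ribbons add capacity; series-connected sets add voltage.
ribbon_voltage = 1.2        # volts per ribbon (hypothetical)
ribbon_capacity_mah = 30.0  # storage per ribbon (hypothetical)

parallel_per_set = 3  # "a few sets of ribbons (2-3 ribbons) in parallel"
sets_in_series = 4    # "connecting these sets (3-4) in a series"

pack_voltage = ribbon_voltage * sets_in_series
pack_capacity = ribbon_capacity_mah * parallel_per_set
print(f"{pack_voltage:.1f} V, {pack_capacity:.0f} mAh")  # -> 4.8 V, 90 mAh
```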

For anyone interested in knowing more about how this research fits into the field of textiles that harvest energy, I recommend reading Dexter’s piece.

“Breaking Me Softly” at the nanoscale

“Breaking Me Softly” sounds like a song title but in this case the phrase has been coined to describe a new technique for controlling materials at the nanoscale, according to a June 6, 2016 news item on ScienceDaily,

A finding by a University of Central Florida researcher that unlocks a means of controlling materials at the nanoscale and opens the door to a new generation of manufacturing is featured online in the journal Nature.

Using a pair of pliers in each hand and gradually pulling taut a piece of glass fiber coated in plastic, associate professor Ayman Abouraddy found that something unexpected and never before documented occurred — the inner fiber fragmented in an orderly fashion.

“What we expected to see happen is NOT what happened,” he said. “While we thought the core material would snap into two large pieces, instead it broke into many equal-sized pieces.”

He referred to the technique in the Nature article title as “Breaking Me Softly.”

A June 6, 2016 University of Central Florida (UCF) news release (also on EurekAlert) by Barbara Abney, which originated the news item, expands on the theme,

The process of pulling fibers to force the realignment of the molecules that hold them together, known as cold drawing, has been the standard for mass production of flexible fibers like plastic and nylon for most of the last century.

Abouraddy and his team have shown that the process may also be applicable to multi-layered materials, a finding that could lead to the manufacturing of a new generation of materials with futuristic attributes.

“Advanced fibers are going to be pursuing the limits of anything a single material can endure today,” Abouraddy said.

For example, packaging together materials with optical and mechanical properties, along with sensors that could monitor such vital signs as blood pressure and heart rate, would make it possible to create clothing capable of transmitting vital data to a doctor’s office via the Internet.

The ability to control breakage in a material is critical to developing computerized processes for potential manufacturing, said Yuanli Bai, a fracture mechanics specialist in UCF’s College of Engineering and Computer Science.

Abouraddy contacted Bai, who is a co-author on the paper, about three years ago and asked him to analyze the test results on a wide variety of materials, including silicon, silk, gold and even ice.

He also contacted Robert S. Hoy, a University of South Florida physicist who specializes in the properties of materials like glass and plastic, for a better understanding of what he found.

Hoy said he had never seen the phenomenon Abouraddy was describing, but that it made great sense in retrospect.

The research takes what has traditionally been a problem in materials manufacturing and turns it into an asset, Hoy said.

“Dr. Abouraddy has found a new application of necking” –  a process that occurs when cold drawing causes non-uniform strain in a material, Hoy said.  “Usually you try to prevent necking, but he exploited it to do something potentially groundbreaking.”

The necking phenomenon was discovered decades ago at DuPont and ushered in the age of textiles and garments made of synthetic fibers.

Abouraddy said that cold-drawing is what makes synthetic fibers like nylon and polyester useful. While those fibers are initially brittle, once cold-drawn, the fibers toughen up and become useful in everyday commodities.

Only recently have fibers made of multiple materials become possible, he said. That research will be the centerpiece of a $317 million U.S. Department of Defense program focused on smart fibers that Abouraddy and UCF will assist with. The Revolutionary Fibers and Textiles Manufacturing Innovation Institute (RFT-MII), led by the Massachusetts Institute of Technology, will incorporate research findings published in the Nature paper, Abouraddy said.

The implications for manufacturing of the smart materials of the future are vast.

By controlling the mechanical force used to pull the fiber, and therefore the breakage patterns, materials can be developed with customized properties, allowing them to interact with each other and with external forces such as the sun (for harvesting energy) and the Internet in customizable ways.

A co-author on the paper, Ali P. Gordon, an associate professor in the Department of Mechanical & Aerospace Engineering and director of UCF’s Mechanics of Materials Research Group, said that the finding is significant because it shows that, by carefully controlling the loading condition imparted to the fiber, materials can be developed with tailored performance attributes.

“Processing-structure-property relationships need to be strategically characterized for complex material systems. By combining experiments, microscopy, and computational mechanics, the physical mechanisms of the fragmentation process were more deeply understood,” Gordon said.

Abouraddy teamed up with seven UCF scientists from the College of Optics & Photonics and the College of Engineering & Computer Science (CECS) to write the paper.   Additional authors include one researcher each from the Massachusetts Institute of Technology, Nanyang Technological University in Singapore and the University of South Florida.

Here’s a link to and a citation for the paper,

Controlled fragmentation of multimaterial fibres and films via polymer cold-drawing by Soroush Shabahang, Guangming Tao, Joshua J. Kaufman, Yangyang Qiao, Lei Wei, Thomas Bouchenot, Ali P. Gordon, Yoel Fink, Yuanli Bai, Robert S. Hoy & Ayman F. Abouraddy. Nature (2016) doi:10.1038/nature17980 Published online  06 June 2016

This paper is behind a paywall.

$1.4B for US National Nanotechnology Initiative (NNI) in 2017 budget

According to an April 1, 2016 news item on Nanowerk, the US National Nanotechnology Initiative (NNI) has released its 2017 budget supplement,

The President’s Budget for Fiscal Year 2017 provides $1.4 billion for the National Nanotechnology Initiative (NNI), affirming the important role that nanotechnology continues to play in the Administration’s innovation agenda. Cumulatively totaling nearly $24 billion since the inception of the NNI in 2001, the President’s 2017 Budget supports nanoscale science, engineering, and technology R&D at 11 agencies.

Another 9 agencies have nanotechnology-related mission interests or regulatory responsibilities.

An April 1, 2016 NNI news release, which originated the news item, affirms the Obama administration’s commitment to the NNI and notes the supplement serves as an annual report amongst other functions,

Throughout its two terms, the Obama Administration has maintained strong fiscal support for the NNI and has implemented new programs and activities to engage the broader nanotechnology community to support the NNI’s vision that the ability to understand and control matter at the nanoscale will lead to new innovations that will improve our quality of life and benefit society.

This Budget Supplement documents progress of these participating agencies in addressing the goals and objectives of the NNI. It also serves as the Annual Report for the NNI called for under the provisions of the 21st Century Nanotechnology Research and Development Act of 2003 (Public Law 108-153, 15 USC §7501). The report also addresses the requirement for Department of Defense reporting on its nanotechnology investments, per 10 USC §2358.

For additional details and to view the full document, visit www.nano.gov/2017BudgetSupplement.

I don’t seem to have posted about the 2016 NNI budget allotment, but 2017’s $1.4B represents a drop of $100M from 2015’s $1.5B allotment.

The 2017 NNI budget supplement describes the NNI’s main focus,

Over the past year, the NNI participating agencies, the White House Office of Science and Technology Policy (OSTP), and the National Nanotechnology Coordination Office (NNCO) have been charting the future directions of the NNI, including putting greater focus on promoting commercialization and increasing education and outreach efforts to the broader nanotechnology community. As part of this effort, and in keeping with recommendations from the 2014 review of the NNI by the President’s Council of Advisors for Science and Technology, the NNI has been working to establish Nanotechnology-Inspired Grand Challenges, ambitious but achievable goals that will harness nanotechnology to solve National or global problems and that have the potential to capture the public’s imagination. Based upon inputs from NNI agencies and the broader community, the first Nanotechnology-Inspired Grand Challenge (for future computing) was announced by OSTP on October 20, 2015, calling for a collaborative effort to “create a new type of computer that can proactively interpret and learn from data, solve unfamiliar problems using what it has learned, and operate with the energy efficiency of the human brain.” This Grand Challenge has generated broad interest within the nanotechnology community—not only NNI agencies, but also industry, technical societies, and private foundations—and planning is underway to address how the agencies and the community will work together to achieve this goal. Topics for additional Nanotechnology-Inspired Grand Challenges are under review.

Interestingly, it also offers an explanation of the images on its cover (Note: Links have been removed),

[Image: cover of the 2017 NNI budget supplement]

About the cover

Each year’s National Nanotechnology Initiative Supplement to the President’s Budget features cover images illustrating recent developments in nanotechnology stemming from NNI activities that have the potential to make major contributions to National priorities. The text below explains the significance of each of the featured images on this year’s cover.

[Image: front cover close-up]

Front cover featured images (above): Images illustrating three novel nanomedicine applications. Center: microneedle array for glucose-responsive insulin delivery imaged using fluorescence microscopy. This “smart insulin patch” is based on painless microneedles loaded with hypoxia-sensitive vesicles ~100 nm in diameter that release insulin in response to high glucose levels. Dr. Zhen Gu and colleagues at the University of North Carolina (UNC) at Chapel Hill and North Carolina State University have demonstrated that this patch effectively regulates the blood glucose of type 1 diabetic mice with faster response than current pH-sensitive formulations. The inset image on the lower right shows the structure of the nanovesicles; each microneedle contains more than 100 million of these vesicles. The research was supported by the American Diabetes Association, the State of North Carolina, the National Institutes of Health (NIH), and the National Science Foundation (NSF). Left: colorized rendering of a candidate universal flu vaccine nanoparticle. The vaccine molecule, developed at the NIH Vaccine Research Center, displays only the conserved part of the viral spike and stimulates the production of antibodies to fight against the ever-changing flu virus. The vaccine is engineered from a ~13 nm ferritin core (blue) combined with a 7 nm influenza antigen (green). Image credit: NIH National Institute of Allergy and Infectious Diseases (NIAID). Right: colorized scanning electron micrograph of Ebola virus particles on an infected VERO E6 cell. Blue represents individual Ebola virus particles. The image was produced by John Bernbaum and Jiro Wada at NIAID. When the Ebola outbreak struck in 2014, the Food and Drug Administration authorized emergency use of lateral flow immunoassays for Ebola detection that use gold nanoparticles for visual interpretation of the tests.

[Image: back cover close-up]

Back cover featured images (above): Images illustrating examples of NNI educational outreach activities. Center: Comic from the NSF/NNI competition Generation Nano: Small Science Superheroes. Illustration by Amina Khan, NSF. Left of Center: Polymer Nanocone Array (biomimetic of antimicrobial insect surface) by Kyle Nowlin, UNC-Greensboro, winner from the first cycle of the NNI’s student image contest, EnvisioNano. Right of Center: Gelatin Nanoparticles in Brain (nasal delivery of stroke medication to the brain) by Elizabeth Sawicki, University of Illinois at Urbana-Champaign, winner from the second cycle of EnvisioNano. Outside right: still photo from the video Chlorination-less (water treatment method using reusable nanodiamond powder) by Abelardo Colon and Jennifer Gill, University of Puerto Rico at Rio Piedras, the winning video from the NNI’s Student Video Contest. Outside left: Society of Emerging NanoTechnologies (SENT) student group at the University of Central Florida, one of the initial nodes in the developing U.S. Nano and Emerging Technologies Student Network; photo by Alexis Vilaboy.

$5.2M in nanotechnology grants from the US Department of Agriculture (USDA)

A March 30, 2016 news item on Nanowerk announces the 2016 nanotechnology grants from the US Dept. of Agriculture (USDA),

Agriculture Secretary Tom Vilsack today [March 30, 2016] announced an investment of more than $5.2 million to support nanotechnology research at 11 universities. The universities will research ways nanotechnology can be used to improve food safety, enhance renewable fuels, increase crop yields, manage agricultural pests, and more. The awards were made through the Agriculture and Food Research Initiative (AFRI), the nation’s premier competitive, peer-reviewed grants program for fundamental and applied agricultural sciences.

A March 30, 2016 USDA news release provides more detail,

“In the seven years since the Agriculture and Food Research Initiative was established, the program has led to true innovations and ground-breaking discoveries in agriculture to combat childhood obesity, improve and sustain rural economic growth, address water availability issues, increase food production, find new sources of energy, mitigate the impacts of climate variability and enhance resiliency of our food systems, and ensure food safety. Nanoscale science, engineering, and technology are key pieces of our investment in innovation to ensure an adequate and safe food supply for a growing global population,” said Vilsack. “The President’s 2017 Budget calls for full funding of the Agriculture and Food Research Initiative so that USDA can continue to support important projects like these.”

Universities receiving funding include Auburn University in Auburn, Ala.; Connecticut Agricultural Experiment Station in New Haven, Conn.; University of Central Florida in Orlando, Fla.; University of Georgia in Athens, Ga.; Iowa State University in Ames, Iowa; University of Massachusetts in Amherst, Mass.; Mississippi State University in Starkville, Miss.; Lincoln University in Jefferson City, Mo.; Clemson University in Clemson, S.C.; Virginia Polytechnic Institute and State University in Blacksburg, Va.; and University of Wisconsin in Madison, Wis.

With this funding, Auburn University proposes to improve pathogen monitoring throughout the food supply chain by creating a user-friendly system that can detect multiple foodborne pathogens simultaneously, accurately, cost effectively, and rapidly. Mississippi State University will research ways nanochitosan can be used as a combined fire-retardant and antifungal wood treatment that is also environmentally safe. Experts in nanotechnology, molecular biology, vaccines and poultry diseases at the University of Wisconsin will work to develop nanoparticle-based poultry vaccines to prevent emerging poultry infections. USDA has a full list of projects and longer descriptions available online.

Past projects include a University of Georgia project developing bio-nanocomposite-based, disease-specific electrochemical sensors for detecting fungal pathogen-induced volatiles in selected crops; and a University of Massachusetts project creating a platform for pathogen detection in foods that is superior to current detection methods in terms of analytical time, sensitivity, and accuracy, using a novel, label-free, surface-enhanced Raman scattering (SERS) mapping technique.

The purpose of AFRI is to support research, education, and extension work by awarding grants that address key problems of national, regional, and multi-state importance in sustaining all components of food and agriculture. AFRI is the flagship competitive grant program administered by USDA’s National Institute of Food and Agriculture [NIFA]. Established under the 2008 Farm Bill, AFRI supports work in six priority areas: plant health and production and plant products; animal health and production and animal products; food safety, nutrition and health; bioenergy, natural resources and environment; agriculture systems and technology; and agriculture economics and rural communities. Since AFRI’s creation, NIFA has awarded more than $89 million to solve challenges related to plant health and production; $22 million of this has been dedicated to nanotechnology research. The President’s 2017 budget request proposes to fully fund AFRI for $700 million; this amount is the full funding level authorized by Congress when it established AFRI in the 2008 Farm Bill.

Each day, the work of USDA scientists and researchers touches the lives of all Americans: from the farm field to the kitchen table and from the air we breathe to the energy that powers our country. USDA science is on the cutting edge, helping to protect, secure, and improve our food, agricultural and natural resources systems. For more than 100 years, USDA research has developed and transferred solutions to agricultural problems, supporting America’s farmers and ranchers in their work to produce a safe and abundant food supply. This work has helped feed the nation and sustain an agricultural trade surplus since the 1960s. Since 2009, USDA has invested $4.32 billion in research and development grants. Studies have shown that every dollar invested in agricultural research now returns over $20 to our economy.

Since 2009, NIFA has invested in and advanced innovative and transformative initiatives to solve societal challenges and ensure the long-term viability of agriculture. NIFA’s integrated research, education, and extension programs, supporting the best and brightest scientists and extension personnel, have resulted in user-inspired, groundbreaking discoveries that are combating childhood obesity, improving and sustaining rural economic growth, addressing water availability issues, increasing food production, finding new sources of energy, mitigating climate variability, and ensuring food safety.

Some Baba Brinkman rap videos for Christmas

It’s about time to catch up with Canadian rapper, Baba Brinkman who has made an industry of rapping about science issues (mostly). Here’s a brief rundown of some of his latest ventures.

He was in Paris for the climate talks (also known as World Climate Change Conference 2015 [COP21]) and produced this ‘live’ rap on Dec. 10, 2015 for the press conference on “Moral Obligation – Scientific Imperative” for Climate Matters,

The piece is part of his forthcoming album and show “The Rap Guide to Climate Chaos.”

On Dec. 18, 2015 Baba released a new music video with his take on religion and science (from a Dec. 18, 2015 posting on his blog),

The digital animation is by Steven Fahey, who is a full time animator for the Simpsons, and I’m completely blown away by the results he achieved. The video is about the evolution of religious instincts, and how the secular among us can make sense of beliefs we don’t share.

Here’s the ‘Religion evolves’ video,

A few days after Baba released his video, new research was published contradicting some of what he has in there (i.e., religion as a binding element for societies struggling to survive in ancient times). From a Dec. 21, 2015 University of Central Florida news release on EurekAlert (Note: A link has been removed),

Humans haven’t learned much in more than 2,000 years when it comes to religion and politics.

Religion has led to social tension and conflict, not just in today’s society, but dating back to 700 B.C., according to a new study published today in Current Anthropology.

University of Colorado anthropology Professor Arthur A. Joyce and University of Central Florida Associate Professor Sarah Barber found evidence in several Mexican archeological sites that contradict the long-held belief that religion acted to unite early state societies. It often had the opposite effect, the study says.

“It doesn’t matter if we today don’t share particular religious beliefs, but when people in the past acted on their beliefs, those actions could have real, material consequences,” Barber said about the team’s findings. “It really behooves us to acknowledge religion when considering political processes.”

Sounds like sage advice in today’s world that has multiple examples of politics and religion intersecting and resulting in conflict.

The team published its findings “Ensoulment, Entrapment, and Political Centralization: A Comparative Study of Religion and Politics in Later Formative Oaxaca,” after spending several years conducting field research in the lower Río Verde valley of Oaxaca, Mexico’s Pacific coastal lowlands. They compared their results with data from the highland Valley of Oaxaca.

Their study viewed archaeological evidence from 700 B.C. to A.D. 250, a period identified as a time of the emergence of states in the region. In the lower Verde, religious rituals involving offerings and the burial of people in cemeteries at smaller communities created strong ties to the local community that impeded the creation of state institutions.

And in the Valley of Oaxaca, elites became central to mediating between their communities and the gods, which eventually triggered conflict with traditional community leaders. It culminated in the emergence of a regional state with its capital at the hilltop city of Monte Albán.

“In both the Valley of Oaxaca and the Lower Río Verde Valley, religion was important in the formation and history of early cities and states, but in vastly different ways,” said Joyce, lead author on the study. “Given the role of religion in social life and politics today, that shouldn’t be too surprising.”

The conflict in the lower Río Verde valley is evident in the rapid rise and fall of its state institutions. At Río Viejo, the capital of the lower Verde state, people had built massive temples by A.D. 100. Yet these impressive, labor-intensive buildings, along with many towns throughout the valley, were abandoned a little over a century later.

“An innovative aspect of our research is to view the burials of ancestors and ceremonial offerings in the lower Verde as essential to these ancient communities,” said Joyce, whose research focuses on both political life and ecology in ancient Mesoamerica. “Such a perspective is also more consistent with the worldviews of the Native Americans that lived there.”

Here’s a link to and a citation for the paper,

Ensoulment, Entrapment, and Political Centralization: A Comparative Study of Religion and Politics in Later Formative Oaxaca by Arthur A. Joyce and Sarah B. Barber. Current Anthropology Vol. 56, No. 6 (December 2015), pp. 819-847. DOI: 10.1086/683998

This paper is behind a paywall.

Getting back to Baba, having research suddenly appear that contradicts, or appears to contradict, your position is part of the scientific process. Making your work scientifically authentic adds pressure for a performer or artist; on the other hand, it also blesses that performer or artist with credibility. In any event, it’s well worth checking out Baba’s website and, for anyone who’s wanted to become a patron of the arts (or of a particular rapper), there’s this Dec. 3, 2015 posting on Baba’s blog about Patreon,

Every year or so since 2010 I’ve reached out to my friends and fans asking for help with a Kickstarter or IndieGogo campaign to fund my latest album or video project. Well now I’m hoping to put an end to that regular cycle with the help of Patreon, a site that lets fans become patrons with exclusive access to the artists they support and the work they help create.

Click here to visit Patreon.com/BabaBrinkman

Good luck Baba. (BTW, he currently lives in New York with his scientist wife and child, but he’s originally from the Canadian province of British Columbia.)

Corrections: Hybrid Photonic-Nanomechanical Force Microscopy uses vibration for better chemical analysis

*ETA  Nov. 4, 2015: I’m apologizing to anyone wishing to read this posting as it’s a bit of a mess. I deeply regret mishandling the situation. In future, I shall not be taking any corrections from individual researchers to materials such as news releases that have been issued by an institution. Whether or not the individual researchers are happy with how their contributions or how a colleague’s contributions or how their home institutions have been characterized is a matter for them and their home institutions.

The August 10, 2015 ORNL news release with all the correct details has been added to the end of this post.*

A researcher at the University of Central Florida (UCF) has developed a microscope that uses vibrations for better analysis of chemical composition. From an Aug. 10, 2015 news item on Nanowerk,

It’s a discovery that could have promising implications for fields as varied as biofuel production, solar energy, opto-electronic devices, pharmaceuticals and medical research.

“What we’re interested in is the tools that allow us to understand the world at a very small scale,” said UCF professor Laurene Tetard, formerly of the Oak Ridge National Laboratory. “Not just the shape of the object, but its mechanical properties, its composition and how it evolves in time.”

An Aug. 10, 2015 UCF news release (also on EurekAlert), which originated the news item, describes the limitations of atomic force microscopy and gives a few details about the hybrid microscope (Note: A link has been removed),

For more than two decades, scientists have used atomic force microscopy – a probe that acts like an ultra-sensitive needle on a record player – to determine the surface characteristics of samples at the microscopic scale. A “needle” that comes to an atoms-thin point traces a path over a sample, mapping the surface features at a sub-cellular level [nanoscale].

But that technology has its limits. It can determine the topographical characteristics of [a] sample, but it can’t identify its composition. And with the standard tools currently used for chemical mapping, anything smaller than roughly half a micron is going to look like a blurry blob, so researchers are out of luck if they want to study what’s happening at the molecular level.

A team led by Tetard has come up with a hybrid form of that technology that produces a much clearer chemical image. As described Aug. 10 in the journal Nature Nanotechnology, Hybrid Photonic-Nanomechanical Force Microscopy (HPFM) can discern a sample’s topographic characteristics together with the chemical properties at a much finer scale.

The HPFM method is able to identify materials based on differences in the vibration produced when they’re subjected to different wavelengths of light – essentially a material’s unique “fingerprint.”

“What we are developing is a completely new way of making that detection possible,” said Tetard, who has joint appointments to UCF’s Physics Department, Material Science and Engineering Department and the NanoScience Technology Center.

The researchers proved the effectiveness of HPFM while examining samples from an eastern cottonwood tree, a potential source of biofuel. By examining the plant samples at the nanoscale, the researchers for the first time were able to determine the molecular traits of both untreated and chemically processed cottonwood inside the plant cell walls.

The research team included Tetard; Ali Passian, R.H. Farahi and Brian Davison, all of Oak Ridge National Laboratory; and Thomas Thundat of the University of Alberta.

Long term, the results will help reveal better methods for producing the most biofuel from the cottonwood, a potential boon for industry. Likewise, the new method could be used to examine samples of myriad plants to determine whether they’re good candidates for biofuel production.

Potential uses of the technology go beyond the world of biofuel. Continued research may allow HPFM to be used as a probe so, for instance, it would be possible to study the effect of new treatments being developed to save plants such as citrus trees from bacterial diseases rapidly decimating the citrus industry, or study fundamental photonically-induced processes in complex systems such as in solar cell materials or opto-electronic devices.
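As an aside, the “roughly half a micron” figure quoted above for standard chemical mapping is consistent with the diffraction limit of conventional optical techniques such as confocal Raman microscopy. Here’s a back-of-the-envelope check (the 532 nm excitation wavelength and the numerical aperture of 0.5 are my illustrative assumptions, not numbers from the paper),

d ≈ λ / (2 × NA) = 532 nm / (2 × 0.5) ≈ 532 nm ≈ 0.5 µm

Features much smaller than that blur together optically, which is why a hybrid mechanical approach such as HPFM is needed to pull out chemical information at the nanoscale.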

Here’s a link to and a citation for the paper,

Opto-nanomechanical spectroscopic material characterization by L. Tetard, A. Passian, R. H. Farahi, T. Thundat, & B. H. Davison. Nature Nanotechnology (2015) doi:10.1038/nnano.2015.168. Published online 10 August 2015.

This paper is behind a paywall.
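For readers curious about how “fingerprint”-style identification works in principle, here’s a minimal toy sketch in Python (mine, not the authors’; the material names and response numbers are invented for illustration, and this is emphatically not the HPFM algorithm itself). It matches a response measured at several excitation wavelengths against a small library of reference responses,

```python
import numpy as np

# Toy illustration only -- not the HPFM method from the paper.
# Idea: a material responds differently to different excitation
# wavelengths; the pattern of responses acts as a "fingerprint".
# All names and numbers below are invented for demonstration.

library = {
    "cellulose": np.array([0.9, 0.1, 0.4, 0.7, 0.2]),
    "lignin":    np.array([0.2, 0.8, 0.6, 0.1, 0.5]),
    "xylan":     np.array([0.5, 0.5, 0.9, 0.3, 0.1]),
}

def cosine(a, b):
    # Scale-invariant similarity: compares the shape of two responses,
    # not their absolute magnitudes.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(measured):
    # Return the library material whose fingerprint best matches.
    return max(library, key=lambda name: cosine(measured, library[name]))

# A noisy measurement that should come out as cellulose.
measured = np.array([0.85, 0.15, 0.35, 0.75, 0.25])
print(identify(measured))  # prints: cellulose
```

Cosine similarity is the natural choice here because it compares the shape of a response rather than its overall strength, which can vary with signal intensity from one measurement to the next.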

*ETA August 27, 2015:

August 10, 2015 ORNL news release (Note: Funding information and a link to the paper [previously given] have been removed):

A microscope being developed at the Department of Energy’s Oak Ridge National Laboratory will allow scientists studying biological and synthetic materials to simultaneously observe chemical and physical properties on and beneath the surface.

The Hybrid Photonic Mode-Synthesizing Atomic Force Microscope is unique, according to principal investigator Ali Passian of ORNL’s Quantum Information System group. As a hybrid, the instrument, described in a paper published in Nature Nanotechnology, combines the disciplines of nanospectroscopy and nanomechanical microscopy.

“Our microscope offers a noninvasive rapid method to explore materials simultaneously for their chemical and physical properties,” Passian said. “It allows researchers to study the surface and subsurface of synthetic and biological samples, which is a capability that until now didn’t exist.”

ORNL’s instrument retains all of the advantages of an atomic force microscope while simultaneously offering the potential for discoveries through its high resolution and subsurface spectroscopic capabilities.

“The originality of the instrument and technique lies in its ability to provide information about a material’s chemical composition in the broad infrared spectrum while showing the morphology of a material’s interior and exterior with nanoscale – a billionth of a meter – resolution,” Passian said.

Researchers will be able to study samples ranging from engineered nanoparticles and nanostructures to naturally occurring biological polymers, tissues and plant cells.

The first application as part of DOE’s BioEnergy Science Center was in the examination of plant cell walls under several treatments to provide submicron characterization. The plant cell wall is a layered nanostructure of biopolymers such as cellulose. Scientists want to convert such biopolymers to free the useful sugars and release energy.

An earlier instrument, also invented at ORNL, provided imaging of poplar cell wall structures that yielded unprecedented topological information, advancing fundamental research in sustainable biofuels.

Because of this new instrument’s impressive capabilities, the research team envisions broad applications.

“An urgent need exists for new platforms that can tackle the challenges of subsurface and chemical characterization at the nanometer scale,” said co-author Rubye Farahi. “Hybrid approaches such as ours bring together multiple capabilities, in this case, spectroscopy and high-resolution microscopy.”

Looking inside, the hybrid microscope consists of a photonic module that is incorporated into a mode-synthesizing atomic force microscope. The modular aspect of the system makes it possible to accommodate various radiation sources such as tunable lasers and non-coherent monochromatic or polychromatic sources.

ETA2 August 27, 2015: I’ve received an email from one of the paper’s authors (R.H. Farahi of the US Oak Ridge National Laboratory [ORNL]) who claims some inaccuracies in this piece. The news release supplied by the University of Central Florida states that Dr. Tetard led the team and that is not so. According to Dr. Farahi, she had a postdoctoral position on the team, which she left two years ago. You might also get the impression that some of the work was performed at the University of Central Florida. That is not so, according to Dr. Farahi. As a courtesy, Dr. Tetard was retained as first author of the paper.

*Nov. 4, 2015: I suspect some of the misunderstanding was due to overeagerness and/or time pressures. Whoever wrote the news release may have made some assumptions. It’s very easy to make a mistake when talking to an ebullient scientist who can unintentionally lead you to believe something that’s not so. I worked in a high tech company and believed that there was some new software being developed, which turned out to be a case of high hopes. Luckily, I said something that triggered a rapid rebuttal of the fantasies. Getting back to this situation, other contributing factors could include the writer not having time to get the news release reviewed by the scientist, or the scientist skimming the release and missing a few bits due to time pressure.*

Silver nanoparticles and wormwood tackle plant-killing fungus

I’m back in Florida (US), so to speak. Florida was last mentioned here in an April 7, 2015 post about citrus canker and Zinkicide, a story about a disease which endangers citrus production in the US. This latest story concerns a possible solution to the problem of a fungus which attacks ornamental horticultural plants in Florida. From a May 5, 2015 news item on Azonano,

Deep in the soil, underneath more than 400 plant and tree species, lurks a lethal fungus threatening Florida’s $15 billion a year ornamental horticulture industry.

But University of Florida plant pathologist G. Shad Ali has found an economical and eco-friendly way to combat the plant destroyer known as phytophthora before it attacks the leaves and roots of everything from tomato plants to oak trees.

Ali and a team of researchers with UF’s Institute of Food and Agricultural Sciences, along with the University of Central Florida and the New Jersey Institute of Technology, have found that silver nanoparticles produced with an extract of wormwood, an herb with strong antioxidant properties, can stop several strains of the deadly fungus.

A May 4, 2015 University of Florida news release, which originated the news item, describes the work in more detail,

“The silver nanoparticles are extremely effective in eliminating the fungus in all stages of its life cycle,” Ali said. “In addition, it has no adverse effects on plant growth.” [emphasis mine]

The silver nanoparticles measure 5 to 100 nanometers in diameter – about one one-thousandth the width of a human hair. Once the nanoparticles are sprayed onto a plant, they shield it from fungus. Since the nanoparticles display multiple ways of inhibiting fungus growth, the chances of pathogens developing resistance to them are minimized, Ali said. Because of that, they may be used for controlling fungicide-resistant plant pathogens more effectively.

That’s good news for the horticulture industry. Worldwide crop losses due to phytophthora fungus diseases are estimated to be in the multibillion dollar range, with $6.7 billion in losses in potato crops due to late blight – the cause of the Irish Potato Famine in the mid-1800s when more than 1 million people died – and $1 billion to $2 billion in soybean loss.

Silver nanoparticles are being investigated for applications in various industries, including medicine, diagnostics, cosmetics and food processing. They already are used in wound dressings, food packaging and in consumer products such as textiles and footwear for fighting odor-causing microorganisms.

Other members of the UF research team were Mohammad Ali, a visiting doctoral student from the Quaid-i-Azam University, Islamabad, Pakistan; David Norman and Mary Brennan with the University of Florida’s Plant Pathology-Mid Florida Research and Education Center; Bosung Kim with the University of Central Florida’s chemistry department; Kevin Belfield with the College of Science and Liberal Arts at the New Jersey Institute of Technology and the University of Central Florida’s chemistry department.
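A quick sanity check on the “one one-thousandth the width of a human hair” comparison (the hair width is my assumption; a human hair is commonly quoted at roughly 100 µm, i.e., 100,000 nm across): 100 nm / 100,000 nm = 1/1000, so the comparison holds for the largest particles in the quoted 5 to 100 nm range; the smallest ones are closer to one twenty-thousandth of a hair’s width.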

Ali’s comment about silver nanoparticles not having any adverse effects on plant growth is in contrast to findings by Mark Wiesner and other researchers at Duke University (North Carolina, US). From my Feb. 28, 2013 posting (which also features a Finnish-Estonian study showing no adverse effects from silver nanoparticles in crustaceans),

… there’s a study from Duke University suggesting that silver nanoparticles in wastewater which is later put to agricultural use may cause problems. From the Feb. 27, 2013 news release on EurekAlert,

In experiments mimicking a natural environment, Duke University researchers have demonstrated that the silver nanoparticles used in many consumer products can have an adverse effect on plants and microorganisms.

The main route by which these particles enter the environment is as a by-product of water and sewage treatment plants. [emphasis mine] The nanoparticles are too small to be filtered out, so they and other materials end up in the resulting “sludge,” which is then spread on the land surface as a fertilizer.

The researchers found that one of the plants studied, a common annual grass known as Microstegium vimineum, had 32 percent less biomass in the mesocosms treated with the nanoparticles. Microbes were also affected by the nanoparticles, Colman [Benjamin Colman, a post-doctoral fellow in Duke’s biology department and a member of the Center for the Environmental Implications of Nanotechnology (CEINT)] said. One enzyme associated with helping microbes deal with external stresses was 52 percent less active, while another enzyme that helps regulate processes within the cell was 27 percent less active. The overall biomass of the microbes was also 35 percent lower, he said.

“Our field studies show adverse responses of plants and microorganisms following a single low dose of silver nanoparticles applied by a sewage biosolid,” Colman said. “An estimated 60 percent of the average 5.6 million tons of biosolids produced each year is applied to the land for various reasons, and this practice represents an important and understudied route of exposure of natural ecosystems to engineered nanoparticles.”

“Our results show that silver nanoparticles in the biosolids, added at concentrations that would be expected, caused ecosystem-level impacts,” Colman said. “Specifically, the nanoparticles led to an increase in nitrous oxide fluxes, changes in microbial community composition, biomass, and extracellular enzyme activity, as well as species-specific effects on the above-ground vegetation.”

Getting back to Florida, you can find Ali’s abstract here,

Inhibition of Phytophthora parasitica and P. capsici by silver nanoparticles synthesized using aqueous extract of Artemisia absinthium by Mohammad Ali, Bosung Kim, Kevin Belfield, David J. Norman, Mary Brennan, & Gul Shad Ali. Phytopathology http://dx.doi.org/10.1094/PHYTO-01-15-0006-R. Published online April 14, 2015.

This paper is behind a paywall.

For anyone who recognized that wormwood is a constituent of Absinthe, a liquor that is banned in many parts of the world due to possible side effects associated with the wormwood, here’s more about it from the Wormwood overview page on WebMD (Note: Links have been removed),

Wormwood is an herb. The above-ground plant parts and oil are used for medicine.

Wormwood is used in some alcoholic beverages. Vermouth, for example, is a wine beverage flavored with extracts of wormwood. Absinthe is another well-known alcoholic beverage made with wormwood. It is an emerald-green alcoholic drink that is prepared from wormwood oil, often along with other dried herbs such as anise and fennel. Absinthe was popularized by famous artists and writers such as Toulouse-Lautrec, Degas, Manet, van Gogh, Picasso, Hemingway, and Oscar Wilde. It is now banned in many countries, including the U.S. But it is still allowed in European Union countries as long as the thujone content is less than 35 mg/kg. Thujone is a potentially poisonous chemical found in wormwood. Distilling wormwood in alcohol increases the thujone concentration.

Returning to the matter at hand, as I’ve noted previously elsewhere, research into the toxic effects associated with nanomaterials (e.g. silver nanoparticles) is a complex process.