
Curcumin gel for burns and scalds

The curcumin debate continues (see my Jan. 26, 2017 posting titled “Curcumin: a scientific literature review concludes health benefits may be overstated” for more about that). In the meantime, scientists at the University of California at Los Angeles’ (UCLA) David Geffen School of Medicine found that curcumin gel might be effective as a treatment for burns. From a March 14, 2017 Pensoft Publishers news release on EurekAlert (Note: Links have been removed),

What is the effect of Topical Curcumin Gel for treating burns and scalds? In a recent research paper, published in the open access journal BioDiscovery, Dr. Madalene Heng, Clinical Professor of Dermatology at the David Geffen School of Medicine, stresses that use of topical curcumin gel for treating skin problems, like burns and scalds, is very different, and appears to work more effectively, when compared to taking curcumin tablets by mouth for other conditions.

“Curcumin gel appears to work much better when used on the skin because the gel preparation allows curcumin to penetrate the skin, inhibit phosphorylase kinase and reduce inflammation,” explains Dr Heng.

In this report, use of curcumin after burns and scalds was found to reduce the severity of the injury, lessen pain and inflammation, and improve healing with less than expected scarring, or even no scarring, of the affected skin. Dr. Heng reports her experience using curcumin gel on such injuries using three examples of patients treated after burns and scalds, and provides a detailed explanation why topical curcumin may work on such injuries.

Curcumin is an ingredient found in the common spice turmeric. Turmeric has been used as a spice for centuries in many Eastern countries and gives well known dishes, such as curry, their typical yellow-gold color. The spice has also been used for cosmetic and medical purposes for just as long in these countries.

In recent years, the medicinal value of curcumin has been the subject of intense scientific study, with publications numbering in the thousands, looking into the possible beneficial effects of this natural product on many kinds of affliction in humans.

The published study reports that topical curcumin gel applied soon after mild to moderate burns and scalds appears to be remarkably effective in relieving symptoms and improving healing of the affected skin.

“When taken by mouth, curcumin is very poorly absorbed into the body, and may not work as well,” notes Dr. Heng. “Nonetheless, our tests have shown that when the substance is used in a topical gel, the effect is notable.”

The author of the study believes that the effectiveness of curcumin gel on the skin – or topical curcumin – is related to its potent anti-inflammatory activity. Based on studies that she has done both in the laboratory and in patients over 25 years, the key to curcumin’s effectiveness on burns and scalds is that it is a natural inhibitor of an enzyme called phosphorylase kinase.

This enzyme in humans has many important functions, including its involvement in wound healing. Wound healing is the vital process that enables healing of tissues after injury. The process goes through a sequence of acute and chronic inflammatory events, during which there is redness, swelling, pain and then healing, often with scarring in the case of burns and scalds of the skin. The sequence is started by the release of phosphorylase kinase about five minutes after injury, which activates over 200 genes that are involved in wound healing.

Dr. Heng uses curcumin gel for burns, scalds and other skin conditions as complementary treatment, in addition to standard treatment usually recommended for such conditions.

Caption: Results five days after application of curcumin gel to burns, and after six weeks. Credit: Dr. Madalene Heng

Here’s a link to and a citation for the paper,

Phosphorylase Kinase Inhibition Therapy in Burns and Scalds by Madalene Heng. BioDiscovery 20: e11207 (24 Feb 2017) https://doi.org/10.3897/biodiscovery.20.e11207

This paper is in an open access journal.

Mapping 23,000 atoms in a nanoparticle

Identification of the precise 3-D coordinates of iron, shown in red, and platinum atoms in an iron-platinum nanoparticle. Courtesy of Colin Ophus and Florian Nickel/Berkeley Lab

The image of the iron-platinum nanoparticle (referenced in the headline) reminds me of foetal ultrasound images. A Feb. 1, 2017 news item on ScienceDaily tells us more,

In the world of the very tiny, perfection is rare: virtually all materials have defects on the atomic level. These imperfections — missing atoms, atoms of one type swapped for another, and misaligned atoms — can uniquely determine a material’s properties and function. Now, UCLA [University of California at Los Angeles] physicists and collaborators have mapped the coordinates of more than 23,000 individual atoms in a tiny iron-platinum nanoparticle to reveal the material’s defects.

The results demonstrate that the positions of tens of thousands of atoms can be precisely identified and then fed into quantum mechanics calculations to correlate imperfections and defects with material properties at the single-atom level.

A Feb. 1, 2017 UCLA news release, which originated the news item, provides more detail about the work,

Jianwei “John” Miao, a UCLA professor of physics and astronomy and a member of UCLA’s California NanoSystems Institute, led the international team in mapping the atomic-level details of the bimetallic nanoparticle, more than a trillion of which could fit within a grain of sand.

“No one has seen this kind of three-dimensional structural complexity with such detail before,” said Miao, who is also a deputy director of the Science and Technology Center on Real-Time Functional Imaging. This new National Science Foundation-funded consortium consists of scientists at UCLA and five other colleges and universities who are using high-resolution imaging to address questions in the physical sciences, life sciences and engineering.

Miao and his team focused on an iron-platinum alloy, a very promising material for next-generation magnetic storage media and permanent magnet applications.

By taking multiple images of the iron-platinum nanoparticle with an advanced electron microscope at Lawrence Berkeley National Laboratory and using powerful reconstruction algorithms developed at UCLA, the researchers determined the precise three-dimensional arrangement of atoms in the nanoparticle.

“For the first time, we can see individual atoms and chemical composition in three dimensions. Everything we look at, it’s new,” Miao said.

The team identified and located more than 6,500 iron and 16,600 platinum atoms and showed how the atoms are arranged in nine grains, each of which contains different ratios of iron and platinum atoms. Miao and his colleagues showed that atoms closer to the interior of the grains are more regularly arranged than those near the surfaces. They also observed that the interfaces between grains, called grain boundaries, are more disordered.

“Understanding the three-dimensional structures of grain boundaries is a major challenge in materials science because they strongly influence the properties of materials,” Miao said. “Now we are able to address this challenge by precisely mapping out the three-dimensional atomic positions at the grain boundaries for the first time.”

The researchers then used the three-dimensional coordinates of the atoms as inputs into quantum mechanics calculations to determine the magnetic properties of the iron-platinum nanoparticle. They observed abrupt changes in magnetic properties at the grain boundaries.

“This work makes significant advances in characterization capabilities and expands our fundamental understanding of structure-property relationships, which is expected to find broad applications in physics, chemistry, materials science, nanoscience and nanotechnology,” Miao said.

In the future, as the researchers continue to determine the three-dimensional atomic coordinates of more materials, they plan to establish an online databank for the physical sciences, analogous to protein databanks for the biological and life sciences. “Researchers can use this databank to study material properties truly on the single-atom level,” Miao said.

Miao and his team also look forward to applying their method called GENFIRE (GENeralized Fourier Iterative Reconstruction) to biological and medical applications. “Our three-dimensional reconstruction algorithm might be useful for imaging like CT scans,” Miao said. Compared with conventional reconstruction methods, GENFIRE requires fewer images to compile an accurate three-dimensional structure.

That means that radiation-sensitive objects can be imaged with lower doses of radiation.

The US Dept. of Energy (DOE) Lawrence Berkeley National Laboratory issued their own Feb. 1, 2017 news release (also on EurekAlert) about the work with a focus on how their equipment made this breakthrough possible (it repeats a little of the info. from the UCLA news release),

Scientists used one of the world’s most powerful electron microscopes to map the precise location and chemical type of 23,000 atoms in an extremely small particle made of iron and platinum.

The 3-D reconstruction reveals the arrangement of atoms in unprecedented detail, enabling the scientists to measure chemical order and disorder in individual grains, which sheds light on the material’s properties at the single-atom level. Insights gained from the particle’s structure could lead to new ways to improve its magnetic performance for use in high-density, next-generation hard drives.

What’s more, the technique used to create the reconstruction, atomic electron tomography (which is like an incredibly high-resolution CT scan), lays the foundation for precisely mapping the atomic composition of other useful nanoparticles. This could reveal how to optimize the particles for more efficient catalysts, stronger materials, and disease-detecting fluorescent tags.

Microscopy data was obtained and analyzed by scientists from the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) at the Molecular Foundry, in collaboration with Foundry users from UCLA, Oak Ridge National Laboratory, and the United Kingdom’s University of Birmingham. …

Atoms are the building blocks of matter, and the patterns in which they’re arranged dictate a material’s properties. These patterns can also be exploited to greatly improve a material’s function, which is why scientists are eager to determine the 3-D structure of nanoparticles at the smallest scale possible.

“Our research is a big step in this direction. We can now take a snapshot that shows the positions of all the atoms in a nanoparticle at a specific point in its growth. This will help us learn how nanoparticles grow atom by atom, and it sets the stage for a materials-design approach starting from the smallest building blocks,” says Mary Scott, who conducted the research while she was a Foundry user, and who is now a staff scientist. Scott and fellow Foundry scientists Peter Ercius and Colin Ophus developed the method in close collaboration with Jianwei Miao, a UCLA professor of physics and astronomy.

Their nanoparticle reconstruction builds on an achievement they reported last year in which they measured the coordinates of more than 3,000 atoms in a tungsten needle to a precision of 19 trillionths of a meter (19 picometers), which is many times smaller than a hydrogen atom. Now, they’ve taken the same precision, added the ability to distinguish different elements, and scaled up the reconstruction to include tens of thousands of atoms.

Importantly, their method maps the position of each atom in a single, unique nanoparticle. In contrast, X-ray crystallography and cryo-electron microscopy plot the average position of atoms from many identical samples. These methods make assumptions about the arrangement of atoms, which isn’t a good fit for nanoparticles because no two are alike.

“We need to determine the location and type of each atom to truly understand how a nanoparticle functions at the atomic scale,” says Ercius.

A TEAM approach

The scientists’ latest accomplishment hinged on the use of one of the highest-resolution transmission electron microscopes in the world, called TEAM I. It’s located at the National Center for Electron Microscopy, which is a Molecular Foundry facility. The microscope scans a sample with a focused beam of electrons, and then measures how the electrons interact with the atoms in the sample. It also has a piezo-controlled stage that positions samples with unmatched stability and position-control accuracy.

The researchers began growing an iron-platinum nanoparticle from its constituent elements, and then stopped the particle’s growth before it was fully formed. They placed the “partially baked” particle in the TEAM I stage, obtained a 2-D projection of its atomic structure, rotated it a few degrees, obtained another projection, and so on. Each 2-D projection provides a little more information about the full 3-D structure of the nanoparticle.

They sent the projections to Miao at UCLA, who used a sophisticated computer algorithm to convert the 2-D projections into a 3-D reconstruction of the particle. The individual atomic coordinates and chemical types were then traced from the 3-D density based on the knowledge that iron atoms are lighter than platinum atoms. The resulting atomic structure contains 6,569 iron atoms and 16,627 platinum atoms, with each atom’s coordinates precisely plotted to less than the width of a hydrogen atom.
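The tilt-and-reconstruct idea can be sketched in miniature. The toy below is not GENFIRE or the team's actual algorithm, just simple unfiltered back-projection on a 2-D "particle" using NumPy; it shows how many 1-D projections taken at different angles combine back into an image of the object.

```python
import numpy as np

def rotate_nn(img, angle_deg):
    """Rotate a square image about its center (nearest-neighbor sampling)."""
    n = img.shape[0]
    c = (n - 1) / 2.0
    theta = np.deg2rad(angle_deg)
    ys, xs = np.mgrid[:n, :n]
    # Map each output pixel back to its source location in the input image.
    x0 = np.cos(theta) * (xs - c) + np.sin(theta) * (ys - c) + c
    y0 = -np.sin(theta) * (xs - c) + np.cos(theta) * (ys - c) + c
    xi = np.clip(np.rint(x0).astype(int), 0, n - 1)
    yi = np.clip(np.rint(y0).astype(int), 0, n - 1)
    return img[yi, xi]

def project(img, angle_deg):
    """One simulated tilt-series exposure: sum the rotated image along columns."""
    return rotate_nn(img, angle_deg).sum(axis=0)

def back_project(projections, angles_deg, n):
    """Smear each 1-D projection back across the plane and average."""
    recon = np.zeros((n, n))
    for proj, angle in zip(projections, angles_deg):
        recon += rotate_nn(np.tile(proj, (n, 1)), -angle)
    return recon / len(angles_deg)

# A toy "particle": a bright disk on a dark background.
n = 64
yy, xx = np.mgrid[:n, :n]
particle = (((xx - 32) ** 2 + (yy - 32) ** 2) < 100).astype(float)

angles = np.linspace(0, 180, 36, endpoint=False)
sinogram = [project(particle, a) for a in angles]
recon = back_project(sinogram, angles, n)

# The reconstruction is brightest where the disk actually is.
print(recon[32, 32] > recon[5, 5])
```

The real method works in three dimensions, uses iterative Fourier-space refinement rather than plain back-projection, and reaches picometer precision, but the core input is the same: a stack of projections at known tilt angles.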

Translating the data into scientific insights

Interesting features emerged at this extreme scale after Molecular Foundry scientists used code they developed to analyze the atomic structure. For example, the analysis revealed chemical order and disorder in interlocking grains, in which the iron and platinum atoms are arranged in different patterns. This has large implications for how the particle grew and its real-world magnetic properties. The analysis also revealed single-atom defects and the width of disordered boundaries between grains, which was not previously possible in complex 3-D boundaries.

“The important materials science problem we are tackling is how this material transforms from a highly randomized structure, what we call a chemically-disordered structure, into a regular highly-ordered structure with the desired magnetic properties,” says Ophus.

To explore how the various arrangements of atoms affect the nanoparticle’s magnetic properties, scientists from DOE’s Oak Ridge National Laboratory ran computer calculations on the Titan supercomputer at ORNL, using the coordinates and chemical type of each atom, to simulate the nanoparticle’s behavior in a magnetic field. This allowed the scientists to see patterns of atoms that are very magnetic, which is ideal for hard drives. They also saw patterns with poor magnetic properties that could sap a hard drive’s performance.

“This could help scientists learn how to steer the growth of iron-platinum nanoparticles so they develop more highly magnetic patterns of atoms,” says Ercius.

Adds Scott, “More broadly, the imaging technique will shed light on the nucleation and growth of ordered phases within nanoparticles, which isn’t fully theoretically understood but is critically important to several scientific disciplines and technologies.”

The folks at the Berkeley Lab have created a video (notice where the still image from the beginning of this post appears).

The Oak Ridge National Laboratory (ORNL), not wanting to be left out, has been mentioned in a Feb. 3, 2017 news item on ScienceDaily,

… researchers working with magnetic nanoparticles at the University of California, Los Angeles (UCLA), and the US Department of Energy’s (DOE’s) Lawrence Berkeley National Laboratory (Berkeley Lab) approached computational scientists at DOE’s Oak Ridge National Laboratory (ORNL) to help solve a unique problem: to model magnetism at the atomic level using experimental data from a real nanoparticle.

“These types of calculations have been done for ideal particles with ideal crystal structures but not for real particles,” said Markus Eisenbach, a computational scientist at the Oak Ridge Leadership Computing Facility (OLCF), a DOE Office of Science User Facility located at ORNL.

A Feb. 2, 2017 ORNL news release on EurekAlert, which originated the news item, explains how their team added to the research,

Eisenbach develops quantum mechanical electronic structure simulations that predict magnetic properties in materials. Working with Paul Kent, a computational materials scientist at ORNL’s Center for Nanophase Materials Sciences, the team collaborated with researchers at UCLA and Berkeley Lab’s Molecular Foundry to combine world-class experimental data with world-class computing to do something new–simulate magnetism atom by atom in a real nanoparticle.

Using the new data from the research teams on the West Coast, Eisenbach and Kent were able to precisely model the measured atomic structure, including defects, from a unique iron-platinum (FePt) nanoparticle and simulate its magnetic properties on the 27-petaflop Titan supercomputer at the OLCF.

Electronic structure codes take atomic and chemical structure and solve for the corresponding magnetic properties. However, these structures are typically derived from many 2-D electron microscopy or x-ray crystallography images averaged together, resulting in a representative, but not true, 3-D structure.

“In this case, researchers were able to get the precise 3-D structure for a real particle,” Eisenbach said. “The UCLA group has developed a new experimental technique where they can tell where the atoms are (the coordinates) and the chemical resolution, or what they are: iron or platinum.”

The ORNL news release goes on to describe the work from the perspective of the people who ran the supercomputer simulations,

A Supercomputing Milestone

Magnetism at the atomic level is driven by quantum mechanics — a fact that has shaken up classical physics calculations and called for increasingly complex, first-principle calculations, or calculations working forward from fundamental physics equations rather than relying on assumptions that reduce computational workload.

For magnetic recording and storage devices, researchers are particularly interested in magnetic anisotropy, or what direction magnetism favors in an atom.

“If the anisotropy is too weak, a bit written to the nanoparticle might flip at room temperature,” Kent said.
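Kent's worry about bits flipping at room temperature can be made concrete with the standard Neel-Arrhenius estimate of thermally activated magnetization reversal. This is a back-of-the-envelope sketch; the anisotropy values are illustrative round numbers, not taken from the paper.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def flip_rate(anisotropy, volume, temperature, attempt_freq=1e9):
    """Neel-Arrhenius rate (per second) at which a single-domain grain's
    magnetization spontaneously reverses over its anisotropy energy barrier."""
    barrier = anisotropy * volume  # energy barrier in joules
    return attempt_freq * math.exp(-barrier / (K_B * temperature))

# Illustrative numbers: chemically ordered FePt has very high anisotropy
# (of order 1e6-1e7 J/m^3), so even a ~4 nm grain is stable at room temperature.
grain_volume = (4e-9) ** 3
strong = flip_rate(7e6, grain_volume, 300)  # ordered grain: essentially never flips
weak = flip_rate(7e4, grain_volume, 300)    # weakly anisotropic grain: flips constantly

print(strong, weak)
```

The huge gap between the two rates is why the chemically ordered phase matters so much for storage media: the same grain size goes from useless to stable purely through the arrangement of the iron and platinum atoms.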

To solve for magnetic anisotropy, Eisenbach and Kent used two computational codes to compare and validate results.

To simulate a supercell of about 1,300 atoms from strongly magnetic regions of the 23,000-atom nanoparticle, they used the Linear Scaling Multiple Scattering (LSMS) code, a first-principles density functional theory code developed at ORNL.

“The LSMS code was developed for large magnetic systems and can tackle lots of atoms,” Kent said.

As principal investigator on 2017, 2016, and previous INCITE program awards, Eisenbach has scaled the LSMS code to Titan for a range of magnetic materials projects, and the in-house code has been optimized for Titan’s accelerated architecture, speeding up calculations more than 8 times on the machine’s GPUs. Exceptionally capable of crunching large magnetic systems quickly, the LSMS code received an Association for Computing Machinery Gordon Bell Prize in high-performance computing achievement in 1998 and 2009, and developments continue to enhance the code for new architectures.

Working with Renat Sabirianov at the University of Nebraska at Omaha, the team also ran VASP, a simulation package that is better suited for smaller atom counts, to simulate regions of about 32 atoms.

“With both approaches, we were able to confirm that the local VASP results were consistent with the LSMS results, so we have a high confidence in the simulations,” Eisenbach said.

Computer simulations revealed that grain boundaries have a strong effect on magnetism. “We found that the magnetic anisotropy energy suddenly transitions at the grain boundaries. These magnetic properties are very important,” Miao said.

In the future, researchers hope that advances in computing and simulation will make a full-particle simulation possible — as first-principles calculations are currently too intensive to solve small-scale magnetism for regions larger than a few thousand atoms.

Also, future simulations like these could show how different fabrication processes, such as the temperature at which nanoparticles are formed, influence magnetism and performance.

“There’s a hope going forward that one would be able to use these techniques to look at nanoparticle growth and understand how to optimize growth for performance,” Kent said.

Finally, here’s a link to and a citation for the paper,

Deciphering chemical order/disorder and material properties at the single-atom level by Yongsoo Yang, Chien-Chun Chen, M. C. Scott, Colin Ophus, Rui Xu, Alan Pryor, Li Wu, Fan Sun, Wolfgang Theis, Jihan Zhou, Markus Eisenbach, Paul R. C. Kent, Renat F. Sabirianov, Hao Zeng, Peter Ercius, & Jianwei Miao. Nature 542, 75–79 (02 February 2017) doi:10.1038/nature21042 Published online 01 February 2017

This paper is behind a paywall.

The mathematics of Disney’s ‘Moana’

The hit Disney movie “Moana” features stunning visual effects, including the animation of water to such a degree that it becomes a distinct character in the film. Courtesy of Walt Disney Animation Studios

Few people think to marvel over the mathematics when watching an animated feature but without mathematicians, the artists would not be able to achieve their artistic goals as a Jan. 4, 2017 news item on phys.org makes clear (Note: A link has been removed),

UCLA [University of California at Los Angeles] mathematics professor Joseph Teran, a Walt Disney consultant on animated movies since 2007, is under no illusion that artists want lengthy mathematics lessons, but many of them realize that the success of animated movies often depends on advanced mathematics.

“In general, the animators and artists at the studios want as little to do with mathematics and physics as possible, but the demands for realism in animated movies are so high,” Teran said. “Things are going to look fake if you don’t at least start with the correct physics and mathematics for many materials, such as water and snow. If the physics and mathematics are not simulated accurately, it will be very glaring that something is wrong with the animation of the material.”

Teran and his research team have helped infuse realism into several Disney movies, including “Frozen,” where they used science to animate snow scenes. Most recently, they applied their knowledge of math, physics and computer science to enliven the new 3-D computer-animated hit, “Moana,” a tale about an adventurous teenage girl who is drawn to the ocean and is inspired to leave the safety of her island on a daring journey to save her people.

A Jan. 3, 2017 UCLA news release, which originated the news item, explains in further nontechnical detail,

Alexey Stomakhin, a former UCLA doctoral student of Teran’s and Andrea Bertozzi’s, played an important role in the making of “Moana.” After earning his Ph.D. in applied mathematics in 2013, he became a senior software engineer at Walt Disney Animation Studios. Working with Disney’s effects artists, technical directors and software developers, Stomakhin led the development of the code that was used to simulate the movement of water in “Moana,” enabling it to play a role as one of the characters in the film.

“The increased demand for realism and complexity in animated movies makes it preferable to get assistance from computers; this means we have to simulate the movement of the ocean surface and how the water splashes, for example, to make it look believable,” Stomakhin explained. “There is a lot of mathematics, physics and computer science under the hood. That’s what we do.”

“Moana” has been praised for its stunning visual effects in words the mathematicians love hearing. “Everything in the movie looks almost real, so the movement of the water has to look real too, and it does,” Teran said. “’Moana’ has the best water effects I’ve ever seen, by far.”

Stomakhin said his job is fun and “super-interesting, especially when we cheat physics and step beyond physics. It’s almost like building your own universe with your own laws of physics and trying to simulate that universe.

“Disney movies are about magic, so magical things happen which do not exist in the real world,” said the software engineer. “It’s our job to add some extra forces and other tricks to help create those effects. If you have an understanding of how the real physical laws work, you can push parameters beyond physical limits and change equations slightly; we can predict the consequences of that.”

To make animated movies these days, movie studios need to solve, or nearly solve, partial differential equations. Stomakhin, Teran and their colleagues build the code that solves the partial differential equations. More accurately, they write algorithms that closely approximate the partial differential equations because they cannot be solved perfectly. “We try to come up with new algorithms that have the highest-quality metrics in all possible categories, including preserving angular momentum perfectly and preserving energy perfectly. Many algorithms don’t have these properties,” Teran said.
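The point about preserving energy is easy to demonstrate on the simplest possible "material", a single unit-mass spring. The sketch below is illustrative only (not Disney's solver): it compares naive explicit Euler time stepping against a symplectic variant. The naive scheme steadily pumps energy into the system, which on screen reads as motion that jitters and blows up.

```python
def explicit_euler(x, v, dt, steps, k=1.0):
    """Naive time stepping of a unit-mass spring: energy grows every step."""
    for _ in range(steps):
        x, v = x + dt * v, v - dt * k * x
    return x, v

def symplectic_euler(x, v, dt, steps, k=1.0):
    """Update velocity first, then position with the new velocity:
    energy stays bounded for all time."""
    for _ in range(steps):
        v = v - dt * k * x
        x = x + dt * v
    return x, v

def energy(x, v, k=1.0):
    return 0.5 * v * v + 0.5 * k * x * x

x0, v0, dt, steps = 1.0, 0.0, 0.01, 10_000
xe, ve = explicit_euler(x0, v0, dt, steps)
xs, vs = symplectic_euler(x0, v0, dt, steps)

print(energy(xe, ve))  # drifts well above the initial 0.5
print(energy(xs, vs))  # stays close to 0.5
```

Production fluid solvers face the same trade-off at vastly larger scale: an integrator that leaks or gains energy produces water that visibly damps out or explodes, which is exactly the "something is wrong" effect Teran describes.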

Stomakhin was also involved in creating the ocean’s crashing waves that have to break at a certain place and time. That task required him to get creative with physics and use other tricks. “You don’t allow physics to completely guide it,” he said. “You allow the wave to break only when it needs to break.”

Depicting boats on waves posed additional challenges for the scientists.

“It’s easy to simulate a boat traveling through a static lake, but a boat on waves is much more challenging to simulate,” Stomakhin said. “We simulated the fluid around the boat; the challenge was to blend that fluid with the rest of the ocean. It can’t look like the boat is splashing in a little swimming pool — the blend needs to be seamless.”

Stomakhin spent more than a year developing the code and understanding the physics that allowed him to achieve this effect.

“It’s nice to see the great visual effect, something you couldn’t have achieved if you hadn’t designed the algorithm to solve physics accurately,” said Teran, who has taught an undergraduate course on scientific computing for the visual-effects industry.

While Teran loves spectacular visual effects, he said the research has many other scientific applications as well. It could be used to simulate plasmas, simulate 3-D printing or for surgical simulation, for example. Teran is using a related algorithm to build virtual livers to substitute for the animal livers that surgeons train on. He is also using the algorithm to study traumatic leg injuries.

Teran describes the work with Disney as “bread-and-butter, high-performance computing for simulating materials, as mechanical engineers and physicists at national laboratories would. Simulating water for a movie is not so different, but there are, of course, small tweaks to make the water visually compelling. We don’t have a separate branch of research for computer graphics. We create new algorithms that work for simulating wide ranges of materials.”

Teran, Stomakhin and three other applied mathematicians — Chenfanfu Jiang, Craig Schroeder and Andrew Selle — also developed a state-of-the-art simulation method for fluids in graphics, called APIC, based on months of calculations. It allows for better realism and stunning visual results. Jiang is a UCLA postdoctoral scholar in Teran’s laboratory, who won a 2015 UCLA best dissertation prize. Schroeder is a former UCLA postdoctoral scholar who worked with Teran and is now at UC Riverside. Selle, who worked at Walt Disney Animation Studios, is now at Google.

Their newest version of APIC has been accepted for publication by the peer-reviewed Journal of Computational Physics.

“Alexey is using ideas from high-performance computing to make movies,” Teran said, “and we are contributing to the scientific community by improving the algorithm.”

Unfortunately, the paper does not seem to have been published early online so I cannot offer a link.

Final comment: it would have been interesting to have a comment from one of the film’s artists or animators included in the article, but that may not have been possible due to time or space constraints.

Wearable microscopes

It never occurred to me that someone might want a wearable microscope but, apparently, there is a need. A Sept. 27, 2016 news item on phys.org explains,

UCLA [University of California at Los Angeles] researchers working with a team at Verily Life Sciences have designed a mobile microscope that can detect and monitor fluorescent biomarkers inside the skin with a high level of sensitivity, an important tool in tracking various biochemical reactions for medical diagnostics and therapy.

A Sept. 26, 2016 UCLA news release by Meghan Steele Horan, which originated the news item, describes the work in more detail,

This new system weighs less than one-tenth of a pound, making it small and light enough for a person to wear around their bicep, among other parts of their body. In the future, technology like this could be used for continuous patient monitoring at home or at point-of-care settings.

The research, which was published in the journal ACS Nano, was led by Aydogan Ozcan, UCLA’s Chancellor’s Professor of Electrical Engineering and Bioengineering and associate director of the California NanoSystems Institute and Vasiliki Demas of Verily Life Sciences (formerly Google Life Sciences).

Fluorescent biomarkers are routinely used for cancer detection and drug delivery and release among other medical therapies. Recently, biocompatible fluorescent dyes have emerged, creating new opportunities for noninvasive sensing and measuring of biomarkers through the skin.

However, detecting artificially added fluorescent objects under the skin is challenging. Collagen, melanin and other biological structures emit natural light in a process called autofluorescence. Various methods have been tried to investigate this problem using different sensing systems. Most are quite expensive and difficult to make small and cost-effective enough to be used in a wearable imaging system.

To test the mobile microscope, researchers first designed a tissue phantom — an artificially created material that mimics human skin optical properties, such as autofluorescence, absorption and scattering. The target fluorescent dye solution was injected into a micro-well with a volume of about one-hundredth of a microliter, thinner than a human hair, and subsequently implanted into the tissue phantom half a millimeter to 2 millimeters from the surface — which would be deep enough to reach blood and other tissue fluids in practice.

To measure the fluorescent dye, the wearable microscope created by Ozcan and his team used a laser to hit the skin at an angle. The fluorescent image at the surface of the skin was captured via the wearable microscope. The image was then uploaded to a computer where it was processed using a custom-designed algorithm, digitally separating the target fluorescent signal from the autofluorescence of the skin, at a very sensitive parts-per-billion level of detection.
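The separation step can be illustrated with a one-dimensional toy (a hypothetical stand-in, not the paper's actual algorithm): model the autofluorescence as a bright but slowly varying background, fit that smooth component with a low-order polynomial, and subtract it, leaving the narrow target signal behind.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 500)

# Bright, slowly varying "autofluorescence" plus a weak, narrow target dye signal.
background = 100.0 * (1.0 + 0.3 * np.sin(3 * x))
target = 2.0 * np.exp(-((x - 0.6) / 0.01) ** 2)
measured = background + target + rng.normal(0.0, 0.2, x.size)

# The background is smooth, so a low-order polynomial fit captures it
# while mostly ignoring the narrow peak; subtracting isolates the target.
coeffs = np.polyfit(x, measured, deg=5)
residual = measured - np.polyval(coeffs, x)

peak = x[np.argmax(residual)]
print(round(float(peak), 2))  # recovered target location, near 0.6
```

Note how the target here is fifty times dimmer than the background it sits on; the published system pushes this idea much further, reaching parts-per-billion sensitivity in two dimensions.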

“We can place various tiny bio-sensors inside the skin next to each other, and through our imaging system, we can tell them apart,” Ozcan said. “We can monitor all these embedded sensors inside the skin in parallel, even understand potential misalignments of the wearable imager and correct it to continuously quantify a panel of biomarkers.”

This computational imaging framework might also be used in the future to continuously monitor various chronic diseases through the skin using an implantable or injectable fluorescent dye.

Here’s a link to and a citation for the paper,

Quantitative Fluorescence Sensing Through Highly Autofluorescent, Scattering, and Absorbing Media Using Mobile Microscopy by Zoltán Göröcs, Yair Rivenson, Hatice Ceylan Koydemir, Derek Tseng, Tamara L. Troy, Vasiliki Demas, and Aydogan Ozcan. ACS Nano, 2016, 10 (9), pp 8989–8999 DOI: 10.1021/acsnano.6b05129 Publication Date (Web): September 13, 2016

Copyright © 2016 American Chemical Society

This paper is behind a paywall.

Faster predictive toxicology of nanomaterials

As more nanotechnology-enabled products make their way to the market and concerns about their safety rise, scientists are working to find better ways of assessing and predicting the safety of these materials. From an Aug. 13, 2016 news item on Nanowerk,

UCLA [University of California at Los Angeles] researchers have designed a laboratory test that uses microchip technology to predict how potentially hazardous nanomaterials could be.

According to UCLA professor Huan Meng, certain engineered nanomaterials, such as the non-purified carbon nanotubes used to strengthen commercial products, have the potential to injure the lungs if inhaled during the manufacturing process. The new test he helped develop could be used to analyze the extent of the potential hazard.

An Aug. 12, 2016 UCLA news release, which originated the news item, expands on the theme,

The same test could also be used to identify biomarkers that help scientists and doctors detect cancer and infectious diseases. Currently, scientists identify those biomarkers using other tests; one of the most common is called enzyme-linked immunosorbent assay, or ELISA. But the new platform, which is called semiconductor electronic label-free assay, or SELFA, costs less and is faster and more accurate, according to research published in the journal Scientific Reports.

The study was led by Meng, a UCLA assistant adjunct professor of medicine, and Chi On Chui, a UCLA associate professor of electrical engineering and bioengineering.

ELISA has been used by scientists for decades to analyze biological samples — for example, to detect whether epithelial cells in the lungs that have been exposed to nanomaterials are inflamed. But ELISA must be performed in a laboratory setting by skilled technicians, and a single test can cost roughly $700 and take five to seven days to process.

In contrast, SELFA uses microchip technology to analyze samples. The test can take between 30 minutes and two hours and, according to the UCLA researchers, could cost just a few dollars per sample when high-volume production begins.

The SELFA chip contains a T-shaped nanowire that acts as an integrated sensor and amplifier. To analyze a sample, scientists place it on a sensor on the chip. The vertical part of the T-shaped nanowire converts the signal from the molecule being analyzed into an electrical current, and the horizontal portion amplifies that current to distinguish the molecule from others.

The use of the T-shaped nanowires created in Chui’s lab is a new application of a patented UCLA invention developed by Chui and his colleagues. The study also marks the first time that “lab-on-a-chip” analysis has been tested in a scenario that mimics a real-life situation.

The UCLA scientists exposed cultured lung cells to different nanomaterials and then compared their results using SELFA with results in a database of previous studies that used other testing methods.

“By measuring biomarker concentrations in the cell culture, we showed that SELFA was 100 times more sensitive than ELISA,” Meng said. “This means that not only can SELFA analyze much smaller sample sizes, but also that it can minimize false-positive test results.”

Chui said, “The results are significant because SELFA measurement allows us to predict the inflammatory potential of a range of nanomaterials inside cells and validate the prediction with cellular imaging and experiments in animals’ lungs.”

Here’s a link to and a citation for the paper,

Semiconductor Electronic Label-Free Assay for Predictive Toxicology by Yufei Mao, Kyeong-Sik Shin, Xiang Wang, Zhaoxia Ji, Huan Meng, & Chi On Chui. Scientific Reports 6, Article number: 24982 (2016) doi:10.1038/srep24982 Published online: 27 April 2016

This paper is open access.

Cutting into a cell with a nanoblade

A May 11, 2016 news item on Nanotechnology Now features a type of surgery that could aid in cell engineering,

To study certain aspects of cells, researchers need the ability to take the innards out, manipulate them, and put them back. Options for this kind of work are limited, but researchers reporting May 10 [2016] in Cell Metabolism describe a “nanoblade” that can slice through a cell’s membrane to insert mitochondria. The researchers have previously used this technology to transfer other materials between cells and hope to commercialize the nanoblade for wider use in bioengineering.

Caption: This diagram illustrates the process of transferring mitochondria between cells using the nanoblade technology. Credit: Alexander N. Patananan Courtesy UCLA

A May 10, 2016 Cell Press news release on EurekAlert, which originated the news item, expands on the theme,

“As a new tool for cell engineering, to truly engineer cells for health purposes and research, I think this is very unique,” says Mike Teitell, a pathologist and bioengineer at the University of California, Los Angeles (UCLA). “We haven’t run into anything so far, up to a few microns in size, that we can’t deliver.”

Teitell and Pei-Yu “Eric” Chiou, also a bioengineer at UCLA, first conceived the idea of a nanoblade several years ago to transfer a nucleus from one cell to another. However, they soon delved into the intersection of stem cell biology and energy metabolism, where the technology could be used to manipulate a cell’s mitochondria. Studying the effects of mutations in the mitochondrial genome, which can cause debilitating or fatal diseases in humans, is tricky for a number of reasons.

“There’s a bottleneck in the field for modifying a cell’s mitochondrial DNA,” says Teitell. “So we are working on a two-step process: edit the mitochondrial genome outside of a cell, and then take those manipulated mitochondria and put them back into the cell. We’re still working on the first step, but we’ve solved that second one quite well.”

The nanoblade apparatus consists of a microscope, laser, and titanium-coated micropipette to act as the “blade,” operated using a joystick controller. When a laser pulse strikes the titanium, the metal heats up, vaporizing the surrounding water layers in the culture media and forming a bubble next to a cell. Within a microsecond, the bubble expands, generating a local force that punctures the cell membrane and creates a passageway several microns long that the “cargo”–in this case, mitochondria–can be pushed through. The cell then rapidly repairs the membrane defect.

Teitell, Chiou, and their team used the nanoblade to insert tagged mitochondria from human breast cancer cells and embryonic kidney cells into cells without mitochondrial DNA. When they sequenced the nuclear and mitochondrial DNA afterwards, the researchers saw that the mitochondria had been successfully transferred and replicated by 2% of the cells, with a range of functionality. Other methods of mitochondrial transfer are hard to control, and when they have been reported to work, the success rates have been only 0.0001%-0.5% according to the researchers.

“The success of the mitochondrial transfer was very encouraging,” says Chiou. “The most exciting application for the nanoblade, to me, is in the study of mitochondria and infectious diseases. This technology brings new capabilities to help advance these fields.”

The team’s aspirations also go well beyond mitochondria, and they’ve already scaled up the nanoblade apparatus into an automated high-throughput version. “We want to make a platform that’s easy to use for everyone and allow researchers to devise anything they can think of a few microns or smaller that would be helpful for their research–whether that’s inserting antibodies, pathogens, synthetic materials, or something else that we haven’t imagined,” says Teitell. “It would be very cool to allow people to do something that they can’t do right now.”

The pipette being used is measured at the microscale but it’s called a nanoblade? Well, perhaps the tip or the edge of the pipette is measured at the nanoscale.

Getting back to the research, here’s a link to and a citation for the paper,

Mitochondrial Transfer by Photothermal Nanoblade Restores Metabolite Profile in Mammalian Cells by Ting-Hsiang Wu, Enrico Sagullo, Dana Case, Xin Zheng, Yanjing Li, Jason S. Hong, Tara TeSlaa, Alexander N. Patananan, J. Michael McCaffery, Kayvan Niazi, Daniel Braas, Carla M. Koehler, Thomas G. Graeber, Pei-Yu Chiou, Michael A. Teitell. Cell Metabolism Volume 23, Issue 5, p921–929, 10 May 2016  DOI: http://dx.doi.org/10.1016/j.cmet.2016.04.007

This paper appears to be open access.

Smaller (20nm vs 110nm) silver nanoparticles are more likely to be absorbed by fish

An Oct. 8, 2015 news item on Nanowerk offers some context for why researchers at the University of California at Los Angeles (UCLA) are studying silver nanoparticles and their entry into the water system,

More than 2,000 consumer products today contain nanoparticles — particles so small that they are measured in billionths of a meter.

Manufacturers use nanoparticles to help sunscreen work better against the sun’s rays and to make athletic apparel better at wicking moisture away from the body, among many other purposes.

Of those products, 462 — ranging from toothpaste to yoga mats — contain nanoparticles made from silver, which are used for their ability to kill bacteria. But that benefit might come at a cost to the environment. In many cases, simply using the products as intended causes silver nanoparticles to wind up in rivers and other bodies of water, where they can be ingested by fish and interact with other aquatic life.

For scientists, a key question has been to what extent organisms retain those particles and what effects they might have.

I’d like to know where they got those numbers “… 2,000 consumer products …” and “… 462 — ranging from toothpaste to yoga mats — contain nanoparticles made from silver… .”

Getting back to the research, an Oct. 7, 2015 UCLA news release, which originated the news item, describes the work in more detail,

A new study by the University of California Center for Environmental Implications of Nanotechnology has found that smaller silver nanoparticles were more likely to enter fish’s bodies, and that they persisted longer than larger silver nanoparticles or fluid silver nitrate. The study, published online in the journal ACS Nano, was led by UCLA postdoctoral scholars Olivia Osborne and Sijie Lin, and Andre Nel, director of UCLA’s Center for Environmental Implications of Nanotechnology and associate director of the California NanoSystems Institute at UCLA.

Nel said that although it is not yet known whether silver nanoparticles are harmful, the research team wanted to first identify whether they were even being absorbed by fish. CEIN, which is funded by the National Science Foundation, is focused on studying the effects of nanotechnology on the environment.

In the study, researchers placed zebrafish in water that contained fluid silver nitrate and two sizes of silver nanoparticles — some measuring 20 nanometers in diameter and others 110 nanometers. Although the difference in size between these two particles is so minute that it can only be seen using high-powered transmission electron microscopes, the researchers found that the two sizes of particles affected the fish very differently.

The researchers used zebrafish in the study because they have some genetic similarities to humans and because their embryos and larvae are transparent, which makes them easier to observe. In addition, they tend to absorb chemicals and other substances from water.

Osborne said the team focused its research on the fish’s gills and intestines because they are the organs most susceptible to silver exposure.

“The gills showed a significantly higher silver content for the 20-nanometer than the 110-nanometer particles, while the values were more similar in the intestines,” she said, adding that both sizes of the silver particles were retained in the intestines even after the fish spent seven days in clean water. “The most interesting revelation was that the difference in size of only 90 nanometers made such a striking difference in the particles’ demeanor in the gills and intestines.”

The experiment was one of the most comprehensive in vivo studies to date on silver nanoparticles, as well as the first to compare silver nanoparticle toxicity by extent of organ penetration and duration with different-sized particles, and the first to demonstrate a mechanism for the differences.

Osborne said the results seem to indicate that smaller particles penetrated deeper into the fishes’ organs and stayed there longer because they dissolve faster than the larger particles and are more readily absorbed by the fish.

Lin said the results indicate that companies using silver nanoparticles have to strike a balance that recognizes their benefits and their potential as a pollutant. Using slightly larger nanoparticles might help make them somewhat safer, for example, but it also might make the products in which they’re used less effective.

He added that data from the study could be translated to understand how other nanoparticles could be used in more environmentally sustainable ways.

Nel said the team’s next step is to determine whether silver particles are potentially harmful. “Our research will continue in earnest to determine what the long-term effects of this exposure can be,” he said.

Here’s an image illustrating the findings,

Courtesy ACS Nano

Here’s a link to and a citation for the paper,

Organ-Specific and Size-Dependent Ag Nanoparticle Toxicity in Gills and Intestines of Adult Zebrafish by Olivia J. Osborne, Sijie Lin, Chong Hyun Chang, Zhaoxia Ji, Xuechen Yu, Xiang Wang, Shuo Lin, Tian Xia, and André E. Nel. ACS Nano, Article ASAP DOI: 10.1021/acsnano.5b04583 Publication Date (Web): September 1, 2015

Copyright © 2015 American Chemical Society

This paper is behind a paywall.

Observing photo exposure one nanoscale grain at a time

A June 9, 2015 news item on Nanotechnology Now highlights research into a common phenomenon, photographic exposure,

Photoinduced chemical reactions are responsible for many fundamental processes and technologies, from energy conversion in nature to microfabrication by photolithography. One process that is known from everyday life and can be observed with the naked eye is the exposure of photographic film. At DESY’s [Deutsches Elektronen-Synchrotron] X-ray light source PETRA III, scientists have now monitored the chemical processes during a photographic exposure at the level of individual nanoscale grains in real time. The advanced experimental method enables the investigation of a broad variety of chemical and physical processes in materials with millisecond temporal resolution, ranging from phase transitions to crystal growth. The research team, led by Prof. Jianwei (John) Miao from the University of California in Los Angeles and Prof. Tim Salditt from the University of Göttingen, reports its technique and observations in the journal Nature Materials.

A June 9, 2015 DESY press release (also on EurekAlert), which originated the news item, provides more detail about the research,

The researchers investigated a photographic paper (Kodak linagraph paper Type 2167 or “yellow burn paper”) that is often used to determine the position of the beam at X-ray experiments. “The photographic paper we looked at is not specially designed for X-rays. It works by changing its colour on exposure to light or X-rays,” explains DESY physicist Dr. Michael Sprung, head of the PETRA III beamline P10 where the experiments took place.

The X-rays were not only used to expose the photographic paper, but also to analyse changes of its inner composition at the same time. The paper carries a photosensitive film of a few micrometre thickness, consisting of tiny silver bromide grains dispersed in a gelatine matrix, and with an average size of about 700 nanometres. A nanometre is a millionth of a millimetre. When X-rays impinge onto such a crystalline grain, they are diffracted in a characteristic way, forming a unique pattern on the detector that reveals properties like crystal lattice spacing, chemical composition and orientation. “We could observe individual silver bromide grains within the ‘burn’ paper since the X-ray beam had a size of only 270 by 370 nanometres – smaller than the average grain,” says Salditt, who is a partner of DESY in the construction and operation of the GINIX (Göttingen Instrument for Nano-Imaging with X-Rays) at beamline P10.
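The lattice-spacing measurement described above rests on Bragg's law, nλ = 2d sin θ: a crystalline grain diffracts X-rays strongly only at angles set by its plane spacings. Here is a rough sketch of that arithmetic — the silver bromide lattice constant is a textbook figure, and the photon energy is an arbitrary hard-X-ray choice, not necessarily the one used at beamline P10:

```python
import math

# Conversion constant: photon wavelength (Angstrom) = 12.398 / energy (keV)
HC_KEV_ANGSTROM = 12.398

def bragg_angle_deg(d_spacing_angstrom, photon_energy_kev, order=1):
    """Bragg angle theta satisfying n * lambda = 2 * d * sin(theta)."""
    wavelength = HC_KEV_ANGSTROM / photon_energy_kev
    s = order * wavelength / (2.0 * d_spacing_angstrom)
    if s > 1.0:
        raise ValueError("reflection not reachable at this photon energy")
    return math.degrees(math.asin(s))

a = 5.77                              # AgBr cubic lattice constant, Angstrom
d200 = a / math.sqrt(2**2 + 0 + 0)    # (200) plane spacing for a cubic cell
theta = bragg_angle_deg(d200, photon_energy_kev=13.8)
print(round(theta, 2))  # Bragg angle of roughly 9 degrees
```

Shifts and rotations of the measured diffraction angles are what let the team track lattice strain and grain rotation in real time.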

The X-ray exposure starts the photolysis from silver bromide to produce silver. An absorbed X-ray photon can create many photolytic silver atoms, which grow and agglomerate at the surface and inside the silver bromide grain. The scientists observed how the silver bromide grains were strained, began to turn in the gelatine matrix and broke up into smaller crystallites as well as the growth of pure silver nano grains. The exceptionally bright beam of PETRA III together with a high-speed detector enabled the ‘filming’ of the process with up to five milliseconds temporal resolution. “We observed, for the first time, grain rotation and lattice deformation during photoinduced chemical reactions,” emphasises Miao. “We were actually surprised how fast some of these single grains rotate,” adds Sprung. “Some spin almost one time every two seconds.”

“As advanced synchrotron light sources are currently under rapid development in the US, Europe and Asia,” the authors anticipate that “in situ X-ray nanodiffraction, which enables to measure atomic resolution diffraction patterns with several millisecond temporal resolution, can be broadly applied to investigate phase transitions, chemical reactions, crystal growth, grain boundary dynamics, lattice expansion, and contraction in materials science, nanoscience, physics, and chemistry.”

Here’s a link to and a citation for the paper,

Grain rotation and lattice deformation during photoinduced chemical reactions revealed by in situ X-ray nanodiffraction by Zhifeng Huang, Matthias Bartels, Rui Xu, Markus Osterhoff, Sebastian Kalbfleisch, Michael Sprung, Akihiro Suzuki, Yukio Takahashi, Thomas N. Blanton, Tim Salditt, & Jianwei Miao. Nature Materials (2015) doi:10.1038/nmat4311 Published online 08 June 2015

This paper is behind a paywall.

Combining the best qualities of batteries and supercapacitors at the University of California at Los Angeles (UCLA)

There’s a reason why I’ve been feeling impatient about batteries and supercapacitors according to an April 2, 2015 news item on Nanowerk,

The dramatic rise of smartphones, tablets, laptops and other personal and portable electronics has brought battery technology to the forefront of electronics research. Even as devices have improved by leaps and bounds, the slow pace of battery development has held back technological progress.

Now, researchers at UCLA’s California NanoSystems Institute have successfully combined two nanomaterials to create a new energy storage medium that combines the best qualities of batteries and supercapacitors.

An April 1, 2015 UCLA news release, which originated the news item, describes the challenge and how the scientists addressed it (Note: A link has been removed),

Supercapacitors are electrochemical components that can charge in seconds rather than hours and can be used for 1 million recharge cycles. Unlike batteries, however, they do not store enough energy to run our computers and smartphones.

The new hybrid supercapacitor stores large amounts of energy, recharges quickly and can last for more than 10,000 recharge cycles. The CNSI scientists also created a microsupercapacitor that is small enough to fit in wearable or implantable devices. Just one-fifth the thickness of a sheet of paper, it is capable of holding more than twice as much charge as a typical thin-film lithium battery.

The study, led by Richard Kaner, distinguished professor of chemistry and biochemistry and materials science and engineering, and Maher El-Kady, a postdoctoral scholar, was published in the Proceedings of the National Academy of Sciences.

“The microsupercapacitor is a new evolving configuration, a very small rechargeable power source with a much higher capacity than previous lithium thin-film microbatteries,” El-Kady said.

The new components combine laser-scribed graphene, or LSG — a material that can hold an electrical charge, is very conductive, and charges and recharges very quickly — with manganese dioxide, which is currently used in alkaline batteries because it holds a lot of charge and is cheap and plentiful. They can be fabricated without the need for extreme temperatures or the expensive “dry rooms” required to produce today’s supercapacitors.

“Let’s say you wanted to put a small amount of electrical current into an adhesive bandage for drug release or healing assistance technology,” Kaner said. “The microsupercapacitor is so thin you could put it inside the bandage to supply the current. You could also recharge it quickly and use it for a very long time.”

The researchers found that the supercapacitor could quickly store electrical charge generated by a solar cell during the day, hold the charge until evening and then power an LED overnight, showing promise for off-grid street lighting.

“The LSG–manganese-dioxide capacitors can store as much electrical charge as a lead acid battery, yet can be recharged in seconds, and they store about six times the capacity of state-of-the-art commercially available supercapacitors,” Kaner said. “This scalable approach for fabricating compact, reliable, energy-dense supercapacitors shows a great deal of promise in real-world applications, and we’re very excited about the possibilities for greatly improving personal electronics technology in the near future.”
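For a sense of the arithmetic behind "recharged in seconds," the energy held by any capacitor is E = ½CV². The sketch below uses made-up numbers — a hypothetical 100-farad cell at 2.7 volts charged at 50 watts — not figures from the UCLA paper:

```python
def capacitor_energy_joules(capacitance_farads, voltage):
    # Energy stored in a capacitor: E = 1/2 * C * V^2
    return 0.5 * capacitance_farads * voltage ** 2

def charge_time_seconds(energy_joules, charge_power_watts):
    # Idealized charge time at constant power (losses ignored)
    return energy_joules / charge_power_watts

e = capacitor_energy_joules(100.0, 2.7)       # 364.5 J
t = charge_time_seconds(e, charge_power_watts=50.0)
print(round(e, 1), round(t, 2))  # 364.5 J charged in 7.29 s
```

Batteries store far more energy per gram, but their charge time is limited by slow electrochemical reactions rather than by this simple power budget — which is the gap the hybrid supercapacitor aims to close.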

Here’s a link to and a citation for the paper,

Engineering three-dimensional hybrid supercapacitors and microsupercapacitors for high-performance integrated energy storage by Maher F. El-Kady, Melanie Ihns, Mengping Li, Jee Youn Hwang, Mir F. Mousavi, Lindsay Chaney, Andrew T. Lech, and Richard B. Kaner. Published online before print March 23, 2015, doi: 10.1073/pnas.1420398112 PNAS March 23, 2015

This paper is behind a paywall.

One last bit, Dexter Johnson in an April 3, 2015 post on his Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website) provides some insight into the research,

The story of graphene in supercapacitors can be represented by the old adage: its greatest strength is its greatest weakness. Of course, the name of the game in supercapacitor energy density is surface area. The greater the surface area, the greater number of ions you can store on the electrodes. While graphene has a theoretical surface area of 2630 square meters per gram, this density is only possible with a single, standalone graphene sheet.

But you can’t actually use a standalone sheet for the electrode of a supercapacitor because it will result in a very low volumetric capacitance. ….

So, while the 2-D characteristic of graphene may limit its usable surface area for supercapacitors, it does offer a way to make supercapacitors with small dimensions, something that would be impossible with activated carbon.

It is this strength that the CNSI researchers are aiming to exploit in their supercapacitor, which is small enough to be used as a wearable or implantable device. …

I recommend reading Dexter’s post in its entirety.