Tag Archives: Michael Faraday

A solution to the problem of measuring nanoparticles

As you might expect from the US National Institute of Standards and Technology (NIST), this research concerns measurement techniques. From an August 15, 2019 news item on Nanowerk (Note: Links have been removed),

Tiny nanoparticles play a gargantuan role in modern life, even if most consumers are unaware of their presence. They provide essential ingredients in sunscreen lotions, prevent athlete’s foot fungus in socks, and fight microbes on bandages. They enhance the colors of popular candies and keep the powdered sugar on doughnuts powdery. They are even used in advanced drugs that target specific types of cells in cancer treatments.

When chemists analyze a sample, however, it is challenging to measure the sizes and quantities of these particles — which are often 100,000 times smaller than the thickness of a piece of paper. Technology offers many options for assessing nanoparticles, but experts have not reached a consensus on which technique is best.

In a new paper from the National Institute of Standards and Technology (NIST) and collaborating institutions, researchers have concluded that measuring the range of sizes in nanoparticles — instead of just the average particle size — is optimal for most applications.

An August 14, 2019 NIST news release (also received via email and on EurekAlert), which originated the news item, delves further into the research,

“It seems like a simple choice,” said NIST’s Elijah Petersen, the lead author of the paper, which was published today in Environmental Science: Nano. “But it can have a big impact on the outcome of your assessment.”

As with many measurement questions, precision is key. Exposure to a certain amount of some nanoparticles could have adverse effects. Pharmaceutical researchers often need exactitude to maximize a drug’s efficacy. And environmental scientists need to know, for example, how many nanoparticles of gold, silver or titanium could potentially cause a risk to organisms in soil or water.

Using more nanoparticles than needed in a product because of inconsistent measurements could also waste money for manufacturers.

Although they might sound ultramodern, nanoparticles are neither new nor based solely on high-tech manufacturing processes. A nanoparticle is really just a submicroscopic particle that measures less than 100 nanometers on at least one of its dimensions. It would be possible to place hundreds of thousands of them onto the head of a pin. They are exciting to researchers because many materials act differently at the nanometer scale than they do at larger scales, and nanoparticles can be made to do lots of useful things.

Nanoparticles have been in use since the days of ancient Mesopotamia [emphasis mine], when ceramic artists used extremely small bits of metal to decorate vases and other vessels. In fourth-century Rome, glass artisans ground metal into tiny particles to change the color of their wares under different lighting. These techniques were forgotten for a while but rediscovered in the 1600s by resourceful manufacturers for glassmaking [emphasis mine] again. Then, in the 1850s, scientist Michael Faraday extensively researched ways to use various kinds of wash mixes to change the performance of gold particles.

Modern nanoparticle research advanced quickly in the mid-20th century due to technological innovations in optics. Being able to see the individual particles and study their behavior expanded the possibilities for experimentation. The largest advances came, however, after experimental nanotechnology took off in the 1990s. Suddenly, the behavior of single particles of gold and many other substances could be closely examined and manipulated. Discoveries about the ways that small amounts of a substance would reflect light, absorb light, or change in behavior were numerous, leading to the incorporation of nanoparticles into many more products.

Debates have since followed about their measurement. When assessing the response of cells or organisms to nanoparticles, some researchers prefer measuring particle number concentrations (sometimes called PNCs by scientists). Many find PNCs challenging since extra formulas must be employed when determining the final measurement. Others prefer measuring mass or surface area concentrations.

PNCs are often used for characterizing metals in chemistry. The situation for nanoparticles is inherently more complex, however, than it is for dissolved organic or inorganic substances because unlike dissolved chemicals, nanoparticles can come in a wide variety of sizes and sometimes stick together when added to testing materials.

“If you have a dissolved chemical, it’s always going to have the same molecular formula, by definition,” Petersen says. “Nanoparticles don’t just have a certain number of atoms, however. Some will be 9 nanometers, some will be 11, some might be 18, and some might be 3.”

The problem is that each of those particles may be fulfilling an important role. While a simple estimate of particle number is perfectly fine for some industrial applications, therapeutic applications require much more robust measurement. In the case of cancer therapies, for example, each particle, no matter how big or small, may be delivering a needed antidote. And just as with any other kind of dosage, nanoparticle dosage must be exact in order to be safe and effective.

Using the range of particle sizes to calculate the PNC will be the most helpful approach in most cases, said Petersen. The size distribution doesn’t rely on a mean or an average but records the complete distribution of particle sizes, so that formulas can be used to determine how many particles are in a sample.
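To make the difference concrete, here’s a small back-of-the-envelope sketch in Python (my own illustration with assumed numbers, not a calculation from the paper). It converts a measured mass concentration of spherical gold nanoparticles into a particle number concentration two ways: once from the mean diameter alone and once from the full size distribution. Because particle volume scales with the cube of the diameter, the two estimates diverge whenever the size distribution is broad.

```python
# Hypothetical illustration (assumed values, not data from the paper):
# converting a measured mass concentration of spherical gold nanoparticles
# into a particle number concentration (PNC).
import numpy as np

rho = 19.3e3          # density of gold, kg/m^3
mass_conc = 1.0e-3    # assumed mass concentration, kg/m^3 (i.e., 1 mg/L)

# Assumed particle diameters in nm, e.g. as measured by single-particle ICP-MS or TEM
diameters_nm = np.array([3.0, 9.0, 9.0, 11.0, 11.0, 11.0, 18.0, 18.0, 25.0])
d = diameters_nm * 1e-9                 # convert to metres
volumes = (np.pi / 6.0) * d**3          # per-particle volume, spheres assumed

# PNC estimated from the mean diameter alone
pnc_mean_size = mass_conc / (rho * (np.pi / 6.0) * d.mean()**3)

# PNC estimated from the full size distribution (mean per-particle volume)
pnc_full_dist = mass_conc / (rho * volumes.mean())

print(f"PNC from mean size only:    {pnc_mean_size:.2e} particles per m^3")
print(f"PNC from full distribution: {pnc_full_dist:.2e} particles per m^3")
# Volume scales with d**3, so the mean-size shortcut overstates the particle
# count whenever the size distribution is broad.
```

Run with the assumed values above, the mean-size shortcut gives roughly 70% more particles than the distribution-based estimate, which is exactly the kind of discrepancy the paper warns about.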

But no matter which approach is used, researchers need to make note of it in their papers, for the sake of comparability with other studies. “Don’t assume that different approaches will give you the same result,” he said.

Petersen adds that he and his colleagues were surprised by how much the coatings on nanoparticles could impact measurement. Some coatings, he noted, can have a positive electrical charge, causing clumping.

Petersen worked in collaboration with researchers from federal laboratories in Switzerland, and with scientists from 3M who have previously made many nanoparticle measurements for use in industrial settings. Researchers from Switzerland, like those in much of the rest of Europe, are keen to learn more about measuring nanoparticles because PNCs are required in many regulatory situations. There hasn’t been much information on which techniques are best or most likely to yield precise results across many applications.

“Until now we didn’t even know if we could find agreement among labs about particle number concentrations,” Petersen says. “They are complex. But now we are beginning to see it can be done.”

I love the reference to glassmaking and ancient Mesopotamia. Getting back to current times, here’s a link to and a citation for the paper,

Determining what really counts: modeling and measuring nanoparticle number concentrations by Elijah J. Petersen, Antonio R. Montoro Bustos, Blaza Toman, Monique E. Johnson, Mark Ellefson, George C. Caceres, Anna Lena Neuer, Qilin Chan, Jonathan W. Kemling, Brian Mader, Karen Murphy and Matthias Roesslein. Environmental Science: Nano. Published August 14, 2019. DOI: 10.1039/c9en00462a

This paper is behind a paywall.

Faster diagnostics with nanoparticles and magnetic phenomenon discovered 170 years ago

A Jan. 19, 2017 news item on ScienceDaily announces some new research from the University of Central Florida (UCF),

A UCF researcher has combined cutting-edge nanoscience with a magnetic phenomenon discovered more than 170 years ago to create a method for speedy medical tests.

The discovery, if commercialized, could lead to faster test results for HIV, Lyme disease, syphilis, rotavirus and other infectious conditions.

“I see no reason why a variation of this technique couldn’t be in every hospital throughout the world,” said Shawn Putnam, an assistant professor in the University of Central Florida’s College of Engineering & Computer Science.

A Jan. 19, 2017 UCF news release by Mark Schlueb, which originated the news item, provides more technical detail,

At the core of the research recently published in the academic journal Small are nanoparticles – tiny particles measured in billionths of a meter. Putnam’s team coated nanoparticles with the antibody to BSA, or bovine serum albumin, which is commonly used as the basis of a variety of diagnostic tests.

By mixing the nanoparticles in a test solution – such as one used for a blood test – the BSA proteins preferentially bind with the antibodies that coat the nanoparticles, like a lock and key.

That reaction was already well known. But Putnam’s team came up with a novel way of measuring the quantity of proteins present. He used nanoparticles with an iron core and applied a magnetic field to the solution, causing the particles to align in a particular formation. As proteins bind to the antibody-coated particles, the rotation of the particles becomes sluggish, which is easy to detect with laser optics.

The interaction of a magnetic field and light is known as Faraday rotation, a principle discovered by scientist Michael Faraday in 1845. Putnam adapted it for biological use.
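For readers curious about the underlying physics, the classic Faraday rotation relation is θ = V·B·L: the polarization of light rotates by an angle equal to the Verdet constant of the medium times the magnetic flux density times the optical path length. The Python sketch below simply evaluates that relation with assumed, illustrative numbers; it is not the frequency-domain analysis used in the Small paper.

```python
import math

def faraday_rotation_deg(verdet_rad_per_T_m, b_field_T, path_length_m):
    """Polarization rotation angle in degrees from the Faraday relation
    theta = V * B * L."""
    theta_rad = verdet_rad_per_T_m * b_field_T * path_length_m
    return math.degrees(theta_rad)

# Assumed, illustrative values (not from the paper): a Verdet constant of
# 100 rad/(T*m) for the nanoparticle suspension, a 0.1 T applied field and
# a 1 cm optical path.
print(f"{faraday_rotation_deg(100.0, 0.1, 0.01):.2f} degrees")  # ~5.73 degrees
```

The measurable quantity in such a scheme is how this rotation changes as protein binding slows the particles’ response to the field, which is what Putnam’s team tracked optically.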

“It’s an old theory, but no one has actually applied this aspect of it,” he said.

Other antigens and their unique antibodies could be substituted for the BSA protein used in the research, allowing medical tests for a wide array of infectious diseases.

The proof of concept shows the method could be used to produce biochemical immunology test results in as little as 15 minutes, compared to several hours for ELISA, or enzyme-linked immunosorbent assay, which is currently a standard approach for biomolecule detection.

Here’s a link to and a citation for the paper,

High-Throughput, Protein-Targeted Biomolecular Detection Using Frequency-Domain Faraday Rotation Spectroscopy by Richard J. Murdock, Shawn A. Putnam, Soumen Das, Ankur Gupta, Elyse D. Z. Chase, and Sudipta Seal. Small. DOI: 10.1002/smll.201602862. Version of Record online: 16 JAN 2017

© 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim

This paper is behind a paywall.

How does ice melt? Layer by layer!

A Dec. 12, 2016 news item on ScienceDaily announces the answer to a problem scientists have been investigating for over a century, but first, here are the questions,

We all know that ice melts at 0°C. However, 150 years ago the famous physicist Michael Faraday discovered that at the surface of frozen ice, well below 0°C, a thin film of liquid-like water is present. This thin film makes ice slippery and is crucial for the motion of glaciers.

Since Faraday’s discovery, the properties of this water-like layer have been a research topic for scientists all over the world, and one that has entailed considerable controversy: at what temperature does the surface become liquid-like? How does the thickness of the layer depend on temperature? Does it increase continuously or stepwise? Experiments to date have generally shown a very thin layer that grows continuously in thickness up to 45 nm just below the bulk melting point at 0°C. This also illustrates why it has been so challenging to study this layer of liquid-like water on ice: 45 nm is about 1/1000th the thickness of a human hair and is not discernible by eye.

Scientists of the Max Planck Institute for Polymer Research (MPI-P), in collaboration with researchers from the Netherlands, the USA and Japan, have succeeded in studying the properties of this quasi-liquid layer on ice at the molecular level using advanced surface-specific spectroscopy and computer simulations. The results are published in the latest edition of the scientific journal Proceedings of the National Academy of Sciences (PNAS).

Caption: Ice melts layer by layer, as described in the text. Credit: © MPIP

A Dec. 12, 2016 Max Planck Institute for Polymer Research press release (also on EurekAlert), which originated the news item, goes on to answer the questions,

The team of scientists around Ellen Backus, group leader at MPI-P, investigated how the thin liquid layer is formed on ice, how it grows with increasing temperature, and whether it is distinguishable from normal liquid water. These studies required well-defined ice crystal surfaces. Therefore, much effort was put into creating single crystals of ice roughly 10 cm in size, which could be cut in such a way that the surface structure was precisely known. To investigate whether the surface was solid or liquid, the team made use of the fact that water molecules in the liquid interact more weakly with each other than water molecules in ice. Using their interfacial spectroscopy, combined with controlled heating of the ice crystal, the researchers were able to quantify the change in the interaction between water molecules directly at the interface between ice and air.

The experimental results, combined with the simulations, showed that the first molecular layer at the ice surface has already melted at temperatures as low as -38 °C (235 K), the lowest temperature the researchers could experimentally investigate. Increasing the temperature to -16 °C (257 K), the second layer becomes liquid. Contrary to popular belief, the surface melting of ice is not a continuous process, but occurs in a discontinuous, layer-by-layer fashion.

“A further important question for us was whether one could distinguish between the properties of the quasi-liquid layer and those of normal water,” says Mischa Bonn, co-author of the paper and director at the MPI-P. And indeed, the quasi-liquid layer at -4 °C (269 K) shows a different spectroscopic response than supercooled water at the same temperature; in the quasi-liquid layer, the water molecules seem to interact more strongly than in liquid water.

The results are not only important for a fundamental understanding of ice, but also for climate science, where much research takes place on catalytic reactions on ice surfaces, for which the understanding of the ice surface structure is crucial.

Here’s a link to and a citation for the paper,

Experimental and theoretical evidence for bilayer-by-bilayer surface melting of crystalline ice by M. Alejandra Sánchez, Tanja Kling, Tatsuya Ishiyama, Marc-Jan van Zadel, Patrick J. Bisson, Markus Mezger, Mara N. Jochum, Jenée D. Cyran, Wilbert J. Smit, Huib J. Bakker, Mary Jane Shultz, Akihiro Morita, Davide Donadio, Yuki Nagata, Mischa Bonn, and Ellen H. G. Backus. Proceedings of the National Academy of Sciences, 2016. DOI: 10.1073/pnas.1612893114. Published online before print December 12, 2016

This paper appears to be open access.

Scientific Christmases in the 19th century

Rupert Cole has written about how an interest in science revived the celebration of Christmas in early 19th century Britain in a Dec. 14, 2012 posting on the Guardian science blogs (Note: I have removed links),

In the first few decades of the 19th century, Christmas was a rather rarefied tradition, kept alive by the nostalgia of poets and antiquarians. Romantically inclined writers such as William B Sandys and Thomas Kibble Hervey feared for the end of “Old Christmas” – the age, they lamented, had become too philosophic, too utilitarian and too refined for boozy wassail bowls, feudal feasts and Lords of Misrule.

On the eve of the Victorian era, however, Christmas underwent a transformation, becoming a popular festival once again – reinvented for the modern age. And as science was reaching unprecedented levels of popularity around the same time, the two cultures overlapped.

Publications like The Illustrated London News and The Leisure Hour printed Christmas essays, stories and poems that celebrated scientific progress. Christmas books and annuals included experiments for children. Newspapers ran adverts for “scientific Christmas presents” and articles describing “Christmas scientific recreations”.

Here’s a great description of a scientific pantomime,

By 1848, festive science was all the rage. That year, the Victoria Theatre staged one of the most sensational and oversubscribed pantomimes of the decade. E L Blanchard’s Land of Light, or Harlequin Gas and the Four Elements made “Science” the personified hero.

The opening scene takes place in a “goblin coal mine” 5,000 miles beneath the surface of the Earth, where an unhappy troop of fairies bemoan their banishment from the science-enamoured society above. The character Science arrives, challenging the fairies to a contest of traditional panto magic.

Science steals the show by combusting a slab of coal. The stage directions at this point indicate that the player Gas appears from the coal “with flame upon his head”. And to further perturb even the most hardened health-and-safety enthusiast, the scene’s magical finale consists of a “magnificent temple” of artificial light, fuelled by a selection of intensely bright (and extremely explosive) gases in use at the time – Budelight, limelight and camphine.

Pantomime became an exclusively Christmas tradition during the Victorian era, but it was much more politically edgy, witty and spectacular than the best of today’s efforts – which tend to rely on the fame and acting abilities of soap stars.

Cole goes on to describe extraordinary science Christmas-themed exhibitions and mentions that even Prince Albert (Queen Victoria’s hubby) made a point of attending one of the myriad science-themed Christmas events of the day.

Prince Albert and the ‘royal children’ attend Michael Faraday’s 1855 Christmas Lecture, ‘The Distinctive Properties of the Common Metals’. Image: Illustrated London News archive (accessed from http://www.guardian.co.uk/science/blog/2012/dec/14/science-christmas-victorian-romance)

There’s more detail and more illustrations in Cole’s piece, which ends with this,

The first of three Royal Institution Christmas Lectures, Air: The Elixir of Life, will be broadcast on BBC Four on 26 December.