Michael Berger at Nanowerk writes about the importance of determining uncertainty in his Nov. 11, 2011 article, A framework to evaluate the uncertainties of AFM nanomechanical measurements. It may seem oxymoronic to try to evaluate uncertainty, but it's done all the time. Take, for example, a political poll where they tell you how accurate it is likely to be, "19 times out of 20." For another example, there's statistical significance (the p value) when analyzing data. Here's a brief description of the p value from GraphPad,
Definition of a P value
Consider an experiment where you’ve measured values in two samples, and the means are different. How sure are you that the population means are different as well? There are two possibilities:
- The populations have different means.
- The populations have the same mean, and the difference you observed is a coincidence of random sampling.
The P value is a probability, with a value ranging from zero to one. It is the answer to this question: If the populations really have the same mean overall, what is the probability that random sampling would lead to a difference between sample means as large (or larger) than you observed?
Many people misunderstand what question a P value answers.
If the P value is 0.03, that means that there is a 3% chance of observing a difference as large as you observed even if the two population means are identical. It is tempting to conclude, therefore, that there is a 97% chance that the difference you observed reflects a real difference between populations and a 3% chance that the difference is due to chance. Wrong. What you can say is that random sampling from identical populations would lead to a difference smaller than you observed in 97% of experiments and larger than you observed in 3% of experiments.
You have to choose. Would you rather believe in a 3% coincidence? Or that the population means are really different?
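GraphPad's question, "what is the probability that random sampling would lead to a difference as large (or larger) than you observed?", can be answered directly by simulation. The sketch below uses a permutation test on two made-up samples (the data values are purely illustrative, not from any source cited here): if the two populations really were identical, the sample labels would be arbitrary, so shuffling them many times shows how often chance alone produces a difference at least as large as the observed one.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical samples, for illustration only
a = np.array([5.1, 4.9, 5.6, 5.8, 5.3, 5.4])
b = np.array([5.9, 6.1, 5.7, 6.3, 6.0, 5.8])
observed = abs(a.mean() - b.mean())

# Permutation test: under the null hypothesis the group labels are
# arbitrary, so relabel at random and count how often the shuffled
# difference in means is at least as large as the observed one.
pooled = np.concatenate([a, b])
n_perm = 10_000
count = 0
for _ in range(n_perm):
    rng.shuffle(pooled)
    diff = abs(pooled[:len(a)].mean() - pooled[len(a):].mean())
    if diff >= observed:
        count += 1

p_value = count / n_perm
print(f"observed difference: {observed:.3f}, p = {p_value:.4f}")
```

The resulting fraction is exactly the probability GraphPad describes: it says nothing about how likely the populations are to differ, only how surprising the observed difference would be under identical populations.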
In other words, which explanation are you more certain of? Getting back to nanotechnology, there's this from Berger's article,
“The atomic force microscope is used extensively for measuring the material properties of nanomaterials with nanometer resolution; unfortunately there is a lack of standards and uncertainty quantification in these measurements,” explain Robert Moon, an Adjunct Assistant Professor of Materials Engineering, and Arvind Raman, Professor of Mechanical Engineering, both at Purdue University. “Other fields, such as six sigma standards in industry and beam corrections in scanning electron microscopy, have developed thorough methods for quantifying the uncertainty in a given measurement, model, or system. Broadly speaking these methods can be classified as uncertainty quantification. Without applying the methods of uncertainty quantification to AFM measurements it is impossible to say if the measurements are accurate within 5% or 100%.”
Moon and Raman at Purdue’s Birck Nanotechnology Center and collaborators at the National Institute of Standards and Technology (NIST) including Drs. Jon Pratt and Gordon Shaw, have now presented a framework to ascribe uncertainty to local nanomechanical properties of any nanoparticle or surface measured with the AFM by taking into account the main uncertainty sources inherent in such measurements.
“Our findings demonstrate the inherently large uncertainty associated with certain types of AFM material property measurements,” Ryan Wagner, a graduate student in Raman’s group at Purdue, and the paper’s first author, tells Nanowerk. “Specifically, force-displacement measurements of elastic modulus on thin, stiff samples are very uncertain because of poor indentation resolution. In addition, our work provides a general framework for evaluating uncertainty in force-displacement based elasticity measurements that is valid for all samples and AFMs.”
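To see why poor indentation resolution dominates the uncertainty, consider a toy Monte Carlo sketch (this is not the Purdue/NIST framework; the Hertz contact model, nominal values, and error bars below are illustrative assumptions): the reduced modulus scales as the indentation depth raised to the -3/2 power, so a modest relative error in a shallow indentation is amplified in the modulus estimate.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100_000

# Hypothetical nominal values and 1-sigma uncertainties
# (illustrative only; not taken from the paper discussed above)
F     = rng.normal(10e-9, 0.5e-9, N)   # applied force: 10 nN, 5% error
delta = rng.normal(2e-9, 0.4e-9, N)    # indentation: 2 nm, 20% error
R     = rng.normal(20e-9, 2e-9, N)     # tip radius: 20 nm, 10% error

# Hertz contact model for a spherical tip:
# reduced modulus E* = 3F / (4 * sqrt(R) * delta^(3/2))
E = 3 * F / (4 * np.sqrt(R) * delta**1.5)

mean, std = E.mean(), E.std()
print(f"E* = {mean/1e9:.2f} GPa, relative uncertainty = {std/mean*100:.0f}%")
```

Because the exponent on delta is 3/2, a 20% indentation error alone contributes roughly 30% relative uncertainty to the modulus, swamping the 5% force and 10% radius contributions, which is the behavior Wagner describes for thin, stiff samples where indentations are necessarily shallow.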
Berger’s article offers more details about the process of arriving at a framework for uncertainty and a link to the researchers’ paper.
Tags: A framework to evaluate the uncertainties of AFM nanomechanical measurements, Arvind Raman, Gordon Shaw, Jon Pratt, Michael Berger, NIST, P value, Purdue University, Robert Moon, Ryan Wagner, statistical data, US National Institute of Standards and Technology