Points to anyone who recognized the Cher reference and, for those who don’t, (from the Believe [Cher song] Wikipedia entry),
“Believe” is a song recorded by American singer Cher for her twenty-second album, Believe (1998), …
“Believe” departed from Cher’s pop rock style of the time for an upbeat dance-pop style. It featured a pioneering use of the audio processing software Auto-Tune to create a deliberate vocal distortion, which became known as the “Cher effect”. The lyrics describe empowerment and self-sufficiency after a painful breakup.
“Believe” reached number one in the United States, the United Kingdom, France, Germany, Australia, Canada, Ireland, Scotland, New Zealand, Spain, Italy, Greece, Norway, Sweden, Denmark, Belgium, Hungary, Switzerland, Poland and the Netherlands. It earned Cher a place in the Guinness Book of World Records as the oldest female solo artist to top the Billboard Hot 100 chart, and became the highest-selling single by a solo female artist in the United Kingdom. It is one of the bestselling singles, with sales of over 11 million copies worldwide. Reviewers praised its production and catchiness and named it one of Cher’s most important releases. It was nominated for the Grammy Award for Record of the Year and won Best Dance Recording.
… Scholars and academics noted the way in which Cher was able to re-invent herself and remain fresh and contemporary amidst the more teen pop-based music of the period. They also credited “Believe” for restoring Cher’s popularity and cementing her position as a pop culture icon.
At the end of this posting you’ll find a video of Adam Lambert transforming the song into a ballad and, in the process, moving Cher to tears as she sits among the group of recipients wearing their 2018 Kennedy Center Honors medals.
As for believing in science
A September 26, 2019 Annenberg Public Policy Center at the University of Pennsylvania news release (received via email) announces some research into science, communication, belief, and trust,
Public confidence in science has remained high and stable for years. But recent decades have seen incidents of scientific fraud and misconduct, failure to replicate key findings, and growth in the number of retractions – all of which may affect trust in science.
In an article published this week in PNAS, a group of leaders in science research, scholarship, and communication propose that to sustain a high level of trust in science, scientists must more effectively signal to each other and to the public that their work respects the norms of science.
The authors offer a variety of ways in which researchers and journals can communicate this – among them, greater transparency regarding data and methods; ways to certify the integrity of evidence; the disclosure of competing or relevant interests by authors; and wider adoption of tools such as checklists and badges to signal that studies have met accepted scientific standards.
“This absence [of clear signals] is problematic,” write the researchers, including Kathleen Hall Jamieson, professor of communication at the Annenberg School for Communication at the University of Pennsylvania and director of its Annenberg Public Policy Center (APPC). “Without clear signals, other scientists have difficulties ascertaining confidence in the work, and the press, policy makers, and the public at large may base trust decisions on inappropriate grounds, such as deeply held and irrational biases, nonscientific beliefs, and misdirection by conflicted stakeholders or malicious actors.”
“Signaling the trustworthiness of science,” published in the Proceedings of the National Academy of Sciences of the United States of America, was written by Jamieson along with Marcia McNutt, president of the National Academy of Sciences; Veronique Kiermer, executive editor, Public Library of Science (PLOS); and Richard Sever, assistant director, Cold Spring Harbor Laboratory, and co-founder, bioRxiv. The order of authorship was determined by a coin toss.
The American public and trust in science
The authors write that “science is trustworthy in part because it honors its norms.” These norms include a reliance on statistics; having conclusions that are supported by data; disinterestedness, as seen through the disclosure of potential competing interests; validation by peer review; and ethical treatment of research subjects and animals.
Adherence to these norms increases not just the reliability of the scientific findings, but the likelihood that the public will perceive science itself as reliable. The article cites a 2019 survey of 1,253 U.S. adults conducted by APPC, which found that in deciding whether to believe a scientific finding, 68% of those surveyed said it matters whether the scientists make their data available and are completely transparent about their methods. In addition, 63% said it matters whether the scientists involved in the study disclose the individuals and organizations that funded their work, and 55% said it matters whether the study has been published in a peer-reviewed journal (Fig. 1). For further details, see the article.
To support trust in science, the authors suggest that scientists need to communicate practices that reinforce the norms, among them:
Have scientists explain the evidence and process by which they came to reconsider and change their views on a scientific issue;
Have journals add links back and forward to retractions and expressions of editorial concern to more clearly indicate unreliability, and to studies that replicate findings to emphasize reliability;
Adopt nuanced signaling language that more clearly explains the process now often called “retraction,” using terms such as “voluntary withdrawal” and “withdrawal for cause” or “editorial withdrawal.” Similarly, replace “conflict of interest” with a more neutral, broadened term such as “relevant interest.”
The authors also call for steps to show that individual studies adhere to these norms, such as:
More refined and standardized checklists should be used by journals to show how they protect evidence-gathering and -reporting, including more detailed information about each author’s contributions to a manuscript;
Badges such as those used by the Center for Open Science should be more widely adopted to show, for instance, whether scientists on a study have preregistered their hypotheses and met standards for open data and open materials.
The recommendations were developed following an April 2018 conference, “Signals of Trust in Science Communication,” which was convened by Cold Spring Harbor Laboratory at the Banbury Center and organized by McNutt, Sever, and Jamieson.
“Science enjoys a relatively high level of public trust,” the authors write. “To sustain this valued commodity, in our increasingly polarized age, scientists and the custodians of science would do well to signal to other researchers and to the public and policy makers the ways in which they are safeguarding science’s norms and improving the practices that protect its integrity as a way of knowing.
“Embedding signals of trust in the reporting of individual studies can help researchers build their peers’ confidence in their work,” they continue. “Publishing platforms that rigorously apply these signals of trust can increase their standing as trustworthy vehicles. But beyond this peer-to-peer communication, the research community and its institutions also can signal to the public and policy makers that the scientific community itself actively protects the trustworthiness of its work.”
Comments about confidence and trust
I like the recommendations about more openness and clarity in science communication.
Here are my caveats. (1) This research and the recommendations seem oriented more to the science community and very serious science hobbyists. Most people outside that range (regardless of that survey, which I’d like to look at) are unlikely to hear of these changes, let alone express any interest or appreciation. (2) It’s also a little difficult to identify which sciences are included. E.g., what about the social sciences? (3) I don’t like this recommendation: “replace ‘conflict of interest’ with a more neutral, broadened term such as ‘relevant interest’.” An author who is being funded by a pharmaceutical company doesn’t have just a ‘relevant interest’.
Interestingly, there’s a May 24, 2019 paper (link and citation to follow), ‘Health Misinformation and the Power of Narrative Messaging in the Public Sphere’, covering some of the same ground about science, trust, and communication but with a primary focus on misinformation, health, narrative, and social media. The audience being considered here is the ‘general public’, not the specialized audiences envisioned by the authors of the first paper,
… The power of social media and the impact of narrative are prevalent and strong, so there is an imperative to strategically draw on their advantages to counter some of their more problematic applications. For example, research has shown that narratives presenting the ramifications of not vaccinating – specifically children’s suffering from preventable illness – can have a real impact on intention to vaccinate [Shelby and Ernst, 2013; Capurro, Greenberg, Dubé and Driedger, 2018]. Additionally, clear and definitive statements with a narrative component, made by respected and trusted voices will prove highly useful, and also provide dependable resources upon which journalists can rely.
Opinion editorials offer another useful pathway for narrative communication – indeed, recent research has found them to have an influence on public perception [Coppock, Ekins, and Kirby, 2018]. That said, science writing could also benefit from narrative style, if applied in a manner that does not compromise the truthfulness and comprehensiveness of the content [Perrault, 2013]. We shouldn’t use narratives to fight anecdote with anecdote. Rather, narratives can serve as a vehicle to communicate science and relevant science-informed policy in a more engaging and digestible manner. The spread of misinformation causes real harm. Unfortunately, countering this noise is growing increasingly more complex and challenging. It will require the use of a host of science communication tools and strategies, including the creative use of narratives.
This paper appears to be a roundup of documents supporting the authors’ perspectives, with no critique of their own ideas/perspectives offered. The journal has placed the paper in a section known as ‘Critical Commentaries’.
The approaches offered by the authors of these two papers are compatible with each other. As in all aspects of communication, much depends on your audience. And, a member of an expert audience can be a member of the general public in an area where (s)he has no particular expertise.
Here’s a link to and a citation for the first paper mentioned here,
Signaling the trustworthiness of science by Kathleen Hall Jamieson, Marcia McNutt, Veronique Kiermer, and Richard Sever. PNAS September 24, 2019 116 (39) 19231-19236; First published online: September 23, 2019 https://doi.org/10.1073/pnas.1913039116
Here’s a link and a citation for the second paper mentioned here,
Health Misinformation and the Power of Narrative Messaging in the Public Sphere by Timothy Caulfield, Alessandro R. Marcon, Blake Murdoch, Jasmine M. Brown, Sarah Tinker Perrault, Jonathan Jerry, Jeremy Snyder, Samantha J. Anthony, Stephanie Brooks, Zubin Master, Ubaka Ogbogu, Joshua Greenberg, Amy Zarzeczny, Robyn Hyde-Lay. Canadian Journal of Bioethics, Vol 2 No 2 (2019): Open Issue Published May 24, 2019 DOI: https://doi.org/10.7202/1060911ar
The first article appears to be open access while the second is definitely open access.
Quite the performance, eh? Adam Lambert turned the song into a ballad and even brought Cher to tears. I wonder what it would be like to hear it with the lyric changed to, ‘Do you believe in science?’