The Cultural Cognition Project is a group of scholars interested in studying how cultural values shape public risk perceptions and related policy beliefs. Cultural cognition refers to the tendency of individuals to conform their beliefs about disputed matters of fact (e.g., whether global warming is a serious threat; whether the death penalty deters murder; whether gun control makes society more safe or less) to values that define their cultural identities. Project members are using the methods of various disciplines — including social psychology, anthropology, communications, and political science — to chart the impact of this phenomenon and to identify the mechanisms through which it operates. The Project also has an explicit normative objective: to identify processes of democratic decisionmaking by which society can resolve culturally grounded differences in belief in a manner that is both congenial to persons of diverse cultural outlooks and consistent with sound public policymaking.
Disputes over science-related policy issues such as climate change or fracking often seem as intractable as other politically charged debates. But in science, at least, simple curiosity might help bridge that partisan divide, according to new research.
In a study slated for publication in the journal Advances in Political Psychology, a Yale-led research team found that people who are curious about science are less polarized in their views on contentious issues than less-curious peers.
In an experiment, they found out why: Science-curious individuals are more willing to engage with surprising information that runs counter to their political predispositions.
“It’s a well-established finding that most people prefer to read or otherwise be exposed to information that fits rather than challenges their political preconceptions,” said research team leader Dan Kahan, Elizabeth K. Dollard Professor of Law and professor of psychology at Yale Law School. “This is called the echo-chamber effect.”
But science-curious individuals are more likely to venture out of that chamber, he said.
“When they are offered the choice to read news articles that support their views or challenge them on the basis of new evidence, science-curious individuals opt for the challenging information,” Kahan said. “For them, surprising pieces of evidence are bright shiny objects — they can’t help but grab at them.”
Kahan and other social scientists previously have shown that information based on scientific evidence can actually intensify — rather than moderate — political polarization on contentious topics such as gun control, climate change, fracking, or the safety of certain vaccines. The new study, which assessed science knowledge among subjects, reiterates the gaping divide separating how conservatives and liberals view science.
Republicans and Democrats with limited knowledge of science were equally likely to agree or disagree with the statement that “there is solid evidence that global warming is caused by human activity.” However, the most science-literate conservatives were much more likely to disagree with the statement than less-knowledgeable peers. The most knowledgeable liberals almost universally agreed with the statement.
“Whatever measure of critical reasoning we used, we always observed this depressing pattern: The members of the public most able to make sense of scientific evidence are in fact the most polarized,” Kahan said.
But knowledge of science, and curiosity about science, are not the same thing, the study shows.
The team became interested in curiosity through an ongoing collaborative research project to improve public engagement with science documentaries, involving the Cultural Cognition Project at Yale Law School, the Annenberg Public Policy Center of the University of Pennsylvania, and Tangled Bank Studios at the Howard Hughes Medical Institute.
They noticed that the curious — those who sought out science stories for personal pleasure — not only were more interested in viewing science films on a variety of topics but also did not display political polarization associated with contentious science issues.
The new study found, for instance, that a much higher percentage of curious liberals and conservatives chose to read stories that ran counter to their political beliefs than did their non-curious peers.
“As their science curiosity goes up, the polarizing effects of higher science comprehension dissipate, and people move the same direction on contentious policies like climate change and fracking,” Kahan said.
It is unclear whether curiosity applied to other controversial issues can minimize the partisan rancor that infects other areas of society. But Kahan believes that the curious from both sides of the political and cultural divide should make good ambassadors to the more doctrinaire members of their own groups.
“Politically curious people are a resource who can promote enlightened self-government by sharing scientific information they are naturally inclined to learn and share,” he said.
Here’s my standard link to and citation for the paper,
Science Curiosity and Political Information Processing by Dan M. Kahan, Asheley R. Landrum, Katie Carpenter, Laura Helft, and Kathleen Hall Jamieson. Political Psychology, Volume 38, Issue Supplement S1, February 2017, Pages 179–199. DOI: 10.1111/pops.12396. First published: 26 January 2017.
This paper is open and it can also be accessed here.
I last mentioned Kahan and The Cultural Cognition Project in an April 10, 2014 posting (scroll down about 45% of the way) about responsible science.
Many scientists and science communicators have grappled with disregard for, or inappropriate use of, scientific evidence for years – especially around contentious issues like the causes of global warming, or the benefits of vaccinating children. A long-debunked study on links between vaccinations and autism, for instance, cost the researcher his medical license but continues to keep vaccination rates lower than they should be.
Only recently, however, have people begun to think systematically about what actually works to promote better public discourse and decision-making around what is sometimes controversial science. Of course scientists would like to rely on evidence, generated by research, to gain insights into how to most effectively convey to others what they know and do.
As it turns out, the science on how to best communicate science across different issues, social settings and audiences has not led to easy-to-follow, concrete recommendations.
About a year ago, the National Academies of Sciences, Engineering and Medicine brought together a diverse group of experts and practitioners to address this gap between research and practice. The goal was to apply scientific thinking to the process of how we go about communicating science effectively. Both of us were a part of this group (with Dietram as the vice chair).
The public draft of the group’s findings – “Communicating Science Effectively: A Research Agenda” – has just been published. In it, we take a hard look at what effective science communication means and why it’s important; what makes it so challenging – especially where the science is uncertain or contested; and how researchers and science communicators can increase our knowledge of what works, and under what conditions.
At some level, all science communication has embedded values. Information always comes wrapped in a complex skein of purpose and intent – even when presented as impartial scientific facts. Despite, or maybe because of, this complexity, there remains a need to develop a stronger empirical foundation for effective communication of and about science.
Addressing this, the National Academies draft report makes an extensive number of recommendations. A few in particular stand out:
Use a systems approach to guide science communication. In other words, recognize that science communication is part of a larger network of information and influences that affect what people and organizations think and do.
Assess the effectiveness of science communication. Yes, researchers try, but often we still engage in communication first and evaluate later. Better to design the best approach to communication based on empirical insights about both audiences and contexts. Very often, the technical risks that scientists think must be communicated have nothing to do with the hopes or concerns public audiences have.
Get better at meaningful engagement between scientists and others to enable that “honest, bidirectional dialogue” about the promises and pitfalls of science that our committee chair Alan Leshner and others have called for.
Consider social media’s impact – positive and negative.
Work toward better understanding when and how to communicate science around issues that are contentious, or potentially so.
The print version of the book costs money, but a free online version is available. Unfortunately, I cannot copy and paste the book’s table of contents here, and I was not able to find an index, although there is a handy list of reference texts.
I have taken a very quick look at the book. If you’re in the field, it’s definitely worth a look. It is, however, written for and by academics: if you look at the list of writers and reviewers, you will find over 90% are professors at one university or another. That said, I was happy to see Dan Kahan’s work at Yale Law School’s Cultural Cognition Project cited. As it happens, they weren’t able to cite his latest work [***see my xxx, 2017 curiosity post***], released about a month after “Communicating Science Effectively: A Research Agenda.”
I was unable to find any reference to science communication via popular culture. I’m a little dismayed, as I feel this is a source of information seriously ignored by science communication specialists and academicians, but not by the folks at MIT (Massachusetts Institute of Technology), who announced a wireless app the same week it was featured in an episode of the US television comedy, The Big Bang Theory. Here’s more about MIT’s emotion-detection wireless app from a Feb. 1, 2017 MIT news release (also on EurekAlert),
It’s a fact of nature that a single conversation can be interpreted in very different ways. For people with anxiety or conditions such as Asperger’s, this can make social situations extremely stressful. But what if there was a more objective way to measure and understand our interactions?
Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Institute of Medical Engineering and Science (IMES) say that they’ve gotten closer to a potential solution: an artificially intelligent, wearable system that can predict if a conversation is happy, sad, or neutral based on a person’s speech patterns and vitals.
“Imagine if, at the end of a conversation, you could rewind it and see the moments when the people around you felt the most anxious,” says graduate student Tuka Alhanai, who co-authored a related paper with PhD candidate Mohammad Ghassemi that they will present at next week’s Association for the Advancement of Artificial Intelligence (AAAI) conference in San Francisco. “Our work is a step in this direction, suggesting that we may not be that far away from a world where people can have an AI social coach right in their pocket.”
As a participant tells a story, the system can analyze audio, text transcriptions, and physiological signals to determine the overall tone of the story with 83 percent accuracy. Using deep-learning techniques, the system can also provide a “sentiment score” for specific five-second intervals within a conversation.
“As far as we know, this is the first experiment that collects both physical data and speech data in a passive but robust way, even while subjects are having natural, unstructured interactions,” says Ghassemi. “Our results show that it’s possible to classify the emotional tone of conversations in real-time.”
The researchers say that the system’s performance would be further improved by having multiple people in a conversation use it on their smartwatches, creating more data to be analyzed by their algorithms. The team is keen to point out that they developed the system with privacy strongly in mind: The algorithm runs locally on a user’s device as a way of protecting personal information. (Alhanai says that a consumer version would obviously need clear protocols for getting consent from the people involved in the conversations.)
How it works
Many emotion-detection studies show participants “happy” and “sad” videos, or ask them to artificially act out specific emotive states. But in an effort to elicit more organic emotions, the team instead asked subjects to tell a happy or sad story of their own choosing.
Subjects wore a Samsung Simband, a research device that captures high-resolution physiological waveforms to measure features such as movement, heart rate, blood pressure, blood flow, and skin temperature. The system also captured audio data and text transcripts to analyze the speaker’s tone, pitch, energy, and vocabulary.
“The team’s usage of consumer market devices for collecting physiological data and speech data shows how close we are to having such tools in everyday devices,” says Björn Schuller, professor and chair of Complex and Intelligent Systems at the University of Passau in Germany, who was not involved in the research. “Technology could soon feel much more emotionally intelligent, or even ‘emotional’ itself.”
After capturing 31 different conversations of several minutes each, the team trained two algorithms on the data: One classified the overall nature of a conversation as either happy or sad, while the second classified each five-second block of every conversation as positive, negative, or neutral.
Alhanai notes that, in traditional neural networks, all features about the data are provided to the algorithm at the base of the network. In contrast, her team found that they could improve performance by organizing different features at the various layers of the network.
“The system picks up on how, for example, the sentiment in the text transcription was more abstract than the raw accelerometer data,” says Alhanai. “It’s quite remarkable that a machine could approximate how we humans perceive these interactions, without significant input from us as researchers.”
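The layered-feature idea Alhanai describes can be sketched in code. The toy network below is an illustrative reconstruction, not the MIT team's actual architecture: all dimensions, weights, and the choice to feed low-level physiological signals into early layers while injecting more abstract text features at a deeper layer are my assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

# Hypothetical inputs for one 5-second window: 16 low-level
# physiological features (accelerometer, heart rate, ...) and
# 8 higher-level text features (e.g., transcript sentiment).
phys = rng.normal(size=16)
text = rng.normal(size=8)

# Early layer sees only the raw physiological signal.
W1 = rng.normal(size=(32, 16)) * 0.1
h1 = relu(W1 @ phys)

# The more abstract text features join at a deeper layer,
# fused with the learned physiological representation.
W2 = rng.normal(size=(16, 32 + 8)) * 0.1
h2 = relu(W2 @ np.concatenate([h1, text]))

# Output layer: softmax over 3 classes (positive/neutral/negative).
W3 = rng.normal(size=(3, 16)) * 0.1
logits = W3 @ h2
probs = np.exp(logits - logits.max())
probs /= probs.sum()
print(probs)
```

The design choice being illustrated is simply that features enter the network at the layer matching their level of abstraction, rather than all being concatenated at the input.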
Indeed, the algorithm’s findings align well with what we humans might expect to observe. For instance, long pauses and monotonous vocal tones were associated with sadder stories, while more energetic, varied speech patterns were associated with happier ones. In terms of body language, sadder stories were also strongly associated with increased fidgeting and cardiovascular activity, as well as certain postures like putting one’s hands on one’s face.
On average, the model could classify the mood of each five-second interval with an accuracy that was approximately 18 percent above chance, and a full 7.5 percent better than existing approaches.
The algorithm is not yet reliable enough to be deployed for social coaching, but Alhanai says that they are actively working toward that goal. For future work the team plans to collect data on a much larger scale, potentially using commercial devices such as the Apple Watch that would allow them to more easily implement the system out in the world.
“Our next step is to improve the algorithm’s emotional granularity so that it is more accurate at calling out boring, tense, and excited moments, rather than just labeling interactions as ‘positive’ or ‘negative,’” says Alhanai. “Developing technology that can take the pulse of human emotions has the potential to dramatically improve how we communicate with each other.”
This research was made possible in part by the Samsung Strategy and Innovation Center.
Episode 14 of season 10 of The Big Bang Theory was titled “The Emotion Detection Automation” (full episode can be found on this webpage) and broadcast on Feb. 2, 2017. There’s also a Feb. 2, 2017 recap (recapitulation) by Lincee Ray for EW.com (it seems Ray is unaware that there really is such a machine),
Who knew we would see the day when Sheldon and Raj figured out solutions for their social ineptitudes? Only The Big Bang Theory writers would think to tackle our favorite physicists’ lack of social skills with an emotion detector and an ex-girlfriend focus group. It’s been a while since I enjoyed both storylines as much as I did in this episode. That’s no bazinga.
When Raj tells the guys that he is back on the market, he wonders out loud what is wrong with his game. Why do women reject him? Sheldon receives the information like a scientist and runs through many possible answers. Raj shuts him down with a simple, “I’m fine.”
Sheldon is irritated when he learns that this obligatory remark is a mask for what Raj is really feeling. It turns out, Raj is not fine. Sheldon whines, wondering why no one just says exactly what’s on their mind. It’s quite annoying for those who struggle with recognizing emotional cues.
Lo and behold, Bernadette recently read about a gizmo that was created for people who have this exact same anxiety. MIT has a prototype, and because Howard is an alum, he can probably submit Sheldon’s name as a beta tester.
Of course this is a real thing. If anyone can build an emotion detector, it’s a bunch of awkward scientists with zero social skills.
This is the first time I’ve noticed an academic institution’s news release appearing almost simultaneously with a mention of its research in a popular culture television program. It suggests things have come a long way since my Aug. 28, 2012 post featuring news about a webinar by the National Academies’ Science and Entertainment Exchange for film and television productions collaborating with scientists.
One last science/popular culture moment: Hidden Figures, a movie about the African American women who were human computers supporting NASA (US National Aeronautics and Space Administration) efforts during the 1960s space race to get a man on the moon, was (shockingly) no. 1 at the US box office for a few weeks (there’s more about the movie in my Sept. 2, 2016 post covering then-upcoming movies featuring science). After the movie was released, Mary Elizabeth Williams wrote up a Jan. 23, 2017 interview with the ‘Hidden Figures’ scriptwriter for Salon.com,
I [Allison Schroeder] got on the phone with her [co-producer Renee Witt] and Donna [co-producer Donna Gigliotti] and I said, “You have to hire me for this; I was born to write this.” Donna sort of rolled her eyes and was like, “God, these Hollywood types would say anything.” I said, “No, no, I grew up at Cape Canaveral. My grandmother was a computer programmer at NASA, my grandfather worked on the Mercury prototype, and I interned there all through high school and then the summer after my freshman year at Stanford I interned. I worked at a missile launch company.”
She was like, “OK that’s impressive.” And I said, “No, I literally grew up climbing on the Mercury capsule — hitting all the buttons, trying to launch myself into space.”
She said, “Well, do you think you can handle the math?” I said that I had to study a certain amount of math at Stanford for my economics degree. She said, “Oh, all right, that sounds pretty good.”
I pitched her a few scenes. I pitched her the end of the movie that you saw with Katherine running the numbers as John Glenn is trying to get up in space. I pitched her the idea of one of the women as a mechanic and to see her legs underneath the engine. You’re used to seeing a guy like that, but what would it be like to see heels and pantyhose and a skirt and she’s a mechanic and fixing something? Those are some of the scenes that I pitched them, and I got the job.
I love that the film begins with setting up their mechanical aptitude. You set up these are women; you set up these women of color. You set up exactly what that means in this moment in history. It’s like you just go from there.
I was on a really tight timeline because this started as an indie film. It was just Donna Gigliotti, Renee Witt, me and the author Margot Lee Shetterly for about a year working on it. I was only given four weeks for research and 12 weeks for writing the first draft. I’m not sure if I hadn’t known NASA and known the culture and just knew what the machines would look like, knew what the prototypes looked like, if I could have done it that quickly. I turned in that draft and Donna was like, “OK you’ve got the math and the science; it’s all here. Now go have fun.” Then I did a few more drafts and that was really enjoyable because I could let go of the fact I did it and make sure that the characters and the drive of the story and everything just fit what needed to happen.
For anyone interested in the science/popular culture connection, David Bruggeman of the Pasco Phronesis blog does a better job than I do of keeping up with the latest doings.
Getting back to ‘Communicating Science Effectively: A Research Agenda’, even with a mention of popular culture, it is a thoughtful book on the topic.
A Danish researcher has published work which suggests that while most scientists believe in conducting science research responsibly, they have multiple and, at times, conflicting definitions for what they mean by responsible. From an April 8, 2014 news item on phys.org,
Responsible research has been put firmly on the political agenda with, for instance, EU’s Horizon 2020 programme in which all research projects must show how they contribute responsibly to society. New research from the University of Copenhagen reveals that the scientists themselves place great emphasis on behaving responsibly; they just disagree on what social responsibility in science entails. Responsibility is, in other words, a matter of perspective.
“We have, on the one hand, scientists who are convinced that they should be left alone in their ivory tower and that neither politicians nor the general public should interfere with their research activities. In their eyes, the key to conducting responsible science is to protect it from external interest because that will introduce harmful biases. Science should therefore be completely independent and self-regulated in order to be responsible,” says communication researcher Maja Horst from the University of Copenhagen. She continues:
“But, on the other hand, there are scientists who believe that the ivory tower should have an open door so that politicians, publics and industry can take part in the development of science. Such engagement is seen as the only way to ensure that science develops in accordance with the needs and values of society, and thereby fulfils its social responsibilities.”
In collaboration with PhD student Cecilie Glerup from Copenhagen Business School, Maja Horst has analysed more than 250 scientific journal articles all concerned with the role of research in society and particularly the notion of responsibility. The results of their analyses have just been published in the Journal of Responsible Innovation.
“We can conclude that all the scientists are deeply concerned that their research is responsible and useful to society; they just disagree about what it means to conduct responsible research – how transparent the ivory tower should be, if you like. This is a problem because if we have different definitions of what it means to be a responsible scientist, it becomes very difficult to have a fruitful discussion about it. It also makes it very difficult to be specific about how we want scientists to act in order to be responsible.”
Researcher Horst goes on to cite the GMO (genetically modified organisms) debate as an example of why scientists should be more ‘responsible’ and communicative (from the news release),
According to Maja Horst, discussions about responsible research are particularly important in light of the fact that entire research areas may be at risk if they are perceived to be irresponsible or controversial.
“Scientists within stem cell research, nanotechnology, or synthetic biology, which is about designing biological organisms, pay a great deal of attention to the way in which their research area is presented in the media and perceived by the public. They all saw how the GMO debate – about research into genetically modified organisms – ended in a deadlock that had serious consequences for robust research projects which simply could not attract funding. No one wanted to be associated with GMO research after the heated debates”, explains Maja Horst and adds:
“With a more balanced discussion of the role and responsibilities of scientists and society, we might have avoided the extremely rigid positions, for or against GMO, which dominated the debate.”
Horst’s last quote suggests that somehow more discussion will help us avoid controversy and rigid positions vis à vis science. In 2009, I wrote a series of posts about public engagement/discussion and my sense that this process is often considered a prophylactic treatment [January 14, 2009: Public understanding/engagement of nanotechnology in Canada?; January 15, 2009: Public understanding of science projects as prophylactic treatments; January 16, 2009: The unpredictability of ‘frankenfoods’; and January 19, 2009: ‘Magic nano’ and whistling in the dark] to help scientists and policymakers avoid controversy. Quick summary: while I strongly support outreach and discussion, I don’t believe these activities will help us avoid all controversy or, more to the point, public panics such as the one in the UK and Europe concerning GMO. While Canadian scientists felt a ‘chill’ regarding GMO and stem cell research (there was controversy and public panic, particularly in the US), the public discourse in Canada never reached the heights of public distress suffered elsewhere.
A colleague (Mairi Welman, District of North Vancouver*) recently (April 7, 2014) sent me a link to an article relevant to this discussion about controversy and science. It’s an interview with Dan Kahan about his work at Yale Law School’s Cultural Cognition Project (mentioned somewhere in my 2009 posts), where they examined the means by which people develop their attitudes to emerging science and technology. Interestingly, giving people more information doesn’t necessarily have an impact on their positions once those positions have been adopted. From the April 6, 2014 article by Ezra Klein for Vox.com (this is very much concerned with US politics and the public discourse in Washington, DC, but many of the ideas seem applicable elsewhere),
There’s a simple theory underlying much of American politics. It sits hopefully at the base of almost every speech, every op-ed, every article, and every panel discussion. It courses through the Constitution and is a constant in President Obama’s most stirring addresses. It’s what we might call the More Information Hypothesis: the belief that many of our most bitter political battles are mere misunderstandings. The cause of these misunderstandings? Too little information — be it about climate change, or taxes, or Iraq, or the budget deficit. If only the citizenry were more informed, the thinking goes, then there wouldn’t be all this fighting.
But the More Information Hypothesis isn’t just wrong. It’s backwards. Cutting-edge research shows that the more information partisans get, the deeper their disagreements become.
In April and May of 2013, Yale Law professor Dan Kahan — working with coauthors Ellen Peters, Erica Cantrell Dawson, and Paul Slovic — set out to test a question that continuously puzzles scientists: why isn’t good evidence more effective in resolving political debates? For instance, why doesn’t the mounting proof that climate change is a real threat persuade more skeptics?
The leading theory, Kahan and his coauthors wrote, is the Science Comprehension Thesis, which says the problem is that the public doesn’t know enough about science to judge the debate. It’s a version of the More Information Hypothesis: a smarter, better educated citizenry wouldn’t have all these problems reading the science and accepting its clear conclusion on climate change.
But Kahan and his team had an alternative hypothesis. Perhaps people aren’t held back by a lack of knowledge. After all, they don’t typically doubt the findings of oceanographers or the existence of other galaxies. Perhaps there are some kinds of debates where people don’t want to find the right answer so much as they want to win the argument. Perhaps humans reason for purposes other than finding the truth — purposes like increasing their standing in their community, or ensuring they don’t piss off the leaders of their tribe. If this hypothesis proved true, then a smarter, better-educated citizenry wouldn’t put an end to these disagreements. It would just mean the participants are better equipped to argue for their own side.
But Kahan and his coauthors also drafted a politicized version of the problem [in the study’s original version, subjects judged from a 2×2 table of patient outcomes whether a new skin cream reduced rashes]. This version used the same numbers as the skin-cream question, but instead of being about skin creams, the narrative set-up focused on a proposal to ban people from carrying concealed handguns in public. The 2×2 box now compared crime data in the cities that banned handguns against crime data in the cities that didn’t. In some cases, the numbers, properly calculated, showed that the ban had worked to cut crime. In others, the numbers showed it had failed.
Presented with this problem, a funny thing happened: how good subjects were at math stopped predicting how well they did on the test. Now it was ideology that drove the answers. Liberals were extremely good at solving the problem when doing so proved that gun-control legislation reduced crime. But when presented with the version of the problem that suggested gun control had failed, their math skills stopped mattering. They tended to get the problem wrong no matter how good they were at math. Conservatives exhibited the same pattern — just in reverse.
Being better at math didn’t just fail to help partisans converge on the right answer. It actually drove them further apart. Partisans with weak math skills were 25 percentage points likelier to get the answer right when it fit their ideology. Partisans with strong math skills were 45 percentage points likelier to get the answer right when it fit their ideology. The smarter the person is, the dumber politics can make them.
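The trap in the 2×2 problem is arithmetic, not just ideological: the raw counts mislead unless you compare proportions across rows. A small sketch with illustrative numbers (cell counts of the sort the study used; I'm not asserting these are the exact published figures) shows the calculation subjects needed to do:

```python
# Hypothetical 2x2 outcome table: cities that banned concealed
# carry vs. cities that did not, crime decreased vs. increased.
banned = {"down": 223, "up": 75}
no_ban = {"down": 107, "up": 21}

def frac_down(row):
    # Proportion of cities in this row where crime decreased.
    return row["down"] / (row["down"] + row["up"])

p_banned = frac_down(banned)   # 223/298 ~ 0.75
p_no_ban = frac_down(no_ban)   # 107/128 ~ 0.84

# The intuitive-but-wrong move is to compare raw "down" counts
# (223 > 107 seems to show the ban worked). The correct move
# compares rates, which here point the other way.
print(f"ban: {p_banned:.2f}, no ban: {p_no_ban:.2f}")
```

With these numbers, the larger raw count sits in the ban row, but the higher *rate* of crime decreases sits in the no-ban row, which is exactly the kind of covariance-detection step that partisans' numeracy stopped being applied to.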
Fascinating, non? It gets better as Klein pursues Kahan’s (and his colleagues’) hypotheses (there’s more in the article as Klein develops the thinking) to a logical end. (Neither challenges the notion of scientific expertise and whether or not scientists might be inclined to ‘misunderstand’ facts.) This article is an excellent introduction to Kahan’s work which he has kept refining since I first stumbled across it in 2007.
Thinking in terms of rigid positions, and of how we read science and deal with facts according to our political perspectives, is quite helpful. However, there’s another aspect (in addition to scientific expertise) that neither Klein nor Kahan addresses: how does any change occur? For example, if you look at the history of electricity, you’ll find it was hugely controversial. The end of the world was predicted, amongst other things, and I’m sure that advocates for either position (bad electricity/good electricity) processed information and facts in the same fashion as we do about our own controversial science.
In sum, I think Horst has pointed out a very interesting issue with the concept of ‘scientific responsibility’. Together with Kahan’s hypotheses about how we process information and why we might willfully blind ourselves to facts, it gets us a step closer to living more thoughtfully, though perhaps no less controversially, than before.
Here’s a link to and a citation for Horst’s paper,
Anne Fleischman has written a July 3, 2013 article (Pluie de science; Avis d’inexpert) for Québec’s Agence Science-Presse that focuses on the European nanotechnology dialogue project, Nanopinion and its efforts in France and elsewhere in Europe. I last mentioned Nanopinion in an April 23, 2013 posting concerning their sponsored initiative (combined advertising and editorial content?) in the UK’s Guardian newspaper,
Small World, a nanotechnology blog, was launched today (Tuesday, Apr. 23, 2013) on the UK’s Guardian newspaper science blogs network. Here’s more from the Introductory page,
Small World is a blog about new developments in nanotechnology funded by Nanopinion, a European Commission project. All the posts are commissioned by the Guardian, which has complete editorial control over the blog’s contents. The views expressed are those of the authors and not the EC.
This summer (2013), Nanopinion will be polling the French and other Europeans about their opinions on nanotechnology. From Fleischman’s article (my translation from the French),

This summer, almost everywhere in Europe, public opinion on nanotechnologies is being surveyed. People know nothing about them? Perhaps, but they certainly have something to say.

With the NANOPINION project, Europe is taking the bull by the horns: rather than waiting for a possible health scandal to splash onto the industry while traumatizing people about these oh-so-mysterious nanotechnologies, eleven Europeans have decided to sound out public opinion. The goal: to gather people’s immediate impressions.

“We’re not claiming to ask anyone to form a definitive opinion in five minutes. It’s about taking people’s pulse and making them aware that, even if they don’t know much about the subject to begin with, they still have the right to an opinion,” explains Didier Laval, project officer at ECSITE, the European network of science museums and centres and one of the project’s partners.

The idea: you don’t need a doctorate in physics to have a say. It’s an approach that opens the door to another way of engaging with scientific culture. “How do you motivate people to participate in a public debate if they’re convinced they’re too ignorant to do so? With NANOPINION, we want to prove to them that, starting from very little basic information, they can still form a first impression of a subject that concerns them directly, even if they aren’t aware of it,” explains Didier Laval.

While it’s not possible to turn people into experts in five minutes, it is possible for people to formulate and express some generalized opinions. (This approach sounds like it’s based on ideas that came out of work by Dan Kahan and other researchers at the Yale Law School’s Cultural Cognition Project, which I mentioned in a Dec. 9, 2008 posting. The Cultural Cognition Project researchers suggested that many of our opinions arise from preexisting cultural values, which we then apply to new technologies.) In short, Laval and his team want to convince people that they can participate in public dialogues and surveys concerning nanotechnology even if they don’t have a PhD in physics.
I gather that during the summer, Nanopinion will be popping up everywhere (in the downtown areas of various cities, at music festivals, and elsewhere) with multimedia stations and friendly folks encouraging the public to take a five-minute survey. I wonder if they’ve designed the survey to seem like a game. As for popping up at music festivals, that seems to have been a successful science outreach strategy for Guerilla Science, which made an appearance at the 2011 Glastonbury Music Festival (as per my July 12, 2011 posting).
In any event, this seems to be another public dialogue/engagement/survey project as prophylactic treatment. From the Fleischman article (my translation from the French),

It is unlikely that NANOPINION on its own could shield a government from a scandal on the scale of asbestos or mad cow disease if, one day, something were to go seriously wrong in the nanotechnology industry. Nevertheless, the project demonstrates Europe’s desire to listen more closely to its citizens where scientific research is concerned: a new paradigm in the relationship between science and society.
Interestingly, it was approximately three years ago that public dialogues about nanotechnology scheduled in various cities in France were either cancelled or abruptly ended as per my Feb. 28, 2010 posting and my March 10, 2010 posting.
As I stated (in different words) yesterday, prophylaxis is one of the unspoken goals for a lot of these public consultation/engagement/understanding-science projects. The problem is that you have to figure out how a group is going to react so you can make predictions. The recent write-up in Nature Nanotechnology (December 2008 online edition), featuring work from Dan M. Kahan et al. of the Cultural Cognition Project at Yale, offers a very interesting way of analyzing how people arrive at their opinions, which I described in my posting here. These researchers suggest that learning more about a science or a technology is not going to be helpful, since opinions get fixed at an early stage and further intellectual input does not (according to their study) change things.
Presumably people would have behaved similarly (i.e., quickly establishing an opinion after a minimum of input) on the introduction of electricity. There are a surprising number of similarities between the technology introductions of then (the 19th century) and now. If you want to look at some of the text from that period, complete with dire predictions about messing with God’s work, there’s an excellent book by Carolyn Marvin called ‘When Old Technologies Were New’.
We are able to predict some things about people, individually and in groups, but we don’t have a very good record. If we could do it well, every movie would be a financial success, every song would be a hit, and no scientist would ever have a project halted due to public outcry. More tomorrow.