There is much to admire in this new research, but there’s also a troubling conflation.
An Oct. 14, 2015 University of Cambridge press release (also on EurekAlert) cautions policy makers about making use of experts,
The accuracy and reliability of expert advice is often compromised by “cognitive frailties”, and needs to be interrogated with the same tenacity as research data to avoid weak and ill-informed policy, warn two leading risk analysis and conservation researchers in the journal Nature today.
While many governments aspire to evidence-based policy [emphasis mine], the researchers say the evidence on experts themselves actually shows that they are highly susceptible to “subjective influences” – from individual values and mood, to whether they stand to gain or lose from a decision – and, while highly credible, experts often vastly overestimate their objectivity and the reliability of peers.
They appear to be conflating evidence and expertise. Evidence usually means data, while expertise is a more nebulous concept. (Presumably, an expert is someone whose opinion is respected for one reason or another and who has studied the evidence and drawn some conclusions from it.)
The study described in the press release notes that one of the weaknesses of relying on experts is that they are subject to bias. It doesn’t mention that evidence or data can also be subject to bias, but perhaps that’s why the authors suggest that experts should provide and assess the evidence on which they base their advice,
The researchers caution that conventional approaches of informing policy by seeking advice from either well-regarded individuals or assembling expert panels need to be balanced with methods that alleviate the effects of psychological and motivational bias.
They offer a straightforward framework for improving expert advice, and say that experts should provide and assess [emphasis mine] evidence on which decisions are made – but not advise decision makers directly, which can skew impartiality.
“We are not advocating replacing evidence with expert judgements, rather we suggest integrating and improving them,” write professors William Sutherland and Mark Burgman from the universities of Cambridge and Melbourne respectively.
“Policy makers use expert evidence as though it were data. So they should treat expert estimates with the same critical rigour that must be applied to data,” they write.
“Experts must be tested, their biases minimised, their accuracy improved, and their estimates validated with independent evidence. Put simply, experts should be held accountable for their opinions.”
Sutherland and Burgman point out that highly regarded experts are routinely shown to be no better than novices at making judgements.
However, several processes have been shown to improve performances across the spectrum, they say, such as ‘horizon scanning’ – identifying all possible changes and threats – and ‘solution scanning’ – listing all possible options, using both experts and evidence, to reduce the risk of overlooking valuable alternatives.
To get better answers from experts, they need better, more structured questions, say the authors. “A seemingly straightforward question, ‘How many diseased animals are there in the area?’ for example, could be interpreted very differently by different people. Does it include those that are infectious and those that have recovered? What about those yet to be identified?” said Sutherland, from Cambridge’s Department of Zoology.
“Structured question formats that extract upper and lower boundaries, degrees of confidence and force consideration of alternative theories are important for shoring against slides into group-think, or individuals getting ascribed greater credibility based on appearance or background,” he said.
When seeking expert advice, all parties must be clear about what they expect of each other, says Burgman, Director of the Centre of Excellence for Biosecurity Risk Analysis. “Are policy makers expecting estimates of facts, predictions of the outcome of events, or advice on the best course of action?”
“Properly managed, experts can help with estimates and predictions, but providing advice assumes the expert shares the same values and objectives as the decision makers. Experts need to stick to helping provide and assess evidence on which such decisions are made,” he said.
Sutherland and Burgman have created a framework of eight key ways to improve the advice of experts. These include using groups, not individuals, with diverse, carefully selected members working well within their areas of expertise.
They also caution against being bullied or “starstruck” by the over-assertive or heavyweight. “People who are less self-assured will seek information from a more diverse range of sources, and age, number of qualifications and years of experience do not explain an expert’s ability to predict future events – a finding that applies in studies from geopolitics to ecology,” said Sutherland.
Added Burgman: “Some experts are much better than others at estimation and prediction. However, the only way to tell a good expert from a poor one is to test them. Qualifications and experience don’t help to tell them apart.”
“The cost of ignoring these techniques – of using experts inexpertly – is less accurate information and so more frequent, and more serious, policy failures,” write the researchers.
Here’s a link to and a citation for the paper,
Policy advice: Use experts wisely by William J. Sutherland & Mark Burgman. Nature 526, 317–318 (15 October 2015) doi:10.1038/526317a
It’s good to see a nuanced attempt to counteract mindless adherence to expert opinion. I hope that in future work they will also treat evidence and data as material that needs to be approached with the same caution.