Informing research choices—the latest report from the Canadian Council of Academies (part 2: more details and my comments)

In general, I found the Canadian Council of Academies (CCA) report, Informing Research Choices: Indicators and Judgment, to be thoughtful, and I have only a few criticisms. Starting with this bit about the Discovery Grants Program (DGP), funded by Canada’s Natural Sciences and Engineering Research Council (NSERC), and ‘expert judgment’,

The focus of NSERC on science assessment practices is directed partly by a long-standing concern that the allocation of DGP funding across fields is overly dependent on historical funding patterns, and that future allocations should incorporate other factors such as research quality, changes in the scientific landscape, and the emergence of research fields.

This review of international science assessment reveals a diverse landscape of assessment methods and practices. Two of the lessons emerging from the review are especially relevant to the Panel’s charge. First, the national research context is significant in defining a given science assessment, and no single set of indicators for assessment will be ideal in all circumstances, though evidence gathered from examining experiences of other countries may help inform the development of a science assessment strategy for Canada. Second, there is a global trend towards national science assessment models that incorporate both quantitative indicators and expert judgment. [emphases mine] (p. 31 print version, p. 51 PDF)

Ok, how do we define ‘expert’? Especially in light of the fact that the report discusses both ‘peer’ and ‘expert’ review (p. 50 print version, p. 70 PDF). Here’s a definition (or non-definition) of ‘expert review’ from the report,

Following the definition provided by the OECD (2008), the Panel uses the term “expert review” to refer to deliberative evaluation processes based on expert judgment used in the context of evaluations of broader research fields or units. (p. 51 print version, p. 71 PDF)

Tautology, anyone?

The report also describes more quantitative measures, such as bibliometrics (how many times and where your scientists were published), amongst others. From the report,

The simplest bibliometric indicators are those based on publication counts. In principle, such counts can be generated for many different types of publications (e.g., books, book chapters). In practice, due to the limitations of coverage in indexed bibliographic databases, existing indicators are most often based on counts of peer-reviewed articles in scientific journals. Basic publication indicators typically take the form of absolute counts of the number of journal articles for a particular unit (e.g., individual, research group, institution, or field) by year or for a period of years. Such indicators are typically framed as a measure of research output.

Additional indicators based on publication counts can be derived from shares of publication counts (e.g., a research group’s share of total publications in an institution, a field’s share of total publications in a country). These share-based indicators generally are used to capture information about the relative importance of research output originating from a particular unit or field. More advanced indicators based on weighted publication counts can also be created when publication output is typically weighted by some measure of the quality of the research outlet. For example, journal impact factors (a measure of the relative citedness of a journal) may be used to give a higher weight to publications in more prestigious or competitive journals. [emphasis mine] Unlike straight publication counts, these metrics also depend on some other measure of quality, either based on citation or on some other assessment of the relative quality of different journals. (pp. 55-56 print version, pp. 75-76 PDF)
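To make the indicators in the quoted passage concrete, here is a minimal sketch of a share-based count and an impact-factor-weighted count. All names and numbers below are invented for illustration; they are not drawn from the report or from any real bibliographic database.

```python
# Hypothetical publication counts per research group (invented data)
group_counts = {"Group A": 40, "Group B": 10}
total_institution_pubs = 200

# Share-based indicator: each group's share of the institution's
# total publication output
shares = {g: n / total_institution_pubs for g, n in group_counts.items()}

# Weighted count: each paper weighted by its journal's impact factor
# (journal names and impact factors here are made up)
papers = [("Journal X", 3.2), ("Journal Y", 1.1), ("Journal X", 3.2)]
weighted_count = sum(jif for _, jif in papers)

print(shares["Group A"])          # 0.2
print(round(weighted_count, 1))   # 7.5
```

Note how the weighted count depends entirely on the impact factors chosen, which is exactly the “other measure of quality” dependence the report flags.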

There are more bibliometrics discussed, along with some of their shortcomings, but, interestingly, no mention of open access publishing and its possible impacts on ‘prestigious journals’ and on the bibliometrics themselves.

Getting back to my question in part 1, “I was looking for evidence that the panel would have specific recommendations for avoiding an over-reliance on metrics (which I see taking place and accelerating in many areas, not just science funding).” Interestingly, the report makes references to qualitative approaches without ever defining the term, although ‘quantitative indicators’ is described in the glossary,

Quantitative indicators: any indicators constructed from quantitative data (e.g., counts of publications, citations, students, grants, research funding).

The qualitative approaches mentioned in the report include ‘expert’ review, peer review, and case studies. Since I don’t understand what they mean by ‘expert’, I’m not sure I understand ‘peer’ either. As for case studies, here’s how the approach is described (Note: I have removed a footnote),

The case study is perhaps the most common example of other types of qualitative methods used in research assessment. Case studies are often used to explore the wider socio-economic impacts of research. For example, the U.K. Research Excellence Framework (REF) …

Project Retrosight is a Canadian example of the case study approach used in research assessment. Undertaken as part of a multinational study to evaluate the impact of basic biomedical and clinical cardiovascular and stroke research projects, Project Retrosight measured payback of projects using a sampling framework. [emphasis mine]  Despite several limitations to the analysis (e.g., the number of case studies limiting the sample pool from which to draw observations, potential inconsistencies in reporting and comparability), the case study approach provided an effective platform for evaluating both the how and the why of evidence to demonstrate impact. The key findings of the study revealed a broad and diverse range of impacts, with the majority of broader impacts, socio-economic and other, coming from a minority of projects (Wooding et al., 2011).  (p. 53 print version, p. 73 PDF)

My understanding of the word ‘payback’ is that it’s related to ‘return on investment’, a measure that requires quantitative data. If so, how was Project Retrosight qualitative? The description in the report doesn’t offer that information.

The conclusion from the final paragraph of the report doesn’t offer any answers,

… quantitative indicators are far from obviating the need for human expertise and judgment in the research funding allocation decision process. Indicators should be used to inform rather than replace expert judgment. Given the inherent uncertainty and complexity of science funding decisions, these choices are best left in the hands of well-informed experts with a deep and nuanced understanding of the research funding contexts in question, and the scientific issues, problems, questions, and opportunities at stake. (p. 104 print version, p. 124 PDF)

I very much appreciate the approach the ‘expert’ panel took and the thoughtful nature of the report, but I feel it falls short. The panel offers an exhortation but no recommendations for ensuring that science funding decisions don’t become entirely reliant on metrics; they never do describe what they mean by ‘expert’ or explain the difference between qualitative and quantitative; and there’s no mention of ‘trends/disruptive developments’ such as open access publishing, which could have a powerful impact on the materials ‘experts’ use when making their research allocation decisions.

The full report, executive summary, abridged report, appendices, news release and media backgrounder are available here.

ETA July 9, 2012 12:40 PST: There’s an interview (audio or text, depending on your preferences) with Rita Colwell, chair of the report’s expert panel, at the Canadian Science Policy Centre website here.
