
Informing research choices—the latest report from the Council of Canadian Academies (part 2: more details and my comments)

In general, I found this Council of Canadian Academies (CCA) report, Informing Research Choices: Indicators and Judgment, to be thoughtful, and I have, at most, a few criticisms. Starting with this bit about the Discovery Grants Program (DGP), funded by Canada’s Natural Sciences and Engineering Research Council (NSERC), and ‘expert judgment’,

The focus of NSERC on science assessment practices is directed partly by a long-standing concern that the allocation of DGP funding across fields is overly dependent on historical funding patterns, and that future allocations should incorporate other factors such as research quality, changes in the scientific landscape, and the emergence of research fields.

This review of international science assessment reveals a diverse landscape of assessment methods and practices. Two of the lessons emerging from the review are especially relevant to the Panel’s charge. First, the national research context is significant in defining a given science assessment, and no single set of indicators for assessment will be ideal in all circumstances, though evidence gathered from examining experiences of other countries may help inform the development of a science assessment strategy for Canada. Second, there is a global trend towards national science assessment models that incorporate both quantitative indicators and expert judgment. [emphases mine] (p. 31 print version, p. 51 PDF)

Ok, how do we define ‘expert’? Especially in light of the fact that the report discusses both ‘peer’ and ‘expert’ review (p. 50 print version, p. 70 PDF). Here’s a definition (or non-definition) of ‘expert review’ from the report,

Following the definition provided by the OECD (2008), the Panel uses the term “expert review” to refer to deliberative evaluation processes based on expert judgment used in the context of evaluations of broader research fields or units. (p. 51 print version, p. 71 PDF)

Tautology, anyone?

The report also describes more quantitative measures, such as bibliometrics (how many times and where your scientists were published), amongst others. From the report,

The simplest bibliometric indicators are those based on publication counts. In principle, such counts can be generated for many different types of publications (e.g., books, book chapters). In practice, due to the limitations of coverage in indexed bibliographic databases, existing indicators are most often based on counts of peer-reviewed articles in scientific journals. Basic publication indicators typically take the form of absolute counts of the number of journal articles for a particular unit (e.g., individual, research group, institution, or field) by year or for a period of years. Such indicators are typically framed as a measure of research output.

Additional indicators based on publication counts can be derived from shares of publication counts (e.g., a research group’s share of total publications in an institution, a field’s share of total publications in a country). These share-based indicators generally are used to capture information about the relative importance of research output originating from a particular unit or field. More advanced indicators based on weighted publication counts can also be created; in these cases, publication output is typically weighted by some measure of the quality of the research outlet. For example, journal impact factors (a measure of the relative citedness of a journal) may be used to give a higher weight to publications in more prestigious or competitive journals. [emphasis mine] Unlike straight publication counts, these metrics also depend on some other measure of quality, either based on citation or on some other assessment of the relative quality of different journals. (pp. 55-56 print version, pp. 75-76 PDF)
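To make the arithmetic behind these indicators concrete, here is a minimal Python sketch of my own (it is not from the report, and the research groups, journals, and impact factors are invented) showing the three flavours described above: raw publication counts per unit, each unit’s share of the total, and counts weighted by journal impact factor.

```python
from collections import defaultdict

# Hypothetical publication records: (research unit, journal) pairs.
publications = [
    ("Group A", "Journal X"),
    ("Group A", "Journal Y"),
    ("Group B", "Journal X"),
    ("Group B", "Journal X"),
    ("Group C", "Journal Z"),
]

# Made-up journal impact factors, used only for illustration.
impact_factors = {"Journal X": 8.0, "Journal Y": 2.5, "Journal Z": 1.2}

# 1. Basic indicator: absolute publication counts per unit.
counts = defaultdict(int)
for unit, journal in publications:
    counts[unit] += 1

# 2. Share-based indicator: each unit's share of total publications.
total = sum(counts.values())
shares = {unit: n / total for unit, n in counts.items()}

# 3. Weighted indicator: each publication contributes its journal's
#    impact factor, so "prestigious" journals count for more.
weighted = defaultdict(float)
for unit, journal in publications:
    weighted[unit] += impact_factors[journal]

for unit in sorted(counts):
    print(f"{unit}: count={counts[unit]}, share={shares[unit]:.2f}, "
          f"weighted={weighted[unit]:.1f}")
```

Even in this toy data, the impact-factor weighting separates Group B from Group A despite their identical raw counts, which is precisely the kind of sensitivity to journal prestige the passage describes.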

More bibliometrics are discussed, along with some of their shortcomings, but, interestingly, there is no mention of open access publishing and its possible impacts on ‘prestigious journals’ and on the bibliometrics themselves.

Getting back to my question in part 1, “I was looking for evidence that the panel would have specific recommendations for avoiding an over-reliance on metrics (which I see taking place and accelerating in many areas, not just science funding).” Interestingly, the report makes references to qualitative approaches without ever defining them, although the term ‘quantitative indicators’ is described in the glossary,

Quantitative indicators: any indicators constructed from quantitative data (e.g., counts of publications, citations, students, grants, research funding).

The qualitative approaches mentioned in the report include ‘expert’ review, peer review, and case studies. Since I don’t understand what they mean by ‘expert’, I’m not sure I understand ‘peer’ either. As for the case studies, here’s how this approach is described (Note: I have removed a footnote),

The case study is perhaps the most common example of other types of qualitative methods used in research assessment. Case studies are often used to explore the wider socio-economic impacts of research. For example, the U.K. Research Excellence Framework (REF) …

Project Retrosight is a Canadian example of the case study approach used in research assessment. Undertaken as part of a multinational study to evaluate the impact of basic biomedical and clinical cardiovascular and stroke research projects, Project Retrosight measured payback of projects using a sampling framework. [emphasis mine]  Despite several limitations to the analysis (e.g., the number of case studies limiting the sample pool from which to draw observations, potential inconsistencies in reporting and comparability), the case study approach provided an effective platform for evaluating both the how and the why of evidence to demonstrate impact. The key findings of the study revealed a broad and diverse range of impacts, with the majority of broader impacts, socio-economic and other, coming from a minority of projects (Wooding et al., 2011).  (p. 53 print version, p. 73 PDF)

My understanding of the word ‘payback’ is that it’s related to the term ‘return on investment’, and that measure requires quantitative data. If so, how was Project Retrosight qualitative? The description in the report doesn’t offer that information.

The conclusion from the final paragraph of the report doesn’t offer any answers,

… quantitative indicators are far from obviating the need for human expertise and judgment in the research funding allocation decision process. Indicators should be used to inform rather than replace expert judgment. Given the inherent uncertainty and complexity of science funding decisions, these choices are best left in the hands of well-informed experts with a deep and nuanced understanding of the research funding contexts in question, and the scientific issues, problems, questions, and opportunities at stake. (p. 104 print version, p. 124 PDF)

I very much appreciate the approach the ‘expert’ panel took and the thoughtful nature of the report, but I feel it falls short. The panel offers an exhortation but no recommendations for ensuring that science funding decisions don’t become entirely reliant on metrics; they never do describe what they mean by ‘expert’ or explain the difference between qualitative and quantitative; and there’s no mention of trends/disruptive developments such as open access publishing, which could have a powerful impact on the materials ‘experts’ use when making their research allocation decisions.

The full report, executive summary, abridged report, appendices, news release, and media backgrounder are available here.

ETA July 9, 2012 12:40 PST: There’s an interview (audio or text depending on your preferences) with Rita Colwell, chair of the report’s expert panel, at the Canadian Science Policy Centre website here.

Informing research choices—the latest report from the Council of Canadian Academies (part 1: report conclusions and context)

The July 5, 2012 news release from the Council of Canadian Academies (CCA) notes this about the Informing Research Choices: Indicators and Judgment report,

An international expert panel has assessed that decisions regarding science funding and performance can’t be determined by metrics alone. A combination of performance indicators and expert judgment are the best formula for determining how to allocate science funding.

The Natural Sciences and Engineering Research Council of Canada (NSERC) spends approximately one billion dollars a year on scientific research. Over one-third of that goes directly to support discovery research through its flagship Discovery Grants Program (DGP). However, concerns exist that funding decisions are made based on historical funding patterns and that this is not the best way to determine future funding decisions.

As NSERC strives to be at the leading edge for research funding practices, it asked the Council of Canadian Academies to assemble an expert panel that would look at global practices that inform funding allocation, as well as to assemble a library of indicators that can be used when assessing funding decisions. The Council’s expert panel conducted an in-depth assessment and came to a number of evidence-based conclusions.

The panel Chair, Dr. Rita Colwell commented, “the most significant finding of this panel is that quantitative indicators are best interpreted by experts with a deep and nuanced understanding of the research funding contexts in question, and the scientific issues, problems, questions and opportunities at stake.” She also added, “Discovery research in the natural sciences and engineering is a key driver in the creation of many public goods, contributing to economic strength, social stability, and national security. It is therefore important that countries such as Canada have a complete understanding of how best to determine allocations of its science funding.”

… Other panel findings discussed within the report include: a determination that many science indicators and assessment approaches are sufficiently robust; international best practices offer limited insight into science indicator use and assessment strategies; and mapping research funding allocation directly to quantitative indicators is far too simplistic, and is not a realistic strategy for Canada. The Panel also outlines four key principles for the use of indicators that can guide research funders and decision-makers when considering future funding decisions.

The full report, executive summary, abridged report, appendices,  news release, and media backgrounder are available here.

I have taken a look at the full report and, since national funding schemes for the Natural Sciences and Engineering Research Council (and other science funding agencies of this ilk) are not my area of expertise, the best I can offer is an overview from an interested member of the public.

The report provides a very nice introduction to the issues the expert panel was addressing,

The problem of determining what areas of research to fund permeates science policy. Nations now invest substantial sums in supporting discovery research in natural sciences and engineering (NSE). They do so for many reasons. Discovery research helps to generate new technologies; to foster innovation and economic competitiveness; to improve quality of life; and to achieve other widely held social or policy objectives such as improved public health and health care, protection of the environment, and promotion of national security. The body of evidence on the benefits that accrue from these investments is clear: in the long run, public investments in discovery-oriented research yield real and tangible benefits to society across many domains.

These expenditures, however, are accompanied by an obligation to allocate public resources prudently. In times of increasing fiscal pressures and spending accountability, public funders of research often struggle to justify their funding decisions — both to the scientific community and the wider public. How should research funding agencies allocate their budgets across different areas of research? And, once allocations are made, how can the performance of those investments be monitored or assessed over time? These have always been the core questions of science policy, and they remain so today.

Such questions are notoriously difficult to answer; however, they are not intractable. An emerging “science of science policy” and the growing field of scientometrics (the study of how to measure, monitor, and assess scientific research) provide quantitative and qualitative tools to support research funding decisions. Although a great deal of controversy remains about what and how to measure, indicator-based assessments of scientific work are increasingly common. In many cases these assessments indirectly, if not directly, inform research funding decisions.

In some respects, the primary challenge in science assessment today is caused more by an overabundance of indicators than by a lack of them. The plethora of available indicators may make it difficult for policy-makers or research funders to determine which metrics are most appropriate and informative in specific contexts. (p. 2 print version, p. 22 PDF)

Assessment systems tied to the allocation of public funds can be expected to be contentious. Since research funding decisions directly affect the income and careers of researchers, assessment systems linked to those decisions will invariably have an impact on researcher behaviour. Past experiences with science assessment initiatives have sometimes yielded unintended, and undesirable, impacts. In addition, poorly constructed or misused indicators have created scepticism among many scientists and researchers about the value and utility of these measures. As a result, the issues surrounding national science assessment initiatives have increasingly become contentious. In the United Kingdom and Australia, debates about national research assessment have been highly publicized in recent years. While such attention is testimony to the importance of these assessments, the occasionally strident character of the public debate about science metrics and evaluation can impede the development and adoption of good public policy. (p. 3 print version, p. 23 PDF)

Based on this introduction and the acknowledgement that there are ‘too many metrics’, I was looking for evidence that the panel would have specific recommendations for avoiding an over-reliance on metrics (which I see taking place and accelerating in many areas, not just science funding).

In the next section, however, the report focusses on how the expert panel researched this area. They relied on a literature survey (which I’m not going to dwell on) and case studies of the 10 countries they reviewed in depth. Here’s more about the case studies,

The Panel was charged with determining what the approaches used by funding agencies around the world had to offer about the use of science indicators and related best practices in the context of research in the NSE. As a result, the Panel developed detailed case studies on 10 selected countries. The purpose of these case studies was two-fold: (i) to ensure that the Panel had a fully developed, up-to-date understanding of indicators and practices currently used around the world; and (ii) to identify useful lessons for Canada from the experiences of research funding agencies in other countries. Findings and instructive examples drawn from these case studies are highlighted and discussed throughout this report. Summaries of the 10 case studies are presented in Appendix A.

The 10 countries selected for the case studies satisfied one or more of the following four criteria established by the Panel:

Knowledge-powerful countries: countries that have demonstrated sustained leadership and commitment at the national level to fostering science and technology and/or supporting research and development in the NSE.

Leaders in science assessment and evaluation: countries that have notable or distinctive experience at the national level with use of science indicators or administration of national science assessment initiatives related to research funding allocation.

Emerging science and technology leaders: countries considered to be emerging “knowledge-powerful” countries and in the process of rapidly expanding support for science and technology, or playing an increasingly important role in the global context of research in the NSE.

Relevance to Canada: countries known to have special relevance to Canada and NSERC because of the characteristics of their systems of government or the nature of their public research funding institutions and mechanisms. (pp. 8-9 print version, pp. 28-29 PDF)

The 10 countries they studied closely are:

  • Australia
  • China
  • Finland
  • Germany
  • the Netherlands
  • Norway
  • Singapore
  • South Korea
  • United Kingdom (that’s more like four countries: Scotland, England, Wales, and Northern Ireland)
  • United States

The panel also examined other countries’ funding schemes, but not with the same intensity. I didn’t spend a lot of time on the case studies as they were either very general or far too detailed for my interests. Of course, I’m not the target audience.

The report offers a glossary and I highly recommend reading it in full because the use of language in this report is not necessarily standard English. Here’s an excerpt,

The language used by policy-makers sometimes differs from that used by scientists. [emphasis mine] Even within the literature on science assessment, there can be inconsistency in the use of terms. For purposes of this report, the Panel employed the following definitions:*

Discovery research: inquiry-driven scientific research. Discovery research is experimental or theoretical work undertaken primarily to acquire new knowledge of the underlying foundations of phenomena and observable facts, without application or intended use (based on the OECD definition of “basic research” in OECD, 2002).

Assessment: a general term denoting the act of measuring performance of a field of research in the natural sciences and engineering relative to appropriate international or global standards. Assessments may or may not be connected to funding allocation, and may or may not be undertaken in the context of the evaluation of programs or policies.

Scientometrics: the science of analyzing and measuring science, including all quantitative aspects and models related to the production and dissemination of scientific and technological knowledge (De Bellis, 2009).

Bibliometrics: the quantitative indicators, data, and analytical techniques associated with the study of patterns in publications. In the context of this report, bibliometrics refers to those indicators and techniques based on data drawn from publications (De Bellis, 2009). (p. 10 print version, p. 30 PDF)

Next up: my comments and whether or not I found specific recommendations on how to avoid over-reliance on metrics.