Informing research choices—the latest report from the Council of Canadian Academies (part 1: report conclusions and context)

The July 5, 2012 news release from the Council of Canadian Academies (CCA) notes this about the Informing Research Choices: Indicators and Judgment report,

An international expert panel has assessed that decisions regarding science funding and performance can’t be determined by metrics alone. A combination of performance indicators and expert judgment are the best formula for determining how to allocate science funding.

The Natural Sciences and Engineering Research Council of Canada (NSERC) spends approximately one billion dollars a year on scientific research. Over one-third of that goes directly to support discovery research through its flagship Discovery Grants Program (DGP). However, concerns exist that funding decisions are made based on historical funding patterns and that this is not the best way to determine future funding decisions.

As NSERC strives to be at the leading edge for research funding practices, it asked the Council of Canadian Academies to assemble an expert panel that would look at global practices that inform funding allocation, as well as to assemble a library of indicators that can be used when assessing funding decisions. The Council’s expert panel conducted an in-depth assessment and came to a number of evidence-based conclusions.

The panel chair, Dr. Rita Colwell, commented, “the most significant finding of this panel is that quantitative indicators are best interpreted by experts with a deep and nuanced understanding of the research funding contexts in question, and the scientific issues, problems, questions and opportunities at stake.” She also added, “Discovery research in the natural sciences and engineering is a key driver in the creation of many public goods, contributing to economic strength, social stability, and national security. It is therefore important that countries such as Canada have a complete understanding of how best to determine allocations of its science funding.”

… Other panel findings discussed within the report include: a determination that many science indicators and assessment approaches are sufficiently robust; international best practices offer limited insight into science indicator use and assessment strategies; and mapping research funding allocation directly to quantitative indicators is far too simplistic, and is not a realistic strategy for Canada. The Panel also outlines four key principles for the use of indicators that can guide research funders and decision-makers when considering future funding decisions.

The full report, executive summary, abridged report, appendices, news release, and media backgrounder are available here.

I have taken a look at the full report and, since national funding schemes for the Natural Sciences and Engineering Research Council (and other science funding agencies of this ilk) are not my area of expertise, the best I can offer is an overview from an interested member of the public.

The report provides a very nice introduction to the issues the expert panel was addressing,

The problem of determining what areas of research to fund permeates science policy. Nations now invest substantial sums in supporting discovery research in natural sciences and engineering (NSE). They do so for many reasons. Discovery research helps to generate new technologies; to foster innovation and economic competitiveness; to improve quality of life; and to achieve other widely held social or policy objectives such as improved public health and health care, protection of the environment, and promotion of national security. The body of evidence on the benefits that accrue from these investments is clear: in the long run, public investments in discovery-oriented research yield real and tangible benefits to society across many domains.

These expenditures, however, are accompanied by an obligation to allocate public resources prudently. In times of increasing fiscal pressures and spending accountability, public funders of research often struggle to justify their funding decisions — both to the scientific community and the wider public. How should research funding agencies allocate their budgets across different areas of research? And, once allocations are made, how can the performance of those investments be monitored or assessed over time? These have always been the core questions of science policy, and they remain so today.

Such questions are notoriously difficult to answer; however, they are not intractable. An emerging “science of science policy” and the growing field of scientometrics (the study of how to measure, monitor, and assess scientific research) provide quantitative and qualitative tools to support research funding decisions. Although a great deal of controversy remains about what and how to measure, indicator-based assessments of scientific work are increasingly common. In many cases these assessments indirectly, if not directly, inform research funding decisions.

In some respects, the primary challenge in science assessment today is caused more by an overabundance of indicators than by a lack of them. The plethora of available indicators may make it difficult for policy-makers or research funders to determine which metrics are most appropriate and informative in specific contexts. (p. 2 print version, p. 22 PDF)

Assessment systems tied to the allocation of public funds can be expected to be contentious. Since research funding decisions directly affect the income and careers of researchers, assessment systems linked to those decisions will invariably have an impact on researcher behaviour. Past experiences with science assessment initiatives have sometimes yielded unintended, and undesirable, impacts. In addition, poorly constructed or misused indicators have created scepticism among many scientists and researchers about the value and utility of these measures. As a result, the issues surrounding national science assessment initiatives have increasingly become contentious. In the United Kingdom and Australia, debates about national research assessment have been highly publicized in recent years. While such attention is testimony to the importance of these assessments, the occasionally strident character of the public debate about science metrics and evaluation can impede the development and adoption of good public policy. (p. 3 print version, p. 23 PDF)

Based on this introduction and the acknowledgement that there are ‘too many metrics’, I was looking for evidence that the panel would have specific recommendations for avoiding an over-reliance on metrics (which I see taking place and accelerating in many areas, not just science funding).

In the next section, however, the report focuses on how the expert panel researched this area. The panel relied on a literature survey (which I’m not going to dwell on) and case studies of the 10 countries it reviewed in depth. Here’s more about the case studies,

The Panel was charged with determining what the approaches used by funding agencies around the world had to offer about the use of science indicators and related best practices in the context of research in the NSE. As a result, the Panel developed detailed case studies on 10 selected countries. The purpose of these case studies was two-fold: (i) to ensure that the Panel had a fully developed, up-to-date understanding of indicators and practices currently used around the world; and (ii) to identify useful lessons for Canada from the experiences of research funding agencies in other countries. Findings and instructive examples drawn from these case studies are highlighted and discussed throughout this report. Summaries of the 10 case studies are presented in Appendix A.

The 10 countries selected for the case studies satisfied one or more of the following four criteria established by the Panel:

Knowledge-powerful countries: countries that have demonstrated sustained leadership and commitment at the national level to fostering science and technology and/or supporting research and development in the NSE.

Leaders in science assessment and evaluation: countries that have notable or distinctive experience at the national level with use of science indicators or administration of national science assessment initiatives related to research funding allocation.

Emerging science and technology leaders: countries considered to be emerging “knowledge-powerful” countries and in the process of rapidly expanding support for science and technology, or playing an increasingly important role in the global context of research in the NSE.

Relevance to Canada: countries known to have special relevance to Canada and NSERC because of the characteristics of their systems of government or the nature of their public research funding institutions and mechanisms. (pp. 8-9 print version, pp. 28-29 PDF)

The 10 countries they studied closely are:

  • Australia
  • China
  • Finland
  • Germany
  • the Netherlands
  • Norway
  • Singapore
  • South Korea
  • United Kingdom (that’s more like four countries: Scotland, England, Wales, and Northern Ireland)
  • United States

The panel also examined other countries’ funding schemes, but not with the same intensity. I didn’t spend a lot of time on the case studies as they were either very general or far too detailed for my interests. Of course, I’m not the target audience.

The report offers a glossary and I highly recommend reading it in full because the use of language in this report is not necessarily standard English. Here’s an excerpt,

The language used by policy-makers sometimes differs from that used by scientists. [emphasis mine] Even within the literature on science assessment, there can be inconsistency in the use of terms. For purposes of this report, the Panel employed the following definitions:*

Discovery research: inquiry-driven scientific research. Discovery research is experimental or theoretical work undertaken primarily to acquire new knowledge of the underlying foundations of phenomena and observable facts, without application or intended use (based on the OECD definition of “basic research” in OECD, 2002).

Assessment: a general term denoting the act of measuring performance of a field of research in the natural sciences and engineering relative to appropriate international or global standards. Assessments may or may not be connected to funding allocation, and may or may not be undertaken in the context of the evaluation of programs or policies.

Scientometrics: the science of analyzing and measuring science, including all quantitative aspects and models related to the production and dissemination of scientific and technological knowledge (De Bellis, 2009).

Bibliometrics: the quantitative indicators, data, and analytical techniques associated with the study of patterns in publications. In the context of this report, bibliometrics refers to those indicators and techniques based on data drawn from publications (De Bellis, 2009). (p. 10 print version, p. 30 PDF)
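To make the bibliometrics definition a little more concrete: once citation counts for a set of publications are in hand, indicators of this kind are simple to compute. The report itself contains no code; the h-index below is just one widely used bibliometric indicator, offered here as an illustrative sketch.

```python
def h_index(citations):
    """Return the h-index: the largest h such that at least h of the
    papers have at least h citations each (Hirsch's definition)."""
    # Sort citation counts from highest to lowest, then walk down the
    # list until a paper's citations fall below its rank.
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# A researcher with papers cited 10, 8, 5, 4, and 3 times has h = 4:
# four papers each have at least 4 citations.
print(h_index([10, 8, 5, 4, 3]))  # → 4
```

As the panel stresses, a single number like this compresses away context — field-specific citation norms, career stage, publication type — which is exactly why the report argues that such indicators need expert interpretation rather than mechanical use.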

Next up: my comments and whether or not I found specific recommendations on how to avoid over-reliance on metrics.