Tag Archives: bibliometrics

Science publishing, ‘high impact’, reliability, and the practice of science

Konstantin Kakaes has written a provocative and astute article (Feb. 27, 2014 on Slate) about science and publishing, in particular about ‘high impact’ journals.

In 2005, a group of MIT graduate students decided to goof off in a very MIT graduate student way: They created a program called SCIgen that randomly generated fake scientific papers. Thanks to SCIgen, for the last several years, computer-written gobbledygook has been routinely published in scientific journals and conference proceedings. [emphasis mine]

Apparently some well-known science publishers have been caught (from the Kakaes article; Note: A link has been removed),

According to Nature News, Cyril Labbé, a French computer scientist, recently informed Springer and the IEEE, two major scientific publishers, that between them, they had published more than 120 algorithmically-generated articles. In 2012, Labbé had told the IEEE of another batch of 85 fake articles. He’s been playing with SCIgen for a few years—in 2010 a fake researcher he created, Ike Antkare, briefly became the 21st most highly cited scientist in Google Scholar’s database.

Kakaes goes on to explain at least in part why this problem has arisen,

Over the course of the second half of the 20th century, two things took place. First, academic publishing became an enormously lucrative business. And second, because administrators erroneously believed it to be a means of objective measurement, the advancement of academic careers became conditional on contributions to the business of academic publishing.

As Peter Higgs said after he won last year’s Nobel Prize in physics, “Today I wouldn’t get an academic job. It’s as simple as that. I don’t think I would be regarded as productive enough.” Jens Skou, a 1997 Nobel Laureate, put it this way in his Nobel biographical statement: today’s system puts pressure on scientists for, “too fast publication, and to publish too short papers, and the evaluation process use[s] a lot of manpower. It does not give time to become absorbed in a problem as the previous system [did].”

Today, the most critical measure of an academic article’s importance is the “impact factor” of the journal it is published in. The impact factor, which was created by a librarian named Eugene Garfield in the early 1950s, measures how often articles published in a journal are cited. Creating the impact factor helped make Garfield a multimillionaire—not a normal occurrence for librarians.
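
For anyone who hasn't run into the calculation before, the arithmetic behind the impact factor is quite simple. Here's a minimal sketch in Python of the standard two-year formula (the journal and the numbers are invented for illustration, not drawn from any real title),

def impact_factor(citations_to_prev_two_years, citable_items_prev_two_years):
    # Standard two-year impact factor: citations received this year to items
    # published in the previous two years, divided by the number of citable
    # items published in those two years.
    return citations_to_prev_two_years / citable_items_prev_two_years

# Invented numbers for an imaginary journal in 2013:
citations_2013_to_2011_2012 = 3200
citable_items_2011_2012 = 400
print(impact_factor(citations_2013_to_2011_2012, citable_items_2011_2012))  # 8.0

A number like 8.0 describes the average article in a journal, not any particular paper published in it, which is part of why relying on it to judge individual careers is so contested.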

The concern about ‘impact factors’, high or low, with regard to science publishing is a discussion I first stumbled across and mentioned in an April 22, 2010 posting, where I noted that the concern with metrics extends beyond an individual career or a university’s reputation and also affects national reputations. Kostas Kostarelos, in a Jan. 24, 2014 posting on the Guardian science blogs, notes this in his discussion of how China’s policies could affect the practice of science (Note: Links have been removed),

…  For example, if a Chinese colleague publishes an article in a highly regarded scientific journal they will be financially rewarded by the government – yes, a bonus! – on the basis of an official academic reward structure. Publication in one of the highest impact journals is currently rewarded with bonuses in excess of $30,000 – which is surely more than the annual salary of a starting staff member in any lab in China.

Such practices are disfiguring the fundamental principles of ethical integrity in scientific reporting and publishing, agreed and accepted by the scientific community worldwide. They introduce motives that have the potential to seriously corrupt the triangular relationship between scientist or clinician, publisher or editor and the public (taxpayer) funding agency. They exacerbate the damage caused by journal quality rankings based on “impact factor”, which is already recognised by the scientific community in the west as problematic.

Such measures also do nothing to help Chinese journals gain recognition by the rest of the world, as has been described by two colleagues from Zhejiang University in an article entitled “The outflow of academic articles from China: why is it happening and can it be stemmed?”.

At this point we have a system that rewards (with jobs, bonuses, etc.) prolific publication of one’s science achieved either by the sweat of one’s brow (and/or possibly beleaguered students’ brows) or from a clever algorithm. It’s a system that encourages cheating and distorts any picture we might have of scientific achievement on a planetary, national, regional, university, or individual basis.

Clearly we need to do something differently. Kakaes mentions an initiative designed for that purpose, the San Francisco Declaration on Research Assessment (DORA). Please do let me know in the Comments section if there are any other such efforts.

The State of Science and Technology in Canada, 2012 report—examined (part 2: the rest of the report)

The critiques I offered in relation to the report’s executive summary (written in early Oct. 2012 but not published ’til now) and other materials can remain more or less intact now that I’ve read the rest of the report (State of Science and Technology in Canada, 2012 [link to full PDF report]). Overall, I think it’s a useful and good report despite what I consider to be some significant shortcomings, not least of which is the uncritical acceptance of the view that Canada doesn’t patent enough of its science and that its copyright laws are insufficient.

My concern regarding the technometrics (counting patents) is definitely not echoed in the report,

One key weakness of these measures is that not all types of technology development lead to patentable technologies. Some, such as software development, are typically subject to copyright instead. This is particularly relevant for research fields where software development may be a key aspect of developing new technologies such as computer sciences or digital media. Even when patenting is applicable as a means of commercializing and protecting intellectual property (IP), not all inventions are patented. (p. 18 print, p. 42 PDF)

In my view this is a little bit like fussing over the electrical wiring when the foundations of your house are in such bad repair that the whole structure is in imminent danger of collapsing. As noted in my critique of the executive summary, the patent system in the US and elsewhere is in deep, deep trouble and is, in fact, hindering innovation. Here’s an interesting comment about patent issues being covered in the media (from a Dec. 27, 2012 posting by Mike Masnick for Techdirt),

There’s been a recent uptick in stories about patent trolling getting mainstream media attention, and the latest example is a recent segment on CBS’s national morning program, CBS This Morning, which explored how patent trolls are hurting the US economy …

… After the segment, done by Jeff Glor, one of the anchors specifically says to him [Austin Meyer of the Laminer company which is fighting a patent troll in court and getting coverage on the morning news]: “So it sounds like this is really stifling innovation and it hurts small businesses!”

Getting back to the report, I’m more in sympathy with the panel’s use of bibliometrics,

As a mode of research assessment, bibliometric analysis has several important advantages. First, these techniques are built on a well-developed foundation of quantitative data. Publication in peer-reviewed journals is a cornerstone of research dissemination in most scientific and academic disciplines, and bibliometric data are therefore one of the few readily available sources of quantitative information on research activity that allow for comparisons across many fields of research. Second, bibliometric analyses are able to provide information about both research productivity (i.e., the quantity of journal articles produced) and research impact (measured through citations). While there are important methodological issues associated with these metrics (e.g., database coverage by discipline, correct procedures for normalization and aggregation, self-citations, and negative citations, etc.), [emphasis mine] most bibliometric experts agree that, when used appropriately, citation based indicators can be valid measures of the degree to which research has had an impact on later scientific work … (p. 15 print, p. 39, PDF)
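
To make the ‘normalization’ issue flagged in that passage a little more concrete: indicators like the report’s Average Relative Citations (ARC) work by dividing each paper’s citation count by the world average for papers in the same field and year, then averaging the ratios. Here’s a rough Python sketch of that idea (the field baselines and papers below are invented for illustration, not taken from the report),

# Rough sketch of a field-normalized citation indicator (ARC-style).
# Each paper's citations are divided by the world average for its field and
# publication year, and the ratios are averaged; values above 1.0 mean the
# papers are cited more than the world average. All numbers are invented.

world_average = {
    ("physics", 2008): 10.0,
    ("clinical medicine", 2008): 14.0,
}

papers = [
    ("physics", 2008, 15),
    ("physics", 2008, 5),
    ("clinical medicine", 2008, 28),
]

ratios = [cites / world_average[(field, year)] for field, year, cites in papers]
print(round(sum(ratios) / len(ratios), 2))  # (1.5 + 0.5 + 2.0) / 3 = 1.33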

Still, I do think that a positive publication bias (i.e., the tendency to publish positive results over negative or inconclusive results) in the field of medical research should have been mentioned, as it is a major area of concern in the use of bibliometrics, especially since one of the identified areas of Canadian excellence is in the field of medical research.

The report’s critique of the opinion surveys has to be the least sophisticated in the entire report,

There are limitations related to the use of opinion surveys generally. The most important of these is simply that their results are, in the end, based entirely on the opinions of those surveyed. (p. 20 print, p. 44 PDF)

Let’s see if I’ve got this right. Counting the number of citations received by a paper, which was peer-reviewed (i.e., a set of experts were asked for their opinions about it prior to publication) and which may have been published due to a positive publication bias, yields data (bibliometrics) which are by definition more reliable than an opinion. In short, the Holy Grail (a sacred object in Christian traditions) is data, even though that data or ‘evidence’ is provably based on and biased by opinion, which the report writers themselves identify as a limitation. Talk about a conundrum.

Sadly, the humanities, arts, and social sciences (but especially the humanities and arts) posed quite the problem regarding evidence-based analysis,

While the Panel believes that most other evidence-gathering activities undertaken for this assessment are equally valid across all fields, the limitations of bibliometrics led the Panel to seek measures of the impact of HASS [Humanities, Arts, and Social Sciences] research that would be equivalent to the use of bibliometrics, and would measure knowledge dissemination by books, book chapters, international awards, exhibitions, and other arts productions (e.g., theatre, cinema, etc.). Despite considerable efforts to collect information, however, the Panel found the data to be sparse and methods to collect it unreliable, such that it was not possible to draw conclusions from the resulting data. In short, the available data for HASS-specific outputs did not match the quality and rigour of the other evidence collected for this report. As a result, this evidence was not used in the Panel’s deliberations.

Interestingly, the expert panel was led by Dr. Eliot Phillipson, Sir John and Lady Eaton Professor of Medicine Emeritus, [emphasis mine] University of Toronto, who received his MD in 1963. Evidence-based medicine is the ne plus ultra of medical publishing these days. Is this deep distress over a lack of evidence/data in other fields a reflection of the chair’s biases? In all the discussion and critique of the methodologies, there was no discussion of reflexivity, i.e., the researcher’s or, in this case, the individual panel members’ (individually or collectively) biases and their possible impact on the report. Even with so-called evidence-based medicine, bias and opinion are issues.

While the panel was not tasked to look into business-led R&D efforts (there is a forthcoming assessment focused on that question), mention was made of them in Chapter 3 (Research Investment) of the report. I was particularly pleased to see mention of the now-defunct Nortel, with its important century-long contribution to Canadian R&D efforts. [Full disclosure: I did contract work for Nortel on and off for two years.]

A closer look at recent R&D expenditure trends shows that Canada’s total investment in R&D has declined in real terms between 2006 and 2010, driven mainly by declining private-sector research performance. Both government and higher education R&D expenditures increased modestly over the same five-year period (growing by 4.5 per cent and 7.1 per cent respectively), while business R&D declined by 17 per cent (see Figure 3.3). Much of this decline can be attributed to the failing fortunes and bankruptcy of Nortel Networks Corporation, which was one of Canada’s top corporate R&D spenders for many years. Between 2008 and 2009 alone, global R&D expenditure at Nortel dropped by 48 per cent, from nearly $1.7 billion to approximately $865 million (Re$earch Infosource, 2010) with significant impact on Canada. Although growth in R&D expenditure at other Canadian companies, particularly Research In Motion, partially compensated for the decline at Nortel, the overall downward trend remains. (p. 30 print, p. 54 PDF)

Chapter 4 of the report (Research Productivity and Impact) is filled with colourful tables and various diagrams and charts illustrating areas of strength and weakness within the Canadian research endeavour, my concerns over the metrics notwithstanding. I was a bit startled by our strength in Philosophy and Theology (Table 4.2 on p. 41 print, p. 65 PDF) as it was not touted in the initial publicity about the report. Of course, they can’t mention everything, so there are some other pleasant surprises in here. Going in the other direction, I’m a little disturbed by the drop (down from 1.32 in 1999-2004 to 1.12 in 2005-2010) in the ICT (Information and Communication Technologies) specialization index but that is, as the report notes, a consequence of the Nortel loss, and ICT scores better on other measures.
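
For anyone wondering what a specialization index actually measures: it is usually computed as a country’s share of its own output in a field divided by the world’s share of output in that field, so a value above 1.0 means the country emphasizes the field more than the world average does. A quick sketch of the arithmetic (the numbers are invented, not the report’s),

# Sketch of a specialization index: Canada's share of its own papers in a
# field, divided by the world's share of papers in that field. A value of
# 1.0 means the same emphasis as the world average. Numbers are invented.

canada_ict_papers = 9000
canada_all_papers = 250000
world_ict_papers = 400000
world_all_papers = 8000000

si = (canada_ict_papers / canada_all_papers) / (world_ict_papers / world_all_papers)
print(round(si, 2))  # 0.72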

I very much appreciated the inclusion of the questions used in the surveys and the order in which they were asked, a practice which seems to be disappearing elsewhere. The discussion about possible biases and how the data was weighted to account for biases is interesting,

Because the responding population was significantly different than the sample population (p<0.01) for some countries, the data were weighted to correct for over- or under-representation. For example, Canadians accounted for 4.4 per cent of top-cited researchers, but 7.0 per cent of those that responded. After weighting, Canadians account for 4.4 per cent in the analyses that follow. This weighting changed overall results of how many people ranked each country in the top five by less than one per cent.

Even with weighting to remove bias in choice to respond, there could be a perception that self-selection is responsible for some results. Top-cited Canadian researchers in the population sample were not excluded from the survey but the results for Canada cannot be explained by self-promotion since 37 per cent of all respondents identified Canada among the top five countries in their field, but only 7 per cent (4.4 per cent after weighting) of respondents were from Canada. Similarly, 94 per cent of respondents identified the United States as a top country in their field, yet only 33 per cent (41 per cent after weighting) were from the United States. Furthermore, only 9 per cent of respondents had either worked or studied in Canada, and 28 per cent had no personal experience of, or association with, Canada or Canadian researchers (see Table 5.2). It is reasonable to conclude that the vast majority of respondents based their evaluation of Canadian S&T on its scientific contributions and reputation alone. (p. 65 print, p. 89 PDF)
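
The weighting described in that passage is standard post-stratification: each respondent is weighted by the ratio of their country’s share of the sampled population to its share of the respondents, so over-represented countries are scaled down and under-represented ones scaled up. Here’s a minimal Python sketch using the Canadian and American figures quoted above (I’m reading the ‘after weighting’ percentages as the population shares; the panel doesn’t spell out its exact procedure),

# Post-stratification sketch: weight = (country's share of the sampled
# population) / (country's share of respondents). Over-represented countries
# get weights below 1, under-represented countries get weights above 1.
# Shares are taken from the figures quoted above.

population_share = {"Canada": 0.044, "United States": 0.41}
respondent_share = {"Canada": 0.070, "United States": 0.33}

weights = {c: population_share[c] / respondent_share[c] for c in population_share}
print({c: round(w, 2) for c, w in weights.items()})
# {'Canada': 0.63, 'United States': 1.24}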

There is another possible bias not mentioned in the report, and that has to do with answering the question: What do you think my strengths and weaknesses are? If somebody asks you that question and you are replying directly, you are likely to focus on their strong points and be as gentle as possible about their weaknesses. Perhaps the panel should consider having another country ask those questions about Canadian research. We might find the conversation becomes a little more forthright and critical.

Chapter 6 of the report discusses research collaboration, which is acknowledged as poorly served by bibliometrics. Of course, collaboration is a strategy with which Canadians have succeeded, not least because we simply don’t have the resources to go it alone.

One of the features I quite enjoyed in this report is the series of spotlights. For example, there’s the one on stem cell research,

Spotlight on Canadian Stem Cell Research

Stem cells were discovered by two Canadian researchers, Dr. James Till and the late Dr. Ernest McCulloch, at the University of Toronto over 50 years ago. This great Canadian contribution to medicine laid the foundation for all stem cell research, and put Canada firmly at the forefront of this field, an international leadership position that is still maintained.

Stem cell research, which is increasingly important to the future of cell replacement therapy for diseased or damaged tissues, spans many disciplines. These disciplines include biology, genetics, bioengineering, social sciences, ethics and law, chemical biology, and bioinformatics. The research aims to understand the mechanisms that govern stem cell behaviour, particularly as it relates to disease development and ultimately treatments or cures.

Stem cell researchers in Canada have a strong history of collaboration that has been supported and strengthened since 2001 by the Stem Cell Network (SCN) (one of the federal Networks of Centres of Excellence), a network considered to be a world leader in the field. Grants awarded through the SCN alone have affected the work of more than 125 principal investigators working in 30 institutions from Halifax to Vancouver. Particularly noteworthy institutions include the Terry Fox Laboratory at the BC Cancer Agency; the Hotchkiss Brain Institute in Calgary; Toronto’s Hospital for Sick Children, Mount Sinai Hospital, University Health Network, and the University of Toronto; the Sprott Centre for Stem Cell Research in Ottawa; and the Institute for Research in Immunology and Cancer in Montréal. In 2010, a new Centre for the Commercialization of Regenerative Medicine was formed to further support stem cell initiatives of interest to industry partners.

Today, Canadian researchers are among the most influential in the stem cell and regenerative medicine field. SCN investigators have published nearly 1,000 papers since 2001 in areas such as cancer stem cells; the endogenous repair of heart, muscle, and neural systems; the expansion of blood stem cells for the treatment of a variety of blood-borne diseases; the development of biomaterials for the delivery and support of cellular structures to replace damaged tissues; the direct conversion of skin stem cells to blood; the evolutionary analysis of leukemia stem cells; the identification of pancreatic stem cells; and the isolation of multipotent blood stem cells capable of forming all cells in the human blood system. (p. 96 print, p. 120 PDF)

Getting back to the report and my concerns, Chapter 8 on S&T capacity focuses on science training and education,

• From 2005 to 2009, there were increases in the number of students graduating from Canadian universities at the college, undergraduate, master’s and doctoral levels, with the largest increase at the doctoral level.

• Canada ranks first in the world for its share of population with post-secondary education.

• International students comprise 11 per cent of doctoral students graduating from Canadian universities. The fields with the largest proportions of international students include Earth and Environmental Sciences; Mathematics and Statistics; Agriculture, Fisheries, and Forestry; and Physics and Astronomy.

• From 1997 to 2010, Canada experienced a positive migration flow of researchers, particularly in the fields of Clinical Medicine, Information and Communication Technologies (ICT), Engineering, and Chemistry. Based on Average Relative Citations, the quality of researchers emigrating and immigrating was comparable.

• In three-quarters of fields, the majority of top-cited researchers surveyed thought Canada has world-leading research infrastructure or programs. (p. 118 print, p. 142 PDF)

Getting back to more critical matters, I don’t see a reference to jobs in this report. It’s all very well to graduate a large number of science PhDs, which we do, but what’s the point if they can’t find work?

The Black Whole blog on the University Affairs website has discussed and continues to discuss the dearth of jobs in Canada for science graduates.

Chapter 9 of the report breaks down the information on a regional (provincial) basis. As you might expect, the research powerhouses are Ontario, Québec, Alberta, and BC. Chapter 10 summarizes the material on a field basis, i.e., Biology; Chemistry; Agriculture, Fisheries, and Forestry; Economics; Social Sciences; etc., and those results were widely discussed at the time and are mentioned in part 1 of this commentary.

One of the most striking results in the report is Chapter 11: Conclusions,

The geographic distribution of the six fields of strength is difficult to determine with precision because of the diminished reliability of data below the national level, and the vastly different size of the research enterprise in each province.

The most reliable data that are independent of size are provincial ARC scores. Using this metric, the leading provinces in each field are as follows:

  • Clinical Medicine: Ontario, Quebec, British Columbia, Alberta
  • Historical Studies: New Brunswick, Ontario, British Columbia
  • ICT: British Columbia, Ontario
  • Physics and Astronomy: British Columbia, Alberta, Ontario, Quebec
  • Psychology and Cognitive Sciences: British Columbia, Nova Scotia, Ontario
  • Visual and Performing Arts: Quebec [emphasis mine] (p. 193 print, p. 217 PDF)

Canada has an international reputation in the visual and performing arts which is driven by one province alone.

As for our fading national reputation in natural resources and environmental S&T, that seems predictable to almost any informed observer, given funding decisions over the last several years.

The report does identify some emerging strengths,

Although robust methods of identifying emerging areas of S&T are still in their infancy, the Panel used new bibliometric techniques to identify research clusters and their rates of growth. Rapidly emerging research clusters in Canada have keywords relating, most notably, to:

• wireless technologies and networking,

• information processing and computation,

• nanotechnologies and carbon nanotubes, and

• digital media technologies.

The Survey of Canadian S&T Experts pointed to personalized medicine and health care, several energy technologies, tissue engineering, and digital media as areas in which Canada is well placed to become a global leader in development and application. (p. 195 print; p. 219 PDF)

I wish I were better and faster at crunching numbers because I’d like to spend time examining the data more closely, but the reality is that all data is imperfect, so this report, like any snapshot, is an approximation. Still, I would have liked to have seen some mention of changing practices in science. For example, there’s the protein-folding game, Foldit, which has attracted over 50,000 players (citizen scientists) who have answered questions and posed possibilities that had not occurred to scientists. Whether this trend will continue or disappear is a question for the future. What I find disconcerting is how thoroughly this and other shifting practices (scientists publishing research in blogs) and thorny issues such as the highly problematic patent system were ignored. Individual panel members or the report writers themselves may have wanted to include some mention, but we’ll never know because the report is presented as a singular, united authority.

In any event, Bravo! to the expert panel and their support team as this can’t have been an easy job.

If you have anything to say about this commentary or the report, please do comment; I would love to hear more opinions.

The State of Science and Technology in Canada, 2012 report—examined (part 1: the executive summary)

In my Sept. 27, 2012 posting about its launch, we celebrated the Council of Canadian Academies’ report, The State of Science and Technology in Canada, 2012, unconditionally. Today (Dec. 2012), it’s time for a closer look.

I’m going to start with the report’s executive summary and some of the background information. Here’s the question the 18-member expert panel attempted to answer,

What is the current state of science and technology in Canada?

Additional direction was provided through two sub-questions:

Considering both basic and applied research fields, what are the scientific disciplines and technological applications in which Canada excels? How are these strengths distributed geographically across the country? How do these trends compare with what has been taking place in comparable countries?

In which scientific disciplines and technological applications has Canada shown the greatest improvement/decline in the last five years? What major trends have emerged? Which scientific disciplines and technological applications have the potential to emerge as areas of prominent strength for Canada?  (p. xi paper, p. 13 PDF)

Here’s more general information about the expert panel,

The Council appointed a multidisciplinary expert panel (the Panel) to address these questions. The Panel’s mandate spanned the full spectrum of fields in engineering, the natural sciences, health sciences, social sciences, the arts, and humanities. It focused primarily on research performed in the higher education sector, as well as the government and not-for-profit sectors. The mandate specifically excluded an examination of S&T performed in the private sector (which is the subject of a separate Council assessment on the state of industrial research and development). The Panel’s report builds upon, updates, and expands the Council’s 2006 report, The State of Science and Technology in Canada. (p. xi paper, p. 13 PDF)

As I noted in my Sept. 27, 2012 posting, the experts have stated,

  • The six research fields in which Canada excels are: clinical medicine, historical studies, information and communication technologies (ICT), physics and astronomy, psychology and cognitive sciences, and visual and performing arts.
  • Canadian science and technology is healthy and growing in both output and impact. With less than 0.5 per cent of the world’s population, Canada produces 4.1 per cent of the world’s research papers and nearly 5 per cent of the world’s most frequently cited papers.
  • In a survey of over 5,000 leading international scientists, Canada’s scientific research enterprise was ranked fourth highest in the world, after the United States, United Kingdom, and Germany.
  • Canada is part of a network of international science and technology collaboration that includes the most scientifically advanced countries in the world. Canada is also attracting high-quality researchers from abroad, such that over the past decade there has been a net migration of researchers into the country.
  • Ontario, Quebec, British Columbia and Alberta are the powerhouses of Canadian science and technology, together accounting for 97 per cent of total Canadian output in terms of research papers. These provinces also have the best performance in patent-related measures and the highest per capita numbers of doctoral students, accounting for more than 90 per cent of doctoral graduates in Canada in 2009.
  • Several fields of specialization were identified in other provinces, such as: agriculture, fisheries, and forestry in Prince Edward Island and Manitoba; historical studies in New Brunswick; biology in Saskatchewan; as well as earth and environmental sciences in Newfoundland and Labrador and Nova Scotia.

The Council did release a backgrounder describing the methodology the experts used to arrive at their conclusions,

In total, the Panel used a number of different methodologies to conduct this assessment, including: bibliometrics (the study of patterns in peer-reviewed journal articles); technometrics (the analysis of patent statistics and indicators), an analysis of highly qualified and skilled personnel; and opinion surveys of Canadian and international experts.

• To draw comparisons among the results derived through the different methodologies, and to integrate the findings, a common classification system was required. The Panel selected a classification system that includes 22 research fields composed of 176 sub-fields, which included fields in the humanities, arts, and social sciences.

Recognizing that some measurement tools used by the Panel (e.g. bibliometric measures) are a less relevant way of measuring science and technology strength in the humanities, arts, and social sciences, where research advances may be less often communicated in peer-reviewed journal articles, the Panel made considerable attempts to evaluate measures such as books and book chapters, exhibitions, and esteem measures such as international awards. However, the Panel was hampered by a lack of available data. As a result, the information and data collected did not meet the Council’s high standards and was excluded from the assessment.

• The Panel determined two measures of quality, a field’s international average relative citations (ARC) rank and its rank in the international survey, to be the most relevant in determining the field’s position compared with other advanced countries. Based on these measures of quality, the

Bibliometric Analysis (the study of patterns in peer-reviewed journal articles)

• Bibliometric analysis has several advantages, namely, that it is built on a well-developed foundation of quantitative data and it is able to provide information on research productivity and impact.

• For this assessment, the Panel relied heavily on bibliometrics to inform their deliberations. The Panel commissioned a comprehensive analysis of Canadian and world publication trends. It included consideration of many different indicators of output and impact, a study of collaboration patterns, and an analysis of researcher migration. Overall, the resulting research was extensive and critical for determining the research fields in which Canada excels.

• Standard bibliometrics do not identify patterns of collaboration among researchers, and may not adequately capture research activity within an interdisciplinary realm. Therefore, the Panel used advanced bibliometric techniques that allow for the identification of patterns of collaboration between Canadian researchers and those in other countries (based on the co-authorship of research papers); and clusters of related research papers, as an alternative approach to assessing Canada’s research strengths.

Technometrics (analysis of patent statistics and indicators)

• Technometrics is an important tool for determining trends in applied research. This type of analysis is routinely used by the Organisation for Economic Co-operation and Development (OECD) and other international organizations in comparing and assessing science and technology outputs across countries.

• In 2006, the Expert Panel on Science and Technology used technometrics to inform their work. In an effort to ensure consistency between the 2006 and the 2012 assessments, technometrics were once again used as a measurement tool.

• The 2012 Panel commissioned a full analysis of Canadian and international patent holdings in the United States Patent and Trademark Office (USPTO) to capture information about Canada’s patent stock and production of intellectual property relative to other advanced economies. Canadians accounted for 18,000 patented inventions in the USPTO, compared to 12,000 at the Canadian Intellectual Property Office during the period 2005-2010.

Opinion Surveys

• To capture a full range of Canadian science and technology activities and strengths, two extensive surveys were commissioned to gather opinions from Canadian experts and from the top one per cent of cited researchers from around the world.

• A survey of Canadian science and technology experts was conducted for the 2006 report. In 2012 this exercise was repeated, however, the survey was modified with three key changes:

o respondents were pre-chosen to ensure those responding were experts in Canadian science and technology;

o to allow comparisons of bibliometric data, the survey was based on the taxonomy of 22 scientific fields and 176 sub-fields; and

o a question regarding the identification of areas of provincial science and technology strength was added.

• To obtain the opinions of international science and technology experts regarding Canada’s science and technology strengths, the Panel conducted a survey of the top cited one percent of international researchers. Over 5,000 responded to the survey, including Canadians. This survey, combined with the results from the bibliometric analysis were used to determine the top six fields of research in which Canada excels.

…

Research Capacity

• The Panel conducted an analysis related to Canadian research capacity. This analysis drew evidence from a variety of sources including bibliometric data and existing information from publications by organizations such as the OECD and Statistics Canada.

• The Panel was also able to look at various Canadian research capacities which included research infrastructure and facilities, trends in Canada’s research faculty and student populations, the degree of collaboration among researchers in Canada and other countries, and researcher migration between Canada and other countries.

To sum it up, they used bibliometrics (how many citations, publications in peer-reviewed journals, etc.), technometrics (the number of patents filed, etc.), and opinion surveys, along with data from other publications. It sounds very impressive, but I am wondering why Canada is so often unmentioned as a top research country in analyses produced outside of Canada. In the 2011 OECD (Organisation for Economic Co-operation and Development) Science, Technology, and Industry scorecard, we didn’t place all that well, according to my Sept. 27, 2011 posting,

Other topics were covered as well, the page hosting the OECD scorecard information boasts a couple of animations, one of particular interest to me (sadly I cannot embed it here). The item of interest is the animation featuring 30 years of R&D investments in OECD and non-OECD countries. It’s a very lively 16 seconds and you may need to view it a few times. You’ll see some countries rocket out of nowhere to make their appearance on the chart (Finland and Korea come to mind) and you’ll see some countries progress steadily while others fall back. The Canadian trajectory shows slow and steady growth until approximately 2000 when we fall back for a year or two after which we remain stagnant. [emphasis added here]

Notably, the 2012 State of Science and Technology in Canada report does not mention investment in this sector as the OECD scorecard does, even though that’s usually one of the measures for assessing the health of a science and technology sector.

For reasons that are somewhat of a mystery to me, the report indicates dissatisfaction with Canada’s patent performance (we don’t patent often enough),

In contrast to the nation’s strong performance in knowledge generation is its weaker performance in patents and related measures. Despite producing 4.1 per cent of the world’s scientific papers, Canada holds only 1.7 per cent of world patents, and in 2010 had a negative balance of nearly five billion dollars in royalties and licensing revenues. Despite its low quantity of patents, Canada excels in international comparisons of quality, with citations to patents (ARC scores), ranking second in the world, behind the United States. (p. xiii print, p. 15 PDF)

I have written extensively about the problems with the patent system, especially the system in the US, as per Billions lost to patent trolls; US White House asks for comments on intellectual property (IP) enforcement; and more on IP, in my June 28, 2012 posting and many others. As an indicator or metric for excellence in science and technology, counting your patents (or technometrics as defined by the Council of Canadian Academies) seems problematic. I appreciate this is a standard technique practiced by other countries but couldn’t the panel have expressed some reservations about the practice? Yes, they mention problems with the methodology but they seem unaware that there is growing worldwide dissatisfaction with patent practices.

Thankfully this report is not just a love letter to ourselves. There was an acknowledgement that some areas of excellence have declined since the 2006 report. For those following the Canadian science and technology scene, it can’t be a surprise to see that natural resources and environmental science and technology (S&T) are among the declining areas (not so coincidentally there is less financial investment by the federal government),

This assessment is, in part, an update of the Council’s 2006 assessment of the state of S&T in Canada. Results of the two assessments are not entirely comparable due to methodological differences such as the bibliometric database and classification system used in the two studies, and the survey of top-cited international researchers which was not undertaken in the 2006 assessment. Nevertheless, the Panel concluded that real improvements have occurred in the magnitude and quality of Canadian S&T in several fields including Biology, Clinical Medicine, ICT, Physics and Astronomy, Psychology and Cognitive Sciences, Public Health and Health Services, and Visual and Performing Arts. Two of the four areas identified as strengths in the 2006 report — ICT and health and related life sciences and technologies — have improved by most measures since 2006.

The other two areas identified as strengths in the 2006 report — natural resources and environmental S&T — have not experienced the same improvement as Canadian S&T in general. In the current classification system, these broad areas are now represented mainly by the fields of Agriculture, Fisheries, and Forestry; and Earth and Environmental Sciences. The Panel mapped the current classification system for these fields to the 2006 system and is confident that the overall decline in these fields is real, and not an artefact of different classifications. Scientific output and impact in these fields were either static or declined in 2005–2010 compared to 1994–2004. It should be noted, however, that even though these fields are declining relative to S&T in general, both maintain considerable strength, with Canadian research in Agriculture, Fisheries, and Forestry ranked second in the world in the survey of international researchers, and Earth and Environmental Sciences ranked fourth.

I’m not sure when I’ll get to part 2 of this as I have much on my plate at the moment but I will get back to this.

Informing research choices—the latest report from the Council of Canadian Academies (part 1: report conclusions and context)

The July 5, 2012 news release from the Canadian Council of Academies (CCA) notes this about the Informing Research Choices: Indicators and Judgment report,

An international expert panel has assessed that decisions regarding science funding and performance can’t be determined by metrics alone. A combination of performance indicators and expert judgment are the best formula for determining how to allocate science funding.

The Natural Sciences and Engineering Research Council of Canada (NSERC) spends approximately one billion dollars a year on scientific research. Over one-third of that goes directly to support discovery research through its flagship Discovery Grants Program (DGP). However, concerns exist that funding decisions are made based on historical funding patterns and that this is not the best way to determine future funding decisions.

As NSERC strives to be at the leading edge for research funding practices, it asked the Council of Canadian Academies to assemble an expert panel that would look at global practices that inform funding allocation, as well as to assemble a library of indicators that can be used when assessing funding decisions. The Council’s expert panel conducted an in-depth assessment and came to a number of evidence-based conclusions.

The panel Chair, Dr. Rita Colwell commented, “the most significant finding of this panel is that quantitative indicators are best interpreted by experts with a deep and nuanced understanding of the research funding contexts in question, and the scientific issues, problems, questions and opportunities at stake.” She also added, “Discovery research in the natural sciences and engineering is a key driver in the creation of many public goods, contributing to economic strength, social stability, and national security. It is therefore important that countries such as Canada have a complete understanding of how best to determine allocations of its science funding.”

… Other panel findings discussed within the report include: a determination that many science indicators and assessment approaches are sufficiently robust; international best practices offer limited insight into science indicator use and assessment strategies; and mapping research funding allocation directly to quantitative indicators is far too simplistic, and is not a realistic strategy for Canada. The Panel also outlines four key principles for the use of indicators that can guide research funders and decision-makers when considering future funding decisions.

The full report, executive summary, abridged report, appendices, news release, and media backgrounder are available here.

I have taken a look at the full report and, since national funding schemes for the Natural Sciences and Engineering Research Council (and other science funding agencies of this ilk) are not my area of expertise, the best I can offer is an overview from an interested member of the public.

The report provides a very nice introduction to the issues the expert panel was addressing,

The problem of determining what areas of research to fund permeates science policy. Nations now invest substantial sums in supporting discovery research in natural sciences and engineering (NSE). They do so for many reasons. Discovery research helps to generate new technologies; to foster innovation and economic competitiveness; to improve quality of life; and to achieve other widely held social or policy objectives such as improved public health and health care, protection of the environment, and promotion of national security. The body of evidence on the benefits that accrue from these investments is clear: in the long run, public investments in discovery-oriented research yield real and tangible benefits to society across many domains.

These expenditures, however, are accompanied by an obligation to allocate public resources prudently. In times of increasing fiscal pressures and spending accountability, public funders of research often struggle to justify their funding decisions — both to the scientific community and the wider public. How should research funding agencies allocate their budgets across different areas of research? And, once allocations are made, how can the performance of those investments be monitored or assessed over time? These have always been the core questions of science policy, and they remain so today

Such questions are notoriously difficult to answer; however, they are not intractable. An emerging “science of science policy” and the growing field of scientometrics (the study of how to measure, monitor, and assess scientific research) provide quantitative and qualitative tools to support research funding decisions. Although a great deal of controversy remains about what and how to measure, indicator-based assessments of scientific work are increasingly common. In many cases these assessments indirectly, if not directly, inform research funding decisions.

In some respects, the primary challenge in science assessment today is caused more by an overabundance of indicators than by a lack of them. The plethora of available indicators may make it difficult for policy-makers or research funders to determine which metrics are most appropriate and informative in specific contexts. (p. 2 print version, p. 22 PDF)

Assessment systems tied to the allocation of public funds can be expected to be contentious. Since research funding decisions directly affect the income and careers of researchers, assessment systems linked to those decisions will invariably have an impact on researcher behaviour. Past experiences with science assessment initiatives have sometimes yielded unintended, and undesirable, impacts. In addition, poorly constructed or misused indicators have created scepticism among many scientists and researchers about the value and utility of these measures. As a result, the issues surrounding national science assessment initiatives have increasingly become contentious. In the United Kingdom and Australia, debates about national research assessment have been highly publicized in recent years. While such attention is testimony to the importance of these assessments, the occasionally strident character of the public debate about science metrics and evaluation can impede the development and adoption of good public policy. (p. 3 print version, p. 23 PDF)

Based on this introduction and the acknowledgement that there are ‘too many metrics’, I was looking for evidence that the panel would have specific recommendations for avoiding an over-reliance on metrics (which I see taking place and accelerating in many areas, not just science funding).

In the next section, however, the report focussed on how the expert panel researched this area. They relied on a literature survey (which I’m not going to dwell on) and case studies of the 10 countries they reviewed in depth. Here’s more about the case studies,

The Panel was charged with determining what the approaches used by funding agencies around the world had to offer about the use of science indicators and related best practices in the context of research in the NSE. As a result, the Panel developed detailed case studies on 10 selected countries. The purpose of these case studies was two-fold: (i) to ensure that the Panel had a fully developed, up-to-date understanding of indicators and practices currently used around the world; and (ii) to identify useful lessons for Canada from the experiences of research funding agencies in other countries. Findings and instructive examples drawn from these case studies are highlighted and discussed throughout this report. Summaries of the 10 case studies are presented in Appendix A

The 10 countries selected for the case studies satisfied one or more of the following four criteria established by the Panel:

Knowledge-powerful countries: countries that have demonstrated sustained leadership and commitment at the national level to fostering science and technology and/or supporting research and development in the NSE.

Leaders in science assessment and evaluation: countries that have notable or distinctive experience at the national level with use of science indicators or administration of national science assessment initiatives related to research funding allocation.

Emerging science and technology leaders: countries considered to be emerging “knowledge-powerful” countries and in the process of rapidly expanding support for science and technology, or playing an increasingly important role in the global context of research in the NSE.

Relevance to Canada: countries known to have special relevance to Canada and NSERC because of the characteristics of their systems of government or the nature of their public research funding institutions and mechanisms. (pp. 8-9 print version, pp. 28-29 PDF)

The 10 countries they studied closely are:

  • Australia
  • China
  • Finland
  • Germany
  • the Netherlands
  • Norway
  • Singapore
  • South Korea
  • United Kingdom (that’s more like four countries: Scotland, England, Wales, and Northern Ireland)
  • United States

The panel also examined other countries’ funding schemes, but not with the same intensity. I didn’t spend a lot of time on the case studies as they were either very general or far too detailed for my interests. Of course, I’m not the target audience.

The report offers a glossary and I highly recommend reading it in full because the use of language in these reports is not necessarily standard English. Here’s an excerpt,

The language used by policy-makers sometimes differs from that used by scientists. [emphasis mine] Even within the literature on science assessment, there can be inconsistency in the use of terms. For purposes of this report, the Panel employed the following definitions:*

Discovery research: inquiry-driven scientific research. Discovery research is experimental or theoretical work undertaken primarily to acquire new knowledge of the underlying foundations of phenomena and observable facts, without application or intended use (based on the OECD definition of “basic research” in OECD, 2002).

Assessment: a general term denoting the act of measuring performance of a field of research in the natural sciences and engineering relative to appropriate international or global standards. Assessments may or may not be connected to funding allocation, and may or may not be undertaken in the context of the evaluation of programs or policies.

Scientometrics: the science of analyzing and measuring science, including all quantitative aspects and models related to the production and dissemination of scientific and technological knowledge (De Bellis, 2009).

Bibliometrics: the quantitative indicators, data, and analytical techniques associated with the study of patterns in publications. In the context of this report, bibliometrics refers to those indicators and techniques based on data drawn from publications (De Bellis, 2009). (p. 10 print version, p. 30 PDF)

Next up: my comments and whether or not I found specific recommendations on how to avoid over-reliance on metrics.

Measuring professional and national scientific achievements; Canadian science policy conferences

I’m going to start with an excellent study about publication bias in science papers and careerism that I stumbled across this morning on physorg.com (from the news item),

Dr [Daniele] Fanelli [University of Edinburgh] analysed over 1300 papers that declared to have tested a hypothesis in all disciplines, from physics to sociology, the principal author of which was based in a U.S. state. Using data from the National Science Foundation, he then verified whether the papers’ conclusions were linked to the states’ productivity, measured by the number of papers published on average by each academic.

Findings show that papers whose authors were based in more “productive” states were more likely to support the tested hypothesis, independent of discipline and funding availability. This suggests that scientists working in more competitive and productive environments are more likely to make their results look “positive”. It remains to be established whether they do this by simply writing the papers differently or by tweaking and selecting their data.
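
In other words, the analysis boils down to testing whether a paper’s chance of reporting support for its hypothesis rises with the per-academic productivity of the first author’s state. A toy version of that kind of test might look like this in Python (the data are fabricated stand-ins, not Fanelli’s),

# Toy version of the analysis described above: is the probability of a
# "positive" result associated with the productivity (papers per academic)
# of the author's state? The data below are fabricated stand-ins.

from scipy import stats

papers_per_academic = [1.2, 1.2, 2.5, 2.5, 3.8, 3.8, 3.8, 5.1, 5.1, 5.1]
positive_result = [0, 1, 1, 0, 1, 1, 1, 1, 1, 1]  # 1 = hypothesis supported

# Point-biserial correlation between state productivity and positive results.
r, p = stats.pointbiserialr(positive_result, papers_per_academic)
print(round(r, 2), round(p, 3))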

I was happy to find out that Fanelli’s paper has been published by PLoS [Public Library of Science] ONE, an open access journal. From the paper [numbers in square brackets are citations found at the end of the published paper],

Quantitative studies have repeatedly shown that financial interests can influence the outcome of biomedical research [27], [28] but they appear to have neglected the much more widespread conflict of interest created by scientists’ need to publish. Yet, fears that the professionalization of research might compromise its objectivity and integrity had been expressed already in the 19th century [29]. Since then, the competitiveness and precariousness of scientific careers have increased [30], and evidence that this might encourage misconduct has accumulated. Scientists in focus groups suggested that the need to compete in academia is a threat to scientific integrity [1], and those guilty of scientific misconduct often invoke excessive pressures to produce as a partial justification for their actions [31]. Surveys suggest that competitive research environments decrease the likelihood to follow scientific ideals [32] and increase the likelihood to witness scientific misconduct [33] (but see [34]). However, no direct, quantitative study has verified the connection between pressures to publish and bias in the scientific literature, so the existence and gravity of the problem are still a matter of speculation and debate [35].

Fanelli goes on to describe his research methods and how he came to his conclusion that the pressure to publish may have a significant impact on ‘scientific objectivity’.

This paper provides an interesting counterpoint to a discussion about science metrics or bibliometrics taking place on (the journal) Nature’s website here. It was stimulated by Julia Lane’s recent article titled, Let’s Make Science Metrics More Scientific. The article is open access and comments are invited. From the article [numbers in square brackets refer to citations found at the end of the article],

Measuring and assessing academic performance is now a fact of scientific life. Decisions ranging from tenure to the ranking and funding of universities depend on metrics. Yet current systems of measurement are inadequate. Widely used metrics, from the newly-fashionable Hirsch index to the 50-year-old citation index, are of limited use [1]. Their well-known flaws include favouring older researchers, capturing few aspects of scientists’ jobs and lumping together verified and discredited science. Many funding agencies use these metrics to evaluate institutional performance, compounding the problems [2]. Existing metrics do not capture the full range of activities that support and transmit scientific ideas, which can be as varied as mentoring, blogging or creating industrial prototypes.
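
Since the Hirsch index comes up in that passage, it may help to recall what it actually computes: a researcher has index h if h of their papers have been cited at least h times each. A few lines of Python make the definition concrete (the citation counts are invented),

# Hirsch (h) index: the largest h such that the researcher has h papers with
# at least h citations each. Citation counts below are invented.

def h_index(citations):
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

print(h_index([25, 8, 5, 4, 3, 1, 0]))  # 4: four papers with at least 4 citations each

It compresses a whole career into one number, which is exactly the kind of limitation Lane’s article is pointing at.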

The range of comments is quite interesting; I was particularly taken by something Martin Fenner said,

Science metrics are not only important for evaluating scientific output, they are also great discovery tools, and this may indeed be their more important use. Traditional ways of discovering science (e.g. keyword searches in bibliographic databases) are increasingly superseded by non-traditional approaches that use social networking tools for awareness, evaluations and popularity measurements of research findings.

(Fenner’s blog along with more of his comments about science metrics can be found here. If this link doesn’t work, you can get to Fenner’s blog by going to Lane’s Nature article and finding him in the comments section.)

There are a number of issues here: how do we measure science work (citations in other papers?) and how do we define the impact of science work (do we use social networks?), which raises a further question: how do we measure impact when we’re talking about a social network?

Now, I’m going to add timeline as an issue. Over what period of time are we measuring the impact? I ask the question because of the memristor story. Dr. Leon Chua wrote a paper in 1971 that, apparently, didn’t receive all that much attention at the time but was cited in a 2008 paper which received widespread attention. Meanwhile, Chua had continued to theorize about memristors in a 2003 paper that received so little attention that Chua abandoned plans to write part 2. Since the recent burst of renewed interest in the memristor and his 2003 paper, Chua has decided to follow up with part 2, hopefully some time in 2011 (as per this April 13, 2010 posting). There’s one more piece to the puzzle: an earlier paper by F. Argall. From Blaise Mouttet’s April 5, 2010 comment here on this blog,

In addition HP’s papers have ignored some basic research in TiO2 multi-state resistance switching from the 1960’s which disclose identical results. See F. Argall, “Switching Phenomena in Titanium Oxide thin Films,” Solid State Electronics, 1968.
http://pdf.com.ru/a/ky1300.pdf

[ETA: April 22, 2010: Blaise Mouttet has provided a link to an article which provides more historical insight into the memristor story. http://knol.google.com/k/memistors-memristors-and-the-rise-of-strong-artificial-intelligence#]

How do you measure or even track all of that, short of some science writer taking the time to pursue the story and writing a nonfiction book about it?

I’m not counselling that the process be abandoned but, since it seems that people are revisiting the issues, it’s an opportune time to get all the questions on the table.

As for its importance, this process of trying to establish better and new science metrics may seem irrelevant to most people, but it has a much larger impact than even the participants appear to realize. Governments measure their scientific progress by touting the number of papers their scientists have produced, amongst other measures such as patents. Measuring the number of published papers has an impact on how governments want to be perceived internationally and within their own borders. Take, for example, something which has both international and national impact: the recent US National Nanotechnology Initiative (NNI) report to the President’s Council of Advisors on Science and Technology (PCAST). The NNI used the number of papers published as a way of measuring the US’s possibly eroding leadership in the field. (China published about 5000 while the US published about 3000.)

I don’t have much more to say other than I hope to see some new metrics.

Canadian science policy conferences

We have two such conferences and both are two years old in 2010. The first one is being held in Gatineau, Québec, May 12 – 14, 2010. Called Public Science in Canada: Strengthening Science and Policy to Protect Canadians [ed. note: protecting us from what?], the target audience for the conference seems to be government employees. David Suzuki (TV host, scientist, environmentalist, author, etc.) and Preston Manning (ex-politico) will be co-presenting a keynote address titled: Speaking Science to Power.

The second conference takes place in Montréal, Québec, Oct. 20-22, 2010. It’s being produced by the Canadian Science Policy Centre. Other than a notice on the home page, there’s not much information about their upcoming conference yet.

I did note that Adam Holbrook (aka J. Adam Holbrook) is both speaking at the May conference and serving as an advisory committee member for the folks who are organizing the October conference. At the May conference, he will be participating in a session titled: Fostering innovation: the role of public S&T. Holbrook is a local (to me) professor as he works at Simon Fraser University, Vancouver, Canada.

That’s all for today.