Tag Archives: CCA

Science, women and gender in Canada (part 1 of 2)

On Nov. 21, 2012, approximately 20 months after the incident that tangentially occasioned it, the Council of Canadian Academies (CCA) released its assessment, titled Strengthening Canada’s Research Capacity: The Gender Dimension; The Expert Panel on Women in University Research. From the Strengthening … webpage (Note: I have added a reference and link to a report on CERC [Canada Excellence Research Chairs] gender issues in the following excerpt),

After the notable absence of female candidates in the Canada Excellence Research Chairs (CERC) program, the Minister of Industry, in March 2010, struck an ad-hoc panel to examine the program’s selection process. The ad-hoc panel found that the lack of female representation was not due to active choices made during the CERC selection process. [Dowdeswell, E., Fortier, S., & Samarasekera, I. (2010). Report to the Minister of Industry of the Ad Hoc Panel on CERC Gender Issues. Ottawa (ON): Industry Canada.] As a result, the Council of Canadian Academies received a request to undertake an assessment of the factors that influence university research careers of women, both in Canada and internationally.

To conduct the assessment, the Council convened an expert panel of 15 Canadian and international experts from diverse fields, which was chaired by Dr. Lorna Marsden, President emeritus and Professor, York University.

For anyone unfamiliar with the CERC programme,

The Canada Excellence Research Chairs (CERC) Program awards world-class researchers up to $10 million over seven years to establish ambitious research programs at Canadian universities.

My commentary is primarily focused on the assessment, not the preceding report from the ad hoc panel. Nor am I commenting on every single aspect of the assessment; I focus on those elements of the report that caught my attention.

There is much to appreciate in this assessment/report; unfortunately, the cover image cannot be included in that praise. By choosing a photograph, the designer immediately entered shark-infested waters, metaphorically speaking. From a semiotic perspective, photographs are a rich and much-studied object of criticism. Having a photograph of an attractive, middle-aged white woman with blonde hair (a MILF, depending on your tastes) who’s surrounded by ‘adoring’ students (standing in for her children?) on the cover of this assessment suggests an obliviousness to nuance that is somewhat unexpected. Happily, the image is not reflective of the content.

The report lays out the basis for this assessment,

There are many reasons for concern at the lack of proportional representation of women in senior positions in all facets of our society, including politics, law, medicine, the arts, business, and academia. The underrepresentation of women in any of these areas is a concern considering the fundamental Canadian values of equality, fairness, and justice, as outlined in the Canadian Human Rights Act, the Canadian Charter of Rights and Freedoms, and the Employment Equity Act. This report focuses on women in academia: the 11,064 women with PhDs who are employed full-time in degree-granting institutions. In comparison, there are 22,875 men in this category (see Table 3.1). Besides educating millions of students, these researchers and innovators are working to address the major issues Canada faces in the 21st century, including climate change, demographic shifts, healthcare, social inequality, sustainable natural resources management, cultural survival, as well as the role Canada plays as an international actor. These contributions are in addition to the basic, or knowledge discovery, research that is one of the main duties of academic researchers. In the knowledge economy, a talent pool of Canada’s top thinkers, researchers and innovators is needed to help secure and build Canada’s economic edge. The wider the pool is from which to draw, the more perspectives, experiences, and ideas will be brought to the creative process. [emphasis mine] Arguments for fully including women in research careers range from addressing skills shortages and increasing innovation potential by accessing wider talent pools, to greater market development, stronger financial performance, better returns on human resource investments, and developing a better point from which to compete in the intensifying global talent race. (p. 15 PDF; p. xiii print)

I appreciate the reference to fundamental values in Canadian society as it is important, but I suspect the portion I’ve highlighted contains the seeds of an argument that is far more persuasive for power brokers. It was a very smart move.

It is possible to skim this report by reading the executive summary and the Key Messages page included after each chapter heading, save the final chapter. They’ve done a good job of making the report easy to read for anyone short on time who would still prefer the complete assessment to an abridged version.

The Chapter 1 Key Messages are,

Chapter Key Messages

• While many reports have focused specifically on women in science, technology, engineering, and mathematics careers, this assessment employs comparative analyses to examine the career trajectories of women researchers across a variety of disciplines. The Panel was able to respond to the charge using a combination of research methods, but their analyses were sometimes hindered by a paucity of key data sets.

• In an attempt not to simply repeat numerous studies of the past on women in research careers, the Panel used a life course model to examine the data from a new perspective. This conceptual framework enabled the Panel to consider the multidimensional nature of human lives as well as the effects of external influences on the career trajectories of women researchers.

• Women are now present in all areas of research, including those areas from which they have previously been absent. Over time, institutions have become more inclusive, and Canadian governments have created policies and legislation to encourage more gender equity. Collective bargaining has contributed to this process. Clearly, the advancement of women in research positions relies on the contributions of individuals, institutions and government.

• Since the 1970s, there has been major progress such that women have been obtaining PhDs and entering the academy as students and faculty at increasing rates. However, women remain underrepresented at the highest levels of academia, as demonstrated by their low numbers in the Canada Research Chairs (CRC) program, and their absence from the Canada Excellence Research Chairs (CERC) program. There is considerable room for improvement in women’s representation as faculty.

• Higher education research and development funding has nearly doubled in the past decade. However, the amount of funding allocated to core grants and scholarship programs varies among the tri-council agencies [SSHRC, Social Sciences and Humanities Research Council; NSERC, Natural Sciences and Engineering Research Council; and CIHR, Canadian Institutes of Health Research], with the majority of funds available to researchers sponsored by NSERC and CIHR. This pattern is generally replicated in the Canada Research Chairs and the Canada Excellence Research Chairs programs. As noted in the 2003 Human Rights Complaint regarding the Canada Research Chairs program, women are least represented in the areas of research that are the best funded. (p. 33 PDF; p. 3 print) [emphasis mine]

In response to the issue of women being least represented in the best-funded areas of research, the panel elected to do this,

The Panel noted that many reports have focused on women in science, technology, and engineering research careers (due in part to the fact that women have been significantly underrepresented in these fields) yet relatively little attention has been paid to women researchers in the humanities, social sciences, and education. This is despite the fact that 58.6 per cent of doctoral students in these disciplines are women (see Chapter 3), and that their research contributions have profoundly affected the study of poverty, violence, the welfare state, popular culture, and literature, to note only a few examples. Considering this, the Panel’s assessment incorporates a comparative, interdisciplinary analysis, with a focus on the broader category of women in university research. In order to identify the areas where women are the most and least represented, Panellists compiled data and research that describe where Canadian female researchers are — and are not — in terms of both discipline and rank. Where possible, this study also analyzes the situation of women researchers outside of academia so as to paint a clearer picture of female researchers’ career trajectories. (pp. 37/8 PDF; pp. 7/8 print) [emphases mine]

Bringing together all kinds of research where women are both over- and underrepresented, and including research undertaken outside the academic environment, was thoughtful. I also particularly liked this passage,

American research suggests that holding organizational leaders accountable for implementing equity practices is a particularly effective way of enhancing the diversity of employees (Kalev et al., 2006), indicating that reporting and monitoring mechanisms are key to success. [emphasis mine] The Panel observed that meeting these commitments requires the proper implementation of accountability mechanisms, such as reporting and monitoring schemes. (p. 44 PDF; p. 14 print)

Juxtaposing the comment about leaders being held accountable for equity practices with the comment I emphasized earlier (” … a talent pool of Canada’s top thinkers, researchers and innovators is needed to help secure and build Canada’s economic edge …”) could suggest an emergent theme about leadership and the current discourse about innovation.

To get a sense of which disciplines and what research areas are rewarded within the Canada Research Chairs programme, read this from the assessment,

Similarly, while 80 per cent of Canada Research Chairs are distributed among researchers in NSERC and CIHR disciplines, SSHRC Chairs represent only 20 per cent of the total — despite the fact that the majority (60 per cent) of the Canadian professoriate come from SSHRC disciplines (Grant & Drakich, 2010). Box 1.1 describes the gendered implications of this distribution, as well as the history of the program. (p. 45 PDF; p. 15 print)

What I find intriguing here isn’t just the disparity, i.e., 60% of the researchers chasing after 20% of the funds (yes, the physical sciences are more expensive, but those percentages still seem out of line), but that the social sciences and humanities are not really included in the innovation rubric except here in this assessment. Still, despite the inclusion of the visual and performing arts in the State of Science and Technology in Canada, 2012 report issued by the CCA in Sept. 2012 (part 1 of my commentary on that assessment is in this Dec. 28, 2012 posting; part 2 of my commentary is in this Dec. 28, 2012 posting), there is no mention of them in this assessment/report of gender and science.

I did particularly like how the panel approached data collection and analysis,

Coming from a variety of disciplinary backgrounds, Panellists brought with them a range of methodological expertise and preferences. Through a combination of quantitative and qualitative data, the Panel was able to identify and analyze factors that affect the career trajectories of women researchers in Canada (see Appendix 1 for full details). In addition to an extensive literature review of the national and international research and evidence related to the topic, the Panel collected information in the form of data sets and statistics, heard from expert witnesses, conducted interviews with certain stakeholders from academia and industry, and analyzed interview and survey results from their secondary analysis of Canada Research Chairs data (see Appendix 5 for a full description of methodology and results). Together, these methods contributed to the balanced approach that the Panel used to understand the status of women in Canadian university research careers.

In addition, the Panel took an innovative approach to painting a more vibrant picture of the experience of women professors by incorporating examples from academic “life-writing.” Life-writing is the generic name given to a variety of forms of personal narrative — autobiography, biography, personal essays, letters, diaries, and memoirs. Publishing personal testimony is a vital strategy for marginalized groups to claim their voices and tell their own stories, and academic women’s life-writing adds vital evidence to a study of women in university careers (Robbins et al., 2011). The first study of academic life-writing appeared in the U.S. in 2008 (Goodall, 2008); as yet, none exists for Canada. Recognizing the benefits of this approach, which focuses on the importance of women’s voices and stories, the Panel chose to weave personal narrative from women academics throughout the body of the report to illuminate the subject matter. As with the data gleaned from the Panel’s secondary analysis of Canada Research Chairs data, these cases highlight the experience of an articulate and determined minority of women who are prepared and positioned to speak out about structural and personal inequities. More comprehensive surveys are required to establish the precise extent of the problems they so effectively illustrate. (pp. 49/50 PDF; pp. 19/20 print)

It’s nice to note that they include a very broad range of information as evidence. After all, evidence can take many forms; not all evidence can be contained in a table of data, nor is all data necessarily evidence. That said, there were some other issues with data and evidence,

Despite the extensive literature on the subject, the Panel identified some data limitations. While these limitations made some analyses difficult, the Panel was able to effectively respond to the charge by using the combination of research methods described above. Data limitations identified by the Panel include:

• relatively little research specific to the Canadian context;

• lack of longitudinal data;

• relatively few studies (both quantitative and qualitative) dealing with fields such as the humanities and social sciences;

• lack of data on diversity in Canadian academia, including intersectional data;

• lack of comprehensive data and evidence from the private and government sectors; and

• difficulty in comparing some international data due to differences in disciplinary classifications. (p. 50 PDF; p. 20 print)

I think this does it for part 1 of my commentary.

The State of Science and Technology in Canada, 2012 report—examined (part 2: the rest of the report)

The critiques I offered in relation to the report’s executive summary (written in early Oct. 2012 but not published ’til now) and other materials can remain more or less intact now that I’ve read the rest of the report (State of Science and Technology in Canada, 2012 [link to full PDF report]). Overall, I think it’s a useful and good report despite what I consider to be some significant shortcomings, not least of which is the uncritical acceptance of the view that Canada doesn’t patent enough of its science and that its copyright laws are insufficient.

My concern regarding the technometrics (counting patents) is definitely not echoed in the report,

One key weakness of these measures is that not all types of technology development lead to patentable technologies. Some, such as software development, are typically subject to copyright instead. This is particularly relevant for research fields where software development may be a key aspect of developing new technologies such as computer sciences or digital media. Even when patenting is applicable as a means of commercializing and protecting intellectual property (IP), not all inventions are patented. (p. 18 print, p. 42 PDF)

In my view this is a little bit like fussing over the electrical wiring when the foundations of your house are in such bad repair that the whole structure is in imminent danger of falling down. As noted in my critique of the executive summary, the patent system in the US and elsewhere is in deep, deep trouble and is, in fact, hindering innovation. Here’s an interesting comment about patent issues being covered in the media (from a Dec. 27, 2012 posting by Mike Masnick for Techdirt),

There’s been a recent uptick in stories about patent trolling getting mainstream media attention, and the latest example is a recent segment on CBS’s national morning program, CBS This Morning, which explored how patent trolls are hurting the US economy …

… After the segment, done by Jeff Glor, one of the anchors specifically says to him [Austin Meyer of Laminar Research, which is fighting a patent troll in court and getting coverage on the morning news]: “So it sounds like this is really stifling innovation and it hurts small businesses!”

Getting back to the report, I’m in more sympathy with the panel’s use of bibliometrics,

As a mode of research assessment, bibliometric analysis has several important advantages. First, these techniques are built on a well-developed foundation of quantitative data. Publication in peer-reviewed journals is a cornerstone of research dissemination in most scientific and academic disciplines, and bibliometric data are therefore one of the few readily available sources of quantitative information on research activity that allow for comparisons across many fields of research. Second, bibliometric analyses are able to provide information about both research productivity (i.e., the quantity of journal articles produced) and research impact (measured through citations). While there are important methodological issues associated with these metrics (e.g., database coverage by discipline, correct procedures for normalization and aggregation, self-citations, and negative citations, etc.), [emphasis mine] most bibliometric experts agree that, when used appropriately, citation based indicators can be valid measures of the degree to which research has had an impact on later scientific work … (p. 15 print, p. 39, PDF)
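The parenthetical about “correct procedures for normalization” is doing a lot of work in that passage. Here’s a minimal sketch (my own illustration with invented numbers, not the report’s method) of the field- and year-normalized citation score that underlies metrics like the Average Relative Citations (ARC) figures the report cites elsewhere: each paper’s citations are divided by the world average for papers of the same field and year, and those ratios are then averaged, so a score of 1.0 means citation performance at the world average.

```python
def average_relative_citations(papers, world_baselines):
    """Average of each paper's citations divided by the world mean
    for papers in the same field and publication year."""
    ratios = [
        p["citations"] / world_baselines[(p["field"], p["year"])]
        for p in papers
        if world_baselines[(p["field"], p["year"])] > 0
    ]
    return sum(ratios) / len(ratios) if ratios else 0.0

# Hypothetical unit with three papers; baselines are world means per field-year.
papers = [
    {"field": "Chemistry", "year": 2008, "citations": 12},
    {"field": "Chemistry", "year": 2009, "citations": 3},
    {"field": "ICT", "year": 2009, "citations": 8},
]
world_baselines = {
    ("Chemistry", 2008): 10.0,
    ("Chemistry", 2009): 6.0,
    ("ICT", 2009): 4.0,
}
print(average_relative_citations(papers, world_baselines))  # ~1.23, above world average
```

Every number above is made up; the point is that the choice of baseline (which database, which field classification) is itself one of the methodological issues the panel flags.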

Still, I do think that positive publication bias (i.e., the tendency to publish positive results over negative or inconclusive results) in the field of medical research should have been mentioned, as it is a major area of concern in the use of bibliometrics, especially since one of the identified areas of Canadian excellence is medical research.

The report’s critique of the opinion surveys has to be the least sophisticated in the entire report,

There are limitations related to the use of opinion surveys generally. The most important of these is simply that their results are, in the end, based entirely on the opinions of those surveyed. (p. 20 print, p. 44 PDF)

Let’s see if I’ve got this right. Counting the citations received by a paper, which was peer-reviewed (i.e., a set of experts were asked for their opinions about the paper prior to publication) and which may have been published due to a positive publication bias, yields data (bibliometrics) which are by definition more reliable than opinion. In short, the Holy Grail (a sacred object in Christian traditions) is data, even though that data or ‘evidence’ is provably based on and biased by opinion, which the report writers themselves identify as a limitation. Talk about a conundrum.

Sadly, the humanities, arts, and social sciences (but especially the humanities and arts) posed quite the problem regarding evidence-based analysis,

While the Panel believes that most other evidence-gathering activities undertaken for this assessment are equally valid across all fields, the limitations of bibliometrics led the Panel to seek measures of the impact of HASS [Humanities, Arts, and Social Sciences] research that would be equivalent to the use of bibliometrics, and would measure knowledge dissemination by books, book chapters, international awards, exhibitions, and other arts productions (e.g., theatre, cinema, etc.). Despite considerable efforts to collect information, however, the Panel found the data to be sparse and methods to collect it unreliable, such that it was not possible to draw conclusions from the resulting data. In short, the available data for HASS-specific outputs did not match the quality and rigour of the other evidence collected for this report. As a result, this evidence was not used in the Panel’s deliberations.

Interestingly, the expert panel was led by Dr. Eliot Phillipson, Sir John and Lady Eaton Professor of Medicine Emeritus, [emphasis mine] University of Toronto, who received his MD in 1963. Evidence-based medicine is the ne plus ultra of medical publishing these days. Is this deep distress over a lack of evidence/data in other fields a reflection of the chair’s biases? In all the discussion and critique of the methodologies, there was no discussion of reflexivity, i.e., the researcher’s or, in this case, the individual panel members’ (individually or collectively) biases and their possible impact on the report. Even with so-called evidence-based medicine, bias and opinion are issues.

While the panel was not tasked to look into business-led R&D efforts (there is a forthcoming assessment focused on that question), mention was made of them in Chapter 3 (Research Investment) of the report. I was particularly pleased to see mention of the now-defunct Nortel, with its important century-long contribution to Canadian R&D efforts. [Full disclosure: I did contract work for Nortel on and off for two years.]

A closer look at recent R&D expenditure trends shows that Canada’s total investment in R&D has declined in real terms between 2006 and 2010, driven mainly by declining private-sector research performance. Both government and higher education R&D expenditures increased modestly over the same five-year period (growing by 4.5 per cent and 7.1 per cent respectively), while business R&D declined by 17 per cent (see Figure 3.3). Much of this decline can be attributed to the failing fortunes and bankruptcy of Nortel Networks Corporation, which was one of Canada’s top corporate R&D spenders for many years. Between 2008 and 2009 alone, global R&D expenditure at Nortel dropped by 48 per cent, from nearly $1.7 billion to approximately $865 million (Re$earch Infosource, 2010) with significant impact on Canada. Although growth in R&D expenditure at other Canadian companies, particularly Research In Motion, partially compensated for the decline at Nortel, the overall downward trend remains. (p. 30 print, p. 54 PDF)

Chapter 4 of the report (Research Productivity and Impact) is filled with colourful tables and various diagrams and charts illustrating areas of strength and weakness within the Canadian research endeavour, my concerns over the metrics notwithstanding. I was a bit startled by our strength in Philosophy and Theology (Table 4.2 on p. 41 print, p. 65 PDF) as it was not touted in the initial publicity about the report. Of course, they can’t mention everything, so there are some other pleasant surprises in here. Going in the other direction, I’m a little disturbed by the drop (down from 1.32 in 1999-2004 to 1.12 in 2005-2010) in the ICT (Information and Communication Technologies) specialization index, but that is, as the report notes, a consequence of the Nortel loss, and ICT scores better in other measures.
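For readers unfamiliar with the metric, a specialization index of this kind is conventionally a ratio of shares; this is my gloss, not the report’s printed formula:

```latex
\mathrm{SI}_{\mathrm{ICT}} =
  \frac{\text{Canadian ICT publications} \,/\, \text{all Canadian publications}}
       {\text{world ICT publications} \,/\, \text{all world publications}}
```

A value above 1 means Canada publishes proportionally more ICT research than the world does, so the slide from 1.32 to 1.12 would mean ICT shrank as a share of Canadian output relative to the world benchmark, consistent with the Nortel collapse described in Chapter 3.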

I very much appreciated the inclusion of the questions used in the surveys and the order in which they were asked, a practice which seems to be disappearing elsewhere. The discussion about possible biases and how the data were weighted to account for them is interesting,

Because the responding population was significantly different than the sample population (p<0.01) for some countries, the data were weighted to correct for over- or under-representation. For example, Canadians accounted for 4.4 per cent of top-cited researchers, but 7.0 per cent of those that responded. After weighting, Canadians account for 4.4 per cent in the analyses that follow. This weighting changed overall results of how many people ranked each country in the top five by less than one per cent.

Even with weighting to remove bias in choice to respond, there could be a perception that self-selection is responsible for some results. Top-cited Canadian researchers in the population sample were not excluded from the survey but the results for Canada cannot be explained by self-promotion since 37 per cent of all respondents identified Canada among the top five countries in their field, but only 7 per cent (4.4 per cent after weighting) of respondents were from Canada. Similarly, 94 per cent of respondents identified the United States as a top country in their field, yet only 33 per cent (41 per cent after weighting) were from the United States. Furthermore, only 9 per cent of respondents had either worked or studied in Canada, and 28 per cent had no personal experience of, or association with, Canada or Canadian researchers (see Table 5.2). It is reasonable to conclude that the vast majority of respondents based their evaluation of Canadian S&T on its scientific contributions and reputation alone. (p. 65 print, p. 89 PDF)
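To make the arithmetic concrete, here’s a minimal sketch (mine, not the panel’s; the report doesn’t publish its weighting procedure as code) of the reweighting described above. Each country’s responses are weighted by (share of the sample population) / (share of respondents), using the shares quoted in the passage:

```python
def poststratification_weights(population_share, respondent_share):
    """Weight each country's responses by population share / respondent share,
    shrinking over-represented groups and boosting under-represented ones."""
    return {c: population_share[c] / respondent_share[c] for c in population_share}

# Shares taken from the passage above: Canadians were 4.4% of top-cited
# researchers but 7.0% of respondents; Americans were 41% of the sample
# population but 33% of respondents.
population_share = {"Canada": 0.044, "United States": 0.41}
respondent_share = {"Canada": 0.070, "United States": 0.33}

print(poststratification_weights(population_share, respondent_share))
# {'Canada': 0.628..., 'United States': 1.242...}: each Canadian response
# counts for about 0.63 of a respondent, each American for about 1.24.
```

That is how Canadians can be 7 per cent of respondents before weighting and 4.4 per cent after.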

There is another possible bias not mentioned in the report, and it has to do with answering the question: what do you think my strengths and weaknesses are? If somebody asks you that question and you are replying directly, you are likely to focus on their strong points and be as gentle as possible about their weaknesses. Perhaps the panel should consider having another country ask those questions about Canadian research. We might find the conversation becomes a little more forthright and critical.

Chapter 6 of the report discusses research collaboration, which is acknowledged as poorly served by bibliometrics. Of course, collaboration is a strategy with which Canadians have succeeded, not least because we simply don’t have the resources to go it alone.

One of the features I quite enjoyed in this report is the series of spotlights. For example, there’s the one on stem cell research,

Spotlight on Canadian Stem Cell Research

Stem cells were discovered by two Canadian researchers, Dr. James Till and the late Dr. Ernest McCulloch, at the University of Toronto over 50 years ago. This great Canadian contribution to medicine laid the foundation for all stem cell research, and put Canada firmly at the forefront of this field, an international leadership position that is still maintained.

Stem cell research, which is increasingly important to the future of cell replacement therapy for diseased or damaged tissues, spans many disciplines. These disciplines include biology, genetics, bioengineering, social sciences, ethics and law, chemical biology, and bioinformatics. The research aims to understand the mechanisms that govern stem cell behaviour, particularly as it relates to disease development and ultimately treatments or cures.

Stem cell researchers in Canada have a strong history of collaboration that has been supported and strengthened since 2001 by the Stem Cell Network (SCN) (one of the federal Networks of Centres of Excellence), a network considered to be a world leader in the field. Grants awarded through the SCN alone have affected the work of more than 125 principal investigators working in 30 institutions from Halifax to Vancouver. Particularly noteworthy institutions include the Terry Fox Laboratory at the BC Cancer Agency; the Hotchkiss Brain Institute in Calgary; Toronto’s Hospital for Sick Children, Mount Sinai Hospital, University Health Network, and the University of Toronto; the Sprott Centre for Stem Cell Research in Ottawa; and the Institute for Research in Immunology and Cancer in Montréal. In 2010, a new Centre for the Commercialization of Regenerative Medicine was formed to further support stem cell initiatives of interest to industry partners.

Today, Canadian researchers are among the most influential in the stem cell and regenerative medicine field. SCN investigators have published nearly 1,000 papers since 2001 in areas such as cancer stem cells; the endogenous repair of heart, muscle, and neural systems; the expansion of blood stem cells for the treatment of a variety of blood-borne diseases; the development of biomaterials for the delivery and support of cellular structures to replace damaged tissues; the direct conversion of skin stem cells to blood; the evolutionary analysis of leukemia stem cells; the identification of pancreatic stem cells; and the isolation of multipotent blood stem cells capable of forming all cells in the human blood system. (p. 96 print, p. 120 PDF)

Getting back to the report and my concerns, Chapter 8 on S&T capacity focuses on science training and education,

• From 2005 to 2009, there were increases in the number of students graduating from Canadian universities at the college, undergraduate, master’s and doctoral levels, with the largest increase at the doctoral level.

• Canada ranks first in the world for its share of population with post-secondary education.

• International students comprise 11 per cent of doctoral students graduating from Canadian universities. The fields with the largest proportions of international students include Earth and Environmental Sciences; Mathematics and Statistics; Agriculture, Fisheries, and Forestry; and Physics and Astronomy.

• From 1997 to 2010, Canada experienced a positive migration flow of researchers, particularly in the fields of Clinical Medicine, Information and Communication Technologies (ICT), Engineering, and Chemistry. Based on Average Relative Citations, the quality of researchers emigrating and immigrating was comparable.

• In three-quarters of fields, the majority of top-cited researchers surveyed thought Canada has world-leading research infrastructure or programs. (p. 118 print, p. 142 PDF)

Getting back to more critical matters, I don’t see a reference to jobs in this report. It’s all very well to graduate a large number of science PhDs, which we do, but what’s the point if they can’t find work?

The Black Whole blog on the University Affairs website has discussed and continues to discuss the dearth of jobs in Canada for science graduates.

Chapter 9 of the report breaks down the information on a regional (provincial) basis. As you might expect, the research powerhouses are Ontario, Québec, Alberta, and BC. Chapter 10 summarizes the material on a field basis, i.e., Biology; Chemistry; Agriculture, Fisheries, and Forestry; Economics; Social Sciences; etc., and those results were widely discussed at the time and are mentioned in part 1 of this commentary.

One of the most striking results in the report is in Chapter 11: Conclusions,

The geographic distribution of the six fields of strength is difficult to determine with precision because of the diminished reliability of data below the national level, and the vastly different size of the research enterprise in each province.

The most reliable data that are independent of size are provincial ARC scores. Using this metric, the leading provinces in each field are as follows:

  • Clinical Medicine: Ontario, Quebec, British Columbia, Alberta
  • Historical Studies: New Brunswick, Ontario, British Columbia
  • ICT: British Columbia, Ontario
  • Physics and Astronomy: British Columbia, Alberta, Ontario, Quebec
  • Psychology and Cognitive Sciences: British Columbia, Nova Scotia, Ontario
  • Visual and Performing Arts: Quebec [emphasis mine] (p. 193 print, p. 217 PDF)

Canada has an international reputation in visual and performing *arts* which is driven by one province alone.

As for our fading national reputation in natural resources and environmental S&T, that seems predictable to almost any informed observer, given funding decisions over the last several years.

The report does identify some emerging strengths,

Although robust methods of identifying emerging areas of S&T are still in their infancy, the Panel used new bibliometric techniques to identify research clusters and their rates of growth. Rapidly emerging research clusters in Canada have keywords relating, most notably, to:

• wireless technologies and networking,

• information processing and computation,

• nanotechnologies and carbon nanotubes, and

• digital media technologies.

The Survey of Canadian S&T Experts pointed to personalized medicine and health care, several energy technologies, tissue engineering, and digital media as areas in which Canada is well placed to become a global leader in development and application. (p. 195 print; p. 219 PDF)

I wish I were better and faster at crunching numbers because I’d like to spend time examining the data more closely, but the reality is that all data is imperfect, so this report, like any snapshot, is an approximation. Still, I would have liked to see some mention of changing practices in science. For example, there’s the protein-folding game, Foldit, which has attracted over 50,000 players (citizen scientists) who have answered questions and posed possibilities that had not occurred to scientists. Whether this trend will continue or disappear is a question to be answered in the future. What I find disconcerting is how thoroughly this and other shifting practices (scientists publishing research in blogs) and thorny issues such as the highly problematic patent system were ignored. Individual panel members or the report writers themselves may have wanted to include some mention, but we’ll never know because the report is presented as a singular, united authority.

In any event, Bravo! to the expert panel and their support team as this can’t have been an easy job.

If you have anything to say about this commentary or the report, please do comment; I would love to hear more opinions.

*’arts’ added Jan. 19, 2016.

Council of Canadian Academies tries to answer question: What is the state of Canada’s science culture?

The Council of Canadian Academies is an organization designed to answer questions about science in Canada. From the Council’s About Us webpage,

The Council is an independent, not-for-profit corporation that supports science-based, expert assessments (studies) to inform public policy development in Canada. The Council began operation in 2005 and consists of a Board of Governors, a Scientific Advisory Committee, and Secretariat. The Council draws upon the intellectual capital that lies within its three Member Academies: the Royal Society of Canada (RSC); the Canadian Academy of Engineering; and the Canadian Academy of Health Sciences.

Our mission is to contribute to the shaping of evidence-based public policy that is in the public interest. This is achieved by appointing independent, multidisciplinary panels of expert volunteers. The Council’s work encompasses a broad definition of science, incorporating the natural, social and health sciences as well as engineering and the humanities.

Expert Panels directly address the question and sub-questions referred to them. Panel assessments may also identify: emerging issues, gaps in knowledge, Canadian strengths, and international trends and practices. Upon completion, assessments provide government decision-makers, academia and stakeholders with high-quality information required to develop informed and innovative public policy.

Several months ago, Gary Goodyear, Canada’s Minister of State (Science and Technology), requested, on behalf of the Canada Science and Technology Museums Corporation (CSTMC), Natural Resources Canada, and Industry Canada, an assessment of science culture in Canada. From the State of Canada’s Science Culture webpage on the Council of Canadian Academies website,

Over the past 30 years, public interest and debate has been steadily growing in Canada and abroad over the need to foster a science culture as part of the national science and technology agenda. In this period, significant government and private investments have contributed to the development of hundreds of individual science culture programs and institutions.

Now more than ever the volume of programs and data support the need for a national examination of issues, such as the performance indicators that best reflect the vitality of Canada’s science culture, and a need to understand where Canada ranks internationally. The expert panel will be asked to consider these and other questions such as what factors influence an interest in science among youth; what are the key components of the informal system that supports science culture; and what strengths and weaknesses exist in the Canadian system.

Assessments of science culture can focus either on science in the general culture, or the culture among scientists. This assessment will focus principally on the former, with additional interest in understanding the underlying connections among entrepreneurship, innovation and science. …

The full assessment process includes a rigorous peer review exercise to ensure the report is objective, balanced and evidence-based. Following the review and approval by the Council’s Board of Governors, the complete report will be made available on the Council’s website in both official languages. …

Question

What is the state of Canada’s science culture?

Sub-questions:

  1. What is the state of knowledge regarding the impacts of having a strong science culture?
  2. What are the indicators of a strong science culture? How does Canada compare with other countries against these indicators? What is the relationship between output measures and major outcome measures?
  3. What factors (e.g., cultural, economic, age, gender) influence interest in science, particularly among youth?
  4. What are the critical components of the informal system that supports science culture (roles of players, activities, tools and programs run by science museums, science centres, academic and not-for-profit organizations and the private sector)? What strengths and weaknesses exist in Canada’s system?
  5. What are the effective practices that support science culture in Canada and in key competitor countries?

Hopefully, the expert panel will have a definition of some kind for “science culture.”

After waiting what seems to be an unusually long period, the Council announced the chair for the “science culture” expert panel (from the CCA Dec. 19, 2012 news release),

Arthur Carty to Serve as Expert Panel Chair on the State of Canada’s Science Culture

The Council is pleased to announce the appointment of Dr. Arthur Carty, O.C., as Chair of the Expert Panel on the State of Canada’s Science Culture. In 2011, the Minister of State (Science and Technology) on behalf of the Canada Science and Technology Museums Corporation (CSTMC), Natural Resources Canada, and Industry Canada requested the Council conduct an in-depth, evidence-based assessment on the state of Canada’s science culture.

As Chair of the Council’s Expert Panel, Dr. Carty will work with a multidisciplinary group of experts, to be appointed by the Council, to address the following question: What is the state of Canada’s science culture?

Dr. Carty is currently the Executive Director of the Waterloo Institute for Nanotechnology at the University of Waterloo. Dr. Carty also serves as Special Advisor to the President on international science and technology collaboration, and as Research Professor in the Department of Chemistry. Prior to this, Dr. Carty served as Canada’s first National Science Advisor to the Prime Minister and to the Government of Canada from 2004-2007 and as President of the National Research Council Canada from 1994-2004.

You can find out more on Carty’s biography webpage on the CCA website,

Arthur Carty is the Executive Director of the Waterloo Institute for Nanotechnology at the University of Waterloo, Special Advisor to the President on international science and technology collaboration, and Research Professor in the Department of Chemistry

From 2004-2008, Dr. Carty served as Canada’s first National Science Advisor to the Prime Minister and to the Government of Canada. Prior to this appointment, he was President of the National Research Council Canada for 10 years. Before this, he spent 2 years at Memorial University and then 27 years at the University of Waterloo, where he was successively Professor of Chemistry, Director of the Guelph-Waterloo Centre for Graduate Work in Chemistry and Biochemistry, Chair of the Department of Chemistry, and Dean of Research.

….

Carty’s profile page on the Waterloo Institute for Nanotechnology (WIN) website offers the same information but in more detail.

It’s difficult to divine much from the biographical information about Carty, as it is written to impress the reader with his international and national involvements in the field of science advice and collaboration. Carty may have extensive experience with multidisciplinary teams and an avid interest in a science culture that includes informal science education and the arts and humanities; unfortunately, that’s not visible in either the CCA or WIN website biography.

Hopefully, Carty and the CCA will assemble a diverse expert panel. (Warning: blatant self-promotion ahead.) If they are looking for a person of diverse personal and professional interests

  • who has an MA in Creative Writing (nonfiction and fiction) and New Media from De Montfort University in Leicester, UK and
  • a BA (Communication – Honors) from Simon Fraser University in Burnaby, Canada and
  • who has built up one of the largest and longest-running independent science blogs in the country thereby contributing to science culture in Canada,
  • neatly combining the social sciences, the humanities, and an informed perspective on science and science culture in Canada in one person,

they may want to contact me at nano@frogheart.ca. I have more details in the CV and can supply references.

Informing research choices—the latest report from the Council of Canadian Academies (part 2: more details and my comments)

In general, I found this to be a thoughtful report, the Council of Canadian Academies’ (CCA) Informing Research Choices: Indicators and Judgment, and have at most a few criticisms. Starting with this bit about the Discovery Grants Program (DGP), funded through Canada’s Natural Sciences and Engineering Research Council (NSERC), and ‘expert judgment’,

The focus of NSERC on science assessment practices is directed partly by a long-standing concern that the allocation of DGP funding across fields is overly dependent on historical funding patterns, and that future allocations should incorporate other factors such as research quality, changes in the scientific landscape, and the emergence of research fields.

This review of international science assessment reveals a diverse landscape of assessment methods and practices. Two of the lessons emerging from the review are especially relevant to the Panel’s charge. First, the national research context is significant in defining a given science assessment, and no single set of indicators for assessment will be ideal in all circumstances, though evidence gathered from examining experiences of other countries may help inform the development of a science assessment strategy for Canada. Second, there is a global trend towards national science assessment models that incorporate both quantitative indicators and expert judgment. [emphases mine] (p. 31 print version, p. 51 PDF)

Ok, how do we define ‘expert’? Especially in light of the fact that the report discusses both ‘peer’ and ‘expert’ review (p. 50 print version, p. 70 PDF). Here’s a definition (or non-definition) of ‘expert review’ from the report,

Following the definition provided by the OECD (2008), the Panel uses the term “expert review” to refer to deliberative evaluation processes based on expert judgment used in the context of evaluations of broader research fields or units. (p. 51 print version, p. 71 PDF)

Tautology, anyone?

The report also describes more quantitative measures, such as bibliometrics (how many times and where your scientists were published), amongst others. From the report,

The simplest bibliometric indicators are those based on publication counts. In principle, such counts can be generated for many different types of publications (e.g., books, book chapters). In practice, due to the limitations of coverage in indexed bibliographic databases, existing indicators are most often based on counts of peer-reviewed articles in scientific journals. Basic publication indicators typically take the form of absolute counts of the number of journal articles for a particular unit (e.g., individual, research group, institution, or field) by year or for a period of years. Such indicators are typically framed as a measure of research output.

Additional indicators based on publication counts can be derived from shares of publication counts (e.g., a research group’s share of total publications in an institution, a field’s share of total publications in a country). These share-based indicators generally are used to capture information about the relative importance of research output originating from a particular unit or field. More advanced indicators based on weighted publication counts can also be created when publication output is typically weighted by some measure of the quality of the research outlet. For example, journal impact factors (a measure of the relative citedness of a journal) may be used to give a higher weight to publications in more prestigious or competitive journals. [emphasis mine] Unlike straight publication counts, these metrics also depend on some other measure of quality, either based on citation or on some other assessment of the relative quality of different journals. (pp. 55-56 print version, pp. 75-76 PDF)
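A toy example (mine, with invented impact factors) of the weighted counts described in that passage shows how the two measures can diverge: a unit’s straight publication count ignores venue, while the weighted count rewards papers in higher-impact journals.

```python
def weighted_publication_count(publications):
    """Sum paper counts weighted by a journal-quality measure,
    e.g., the journal impact factor mentioned above."""
    return sum(impact * count for impact, count in publications)

# Hypothetical unit: 5 papers in a high-impact journal (IF 8.0) and
# 10 papers in a modest one (IF 1.5).
unit = [(8.0, 5), (1.5, 10)]
straight = sum(count for _, count in unit)    # straight count: 15
weighted = weighted_publication_count(unit)   # 8.0*5 + 1.5*10 = 55.0
print(straight, weighted)
```

Which weighting scheme to use is, of course, itself a judgment about what ‘quality’ means, which is the report’s own caveat.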

More bibliometrics are discussed, along with some of their shortcomings, but, interestingly, there is no mention of open access publishing and its possible impacts on ‘prestigious journals’ and on the bibliometrics themselves.

Getting back to my question in part 1, “I was looking for evidence that the panel would have specific recommendations for avoiding an over-reliance on metrics (which I see taking place and accelerating in many areas, not just science funding)”: interestingly, the report makes references to qualitative approaches without ever defining them, although the term ‘quantitative indicators’ is described in the glossary,

Quantitative indicators: any indicators constructed from quantitative data (e.g., counts of publications, citations, students, grants, research funding).

The qualitative approaches mentioned in the report include ‘expert’ review, peer review, and case studies. Since I don’t understand what they mean by ‘expert’, I’m not sure I understand ‘peer’. As for the case studies, here’s how this approach is described (Note: I have removed a footnote),

The case study is perhaps the most common example of other types of qualitative methods used in research assessment. Case studies are often used to explore the wider socio-economic impacts of research. For example, the U.K. Research Excellence Framework (REF) …

Project Retrosight is a Canadian example of the case study approach used in research assessment. Undertaken as part of a multinational study to evaluate the impact of basic biomedical and clinical cardiovascular and stroke research projects, Project Retrosight measured payback of projects using a sampling framework. [emphasis mine]  Despite several limitations to the analysis (e.g., the number of case studies limiting the sample pool from which to draw observations, potential inconsistencies in reporting and comparability), the case study approach provided an effective platform for evaluating both the how and the why of evidence to demonstrate impact. The key findings of the study revealed a broad and diverse range of impacts, with the majority of broader impacts, socio-economic and other, coming from a minority of projects (Wooding et al., 2011).  (p. 53 print version, p. 73 PDF)

My understanding of the word ‘payback’ is that it’s related to the term ‘return on investment’, and that measure requires quantitative data. If so, how was Project Retrosight qualitative? The description in the report doesn’t offer that information.

The conclusion from the final paragraph of the report doesn’t offer any answers,

… quantitative indicators are far from obviating the need for human expertise and judgment in the research funding allocation decision process. Indicators should be used to inform rather than replace expert judgment. Given the inherent uncertainty and complexity of science funding decisions, these choices are best left in the hands of well-informed experts with a deep and nuanced understanding of the research funding contexts in question, and the scientific issues, problems, questions, and opportunities at stake. (p. 104 print version, p. 124 PDF)

I very much appreciate the approach the ‘expert’ panel took and the thoughtful nature of the report, but I feel it falls short. The panel offers an exhortation but no recommendations for ensuring that science funding decisions don’t become entirely reliant on metrics; they never do describe what they mean by ‘expert’ or explain the difference between qualitative and quantitative; and there’s no mention of trends/disruptive developments such as open access publishing, which could have a powerful impact on the materials ‘experts’ use when making their research allocation decisions.

The full report, executive summary, abridged report, appendices,  news release and media backgrounder are available here.

ETA July 9, 2012 12:40 PST: There’s an interview (audio or text, depending on your preferences) with Rita Colwell, the chair of the report’s expert panel, at the Canadian Science Policy Centre website here.

Informing research choices—the latest report from the Council of Canadian Academies (part 1: report conclusions and context)

The July 5, 2012 news release from the Council of Canadian Academies (CCA) notes this about the Informing Research Choices: Indicators and Judgment report,

An international expert panel has assessed that decisions regarding science funding and performance can’t be determined by metrics alone. A combination of performance indicators and expert judgment are the best formula for determining how to allocate science funding.

The Natural Sciences and Engineering Research Council of Canada (NSERC) spends approximately one billion dollars a year on scientific research. Over one-third of that goes directly to support discovery research through its flagship Discovery Grants Program (DGP). However, concerns exist that funding decisions are made based on historical funding patterns and that this is not the best way to determine future funding decisions.

As NSERC strives to be at the leading edge for research funding practices, it asked the Council of Canadian Academies to assemble an expert panel that would look at global practices that inform funding allocation, as well as to assemble a library of indicators that can be used when assessing funding decisions. The Council’s expert panel conducted an in-depth assessment and came to a number of evidence-based conclusions.

The panel Chair, Dr. Rita Colwell commented, “the most significant finding of this panel is that quantitative indicators are best interpreted by experts with a deep and nuanced understanding of the research funding contexts in question, and the scientific issues, problems, questions and opportunities at stake.” She also added, “Discovery research in the natural sciences and engineering is a key driver in the creation of many public goods, contributing to economic strength, social stability, and national security. It is therefore important that countries such as Canada have a complete understanding of how best to determine allocations of its science funding.”

… Other panel findings discussed within the report include: a determination that many science indicators and assessment approaches are sufficiently robust; international best practices offer limited insight into science indicator use and assessment strategies; and mapping research funding allocation directly to quantitative indicators is far too simplistic, and is not a realistic strategy for Canada. The Panel also outlines four key principles for the use of indicators that can guide research funders and decision-makers when considering future funding decisions.

The full report, executive summary, abridged report, appendices,  news release, and media backgrounder are available here.

I have taken a look at the full report and, since national funding schemes for the Natural Sciences and Engineering Research Council (and other science funding agencies of this ilk) are not my area of expertise, the best I can offer is an overview from an interested member of the public.

The report provides a very nice introduction to the issues the expert panel was addressing,

The problem of determining what areas of research to fund permeates science policy. Nations now invest substantial sums in supporting discovery research in natural sciences and engineering (NSE). They do so for many reasons. Discovery research helps to generate new technologies; to foster innovation and economic competitiveness; to improve quality of life; and to achieve other widely held social or policy objectives such as improved public health and health care, protection of the environment, and promotion of national security. The body of evidence on the benefits that accrue from these investments is clear: in the long run, public investments in discovery-oriented research yield real and tangible benefits to society across many domains.

These expenditures, however, are accompanied by an obligation to allocate public resources prudently. In times of increasing fiscal pressures and spending accountability, public funders of research often struggle to justify their funding decisions — both to the scientific community and the wider public. How should research funding agencies allocate their budgets across different areas of research? And, once allocations are made, how can the performance of those investments be monitored or assessed over time? These have always been the core questions of science policy, and they remain so today.

Such questions are notoriously difficult to answer; however, they are not intractable. An emerging “science of science policy” and the growing field of scientometrics (the study of how to measure, monitor, and assess scientific research) provide quantitative and qualitative tools to support research funding decisions. Although a great deal of controversy remains about what and how to measure, indicator-based assessments of scientific work are increasingly common. In many cases these assessments indirectly, if not directly, inform research funding decisions.

In some respects, the primary challenge in science assessment today is caused more by an overabundance of indicators than by a lack of them. The plethora of available indicators may make it difficult for policy-makers or research funders to determine which metrics are most appropriate and informative in specific contexts. (p. 2 print version, p. 22 PDF)

Assessment systems tied to the allocation of public funds can be expected to be contentious. Since research funding decisions directly affect the income and careers of researchers, assessment systems linked to those decisions will invariably have an impact on researcher behaviour. Past experiences with science assessment initiatives have sometimes yielded unintended, and undesirable, impacts. In addition, poorly constructed or misused indicators have created scepticism among many scientists and researchers about the value and utility of these measures. As a result, the issues surrounding national science assessment initiatives have increasingly become contentious. In the United Kingdom and Australia, debates about national research assessment have been highly publicized in recent years. While such attention is testimony to the importance of these assessments, the occasionally strident character of the public debate about science metrics and evaluation can impede the development and adoption of good public policy. (p. 3 print version, p. 23 PDF)

Based on this introduction and the acknowledgement that there are ‘too many metrics’, I was looking for evidence that the panel would have specific recommendations for avoiding an over-reliance on metrics (which I see taking place and accelerating in many areas, not just science funding).

In the next section, however, the report focuses on how the expert panel researched this area. They relied on a literature survey (which I’m not going to dwell on) and case studies of the 10 countries they reviewed in depth. Here’s more about the case studies,

The Panel was charged with determining what the approaches used by funding agencies around the world had to offer about the use of science indicators and related best practices in the context of research in the NSE. As a result, the Panel developed detailed case studies on 10 selected countries. The purpose of these case studies was two-fold: (i) to ensure that the Panel had a fully developed, up-to-date understanding of indicators and practices currently used around the world; and (ii) to identify useful lessons for Canada from the experiences of research funding agencies in other countries. Findings and instructive examples drawn from these case studies are highlighted and discussed throughout this report. Summaries of the 10 case studies are presented in Appendix A.

The 10 countries selected for the case studies satisfied one or more of the following four criteria established by the Panel:

Knowledge-powerful countries: countries that have demonstrated sustained leadership and commitment at the national level to fostering science and technology and/or supporting research and development in the NSE.

Leaders in science assessment and evaluation: countries that have notable or distinctive experience at the national level with use of science indicators or administration of national science assessment initiatives related to research funding allocation.

Emerging science and technology leaders: countries considered to be emerging “knowledge-powerful” countries and in the process of rapidly expanding support for science and technology, or playing an increasingly important role in the global context of research in the NSE.

Relevance to Canada: countries known to have special relevance to Canada and NSERC because of the characteristics of their systems of government or the nature of their public research funding institutions and mechanisms. (pp. 8-9 print version, pp. 28-29 PDF)

The 10 countries they studied closely are:

  • Australia
  • China
  • Finland
  • Germany
  • the Netherlands
  • Norway
  • Singapore
  • South Korea
  • United Kingdom (that’s more like four countries: Scotland, England, Wales, and Northern Ireland)
  • United States

The panel also examined other countries’ funding schemes, but not with the same intensity. I didn’t spend a lot of time on the case studies as they were either very general or far too detailed for my interests. Of course, I’m not the target audience.

The report offers a glossary and I highly recommend reading it in full because the use of language in this report is not necessarily standard English. Here’s an excerpt,

The language used by policy-makers sometimes differs from that used by scientists. [emphasis mine] Even within the literature on science assessment, there can be inconsistency in the use of terms. For purposes of this report, the Panel employed the following definitions:*

Discovery research: inquiry-driven scientific research. Discovery research is experimental or theoretical work undertaken primarily to acquire new knowledge of the underlying foundations of phenomena and observable facts, without application or intended use (based on the OECD definition of “basic research” in OECD, 2002).

Assessment: a general term denoting the act of measuring performance of a field of research in the natural sciences and engineering relative to appropriate international or global standards. Assessments may or may not be connected to funding allocation, and may or may not be undertaken in the context of the evaluation of programs or policies.

Scientometrics: the science of analyzing and measuring science, including all quantitative aspects and models related to the production and dissemination of scientific and technological knowledge (De Bellis, 2009).

Bibliometrics: the quantitative indicators, data, and analytical techniques associated with the study of patterns in publications. In the context of this report, bibliometrics refers to those indicators and techniques based on data drawn from publications (De Bellis, 2009). (p. 10 print version, p. 30 PDF)
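
Since “bibliometric indicator” can sound abstract, here is a minimal sketch (my own, not the panel’s) of one of the most familiar indicators, the h-index, computed in Python from a hypothetical list of per-paper citation counts:

    # A minimal sketch (not from the report): the h-index, a common
    # bibliometric indicator. A researcher has index h if h of their
    # papers have at least h citations each.
    def h_index(citations: list[int]) -> int:
        """Return the h-index for a list of per-paper citation counts."""
        h = 0
        for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
            if cites >= rank:
                h = rank
            else:
                break
        return h

    # Hypothetical citation counts for eight papers; prints 3.
    print(h_index([25, 8, 5, 3, 3, 2, 1, 0]))

Even this toy example hints at why the report warns about indicator choice: a single number compresses away everything about field, career stage, and publication culture.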

Next up: my comments and whether or not I found specific recommendations on how to avoid over-reliance on metrics.

Nanotechnology and the Council of Canadian Academies assessment report

I started discussing the Council of Canadian Academies and its mid-term assessment report (Review of the Council of Canadian Academies; Report from the External Evaluation Panel 2010) yesterday and will finish today with my thoughts on the assessment of the Council’s nanotechnology report and its impact.

Titled Small is Different: A Science Perspective of the Regulatory Challenges on the Nanoscale (2008), the Council’s report is one of the best I’ve read. I highly recommend it to anyone who wants an introduction to some of the issues (and I was much struck by its omission from the list of suggested nanotechnology readings that Peter Julian [Canadian MP] offered in part 2 of his interview). Interestingly, the Council’s nanotechnology report is Case Study No. 3 in the mid-term expert panel assessment report’s Annex 6 (p. 33 in the print version and p. 37 in PDF).

Many respondents were concerned that Health Canada has made no response to, or use of, this report. However, Health Canada respondents were highly enthusiastic about the assessment and the ways in which it is being used to inform the department’s many – albeit still entirely internal – regulatory development activities: “We’ve all read it and used it. The fact that we haven’t responded to the outside is actually a reflection of how busy we’ve been responding to the file on the inside!” [emphases mine]

The report has been particularly valuable in providing a framework to bring together Health Canada’s five – very different – regulatory regimes to identify a common approach and priorities. The sponsor believes the report’s findings have been well-incorporated into its draft working definition of nanomaterials, [emphasis mine] its work with Canadian and international standards agencies, its development of a regulatory framework to address shorter- and longer-term needs, and its creation of a research agenda to aid the development of the science needed to underpin the regulation of nanomaterials in Canada.

I think the next time somebody confronts me as to why I haven’t responded externally to some notice (e.g., paid my strata fees), I’ll assure them that I’ve been ‘responding on the inside’. (Sometimes I cannot resist the low-hanging fruit and I just have to take a bite.)

As for the second paragraph where they claim that Health Canada has incorporated suggestions from the report for its nanomaterials definition, that’s all well and good but the thinking is changing and Health Canada doesn’t seem to be responding (or even to be aware of the fact). Take a look at the proposed definition in the current draft bill before the US Senate, which in addition to size mentions shape, reactivity, and more, as compared to Health Canada’s 1 to 100 nm size definition. (See details in this posting from earlier in the week where I compare the proposed US and Canadian definitions.)
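
To see why the difference matters, here is a toy sketch (my own construction, not either regulator’s actual test) contrasting a size-only definition with a multi-factor one; the factor names and thresholds are assumptions for illustration:

    # A toy sketch (mine, not either regulator's actual test) contrasting
    # a size-only nanomaterial definition with a multi-factor one.
    # The factor names here are assumptions for illustration.
    def nanomaterial_size_only(size_nm: float) -> bool:
        # In the spirit of a 1 to 100 nm size-based definition.
        return 1 <= size_nm <= 100

    def nanomaterial_multifactor(size_nm: float, novel_shape: bool,
                                 novel_reactivity: bool) -> bool:
        # Size or nanoscale-specific properties (shape, reactivity)
        # can each trigger the definition.
        return (1 <= size_nm <= 100) or novel_shape or novel_reactivity

    # A hypothetical 150 nm particle with unusual reactivity is caught
    # by the multi-factor test but slips past the size-only one.
    print(nanomaterial_size_only(150.0))                 # False
    print(nanomaterial_multifactor(150.0, False, True))  # True

A material can sit outside the size band yet still behave in nanoscale-specific ways, which is exactly the gap the broader definition tries to close.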

Additionally, I think they need to find quantitative ways to measure impact to complement this qualitative approach, which itself needs to be revised. Quantitative measures could include the number of reports disseminated in print and online, social networking efforts (if any), the number of times reports are mentioned in the media, etc. They may also want to limit the number of case studies in future reports so they can provide more depth. The comment about the ‘internal’ impact could have been described at more length. How have the five different Health Canada regulatory regimes come together? Has something substantive occurred?
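
The kind of tally I have in mind would be simple to keep; a minimal sketch (my suggestion, not the Council’s practice) with made-up categories and numbers:

    # A minimal sketch of the quantitative tally I'm suggesting
    # (my idea, not the Council's practice); all numbers are made up.
    from collections import Counter

    impact = Counter({
        "print copies distributed": 500,
        "PDF downloads": 2300,
        "media mentions": 14,
        "blog and social media mentions": 41,
    })

    for measure, count in impact.most_common():
        print(f"{measure}: {count}")
    print("total recorded touchpoints:", sum(impact.values()))

Nothing fancy, but tracked year over year it would give the Council something firmer than anecdote when the question of impact comes up.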

Finally, it’s hard to know if Julian’s failure to mention the Council’s report in his list of nanotechnology readings is a simple failure of memory or a reflection of the Council’s “invisibility”. I’m inclined to believe that it’s the latter.

Science advice and technology assessment in Canada?

Thank you to the folks at The Black Hole blog for their very incisive post about the recent (released April 28, 2010) mid-term assessment report of the Council of Canadian Academies (CCA). Here’s a brief excerpt from The Black Hole posting,

Created in 2005, the Council of Canadian Academies is a not-for-profit corporation that supports science-based, expert assessments to inform public policy development in Canada. It was created with $30 million seed funding from Government which expires in 2015 and just underwent its midterm assessment last week. The report was generally positive and indeed to the casual reader it would appear the CCA has a lot to be proud of and not much to worry about. Digging a little deeper though, one gets the feeling that the CCA is facing a critical juncture in its existence and faces the very real possibility of becoming a heck of a lot less effective in 2015.

The blogger, Dave, goes on to explain that the concerns arise from the CCA’s “lack of visibility” and its “dependence on government sponsors” (I assume this means funding). Given that the CCA is the only agency that provides comprehensive science advice for Canada, this could mean the loss of a very singular resource in the foreseeable future.

In looking at the report very briefly I too noticed a few things that rang warning bells. From the report (p. 9 in print version, p. 13 in PDF),

Recognizing that a great deal of Canada’s intellectual capital lies within the country’s three Academies – the RSC: The Academies of Arts, Humanities and Sciences of Canada, the Canadian Academy of Engineering, and the Canadian Academy of Health Sciences – these organizations were designated the founding members of the Council. The relationship between the Council and its three Member Academies, however, has not [emphases mine] been as productive or cooperative as it could be.

As far as I’m concerned there’s no chance for survival if the CCA can’t develop a good working relationship with its academies. Further, this working relationship will determine the success of the CCA’s efforts to address its “invisibility.” In the report there are three recommendations for communication efforts to make the CCA more visible (p. 13 in print version, p. 16  in PDF),

RECOMMENDATIONS

14. The Board should lead the development of a new communications strategy that builds on the Council’s considerable assets: its reputation, quality product, enthusiastic panellists and scientific advisors, and its key partners, the Academies.

15. The Council should empower and support this broadened scope of voices to engage with a wide range of key stakeholders who could be identifying topics and/or making use of their findings.

16. The Council should continue to seek opportunities to work with the Academies to contribute to international science advisory bodies.

All of these recommendations rely on support from the member academies.

On another note, I find the complete and utter lack of interest in communication efforts aimed at the general public fascinating (I’ve skimmed through the report and have yet to spot anything that concretely addresses it). They are unrelentingly focused on experts and policy makers. I understand that public outreach is not part of the official mandate, but the CCA does release reports to the media and arguably they would like their reports to have some impact on the larger society. They might even be interested in public support when the next federal budget affecting their activities is due, or if they try to expand their revenue streams beyond government funding. At the very least, they should acknowledge the presence of a much larger world around them and their stakeholders (how do they define stakeholders, anyway? aren’t Canadian citizens stakeholders?).

This indifference to the Canadian citizenry contrasts mightily with the approach Richard Sclove (mentioned in this posting earlier today) is trying to implement with regard to technology assessment in the US. In fact, the indifference contrasts with material I see coming from the US, the UK, and the European Community.