Tag Archives: Nature

How much more nanomaterial safety discussion do we need?

The report (Impact of Engineered Nanomaterials on Health: Considerations for Benefit-Risk Assessment) from the Joint Research Centre (JRC) and the European Academies Science Advisory Council (EASAC) was issued in Sept. 2011, and the authors are still trying to get people to read it. The Aug. 16, 2012 online issue of Nature features correspondence from the authors citing the report,

Our analysis indicates that formulation of a coherent public policy will depend on scientists closing knowledge gaps in safety research, on gathering more data to connect science and regulation, and on training graduate students in nanotechnology research. Policies will need to be flexible to accommodate fresh discoveries in this rapidly advancing technology.

Getting notice for your work can be hugely difficult in an information-rich environment, so it’s not unusual to see efforts continuing a year or more after publication. Meanwhile, a question persists: how many reports of this type do we need?

Free speech—update on defamation & science in the UK

I’m glad to have found an update on the UK science libel cases that I have mentioned here (El Naschie in my Nov. 18, 2011 posting and Simon Singh in my Nov. 12, 2010 posting).

Niri Shanmuganathan and Timothy Pinto have co-posted the update and an analysis of the current defamation bill being considered before the UK Parliament. First, the updates, from the July 9, 2012 co-posting on the Guardian Science blogs,

On Friday, the eminent scientific journal Nature successfully defended an article it had published in 2008. The article had criticised Professor El Naschie for, among other things, publishing an excessive number of articles written by him in the very journal in which he was the editor and not submitting them through an adequate independent peer-review process. It took Nature more than three years to prove its article was accurate, included matters of honest opinion, and was the result of responsible journalism on a matter of public interest.

The science writer Simon Singh was sued by the British Chiropractic Association [BCA] for criticising chiropractic therapy in an article he wrote in the Guardian in 2008. He appears to have been faced with a choice of apologising or instructing (ie paying) libel lawyers to defend him. Singh chose the courageous path and took the financial risk. Fortunately for him, in 2010 the court of appeal (reversing the judge) found that Singh’s comments were statements of opinion, rather than fact. The BCA then dropped its case against Singh.

Shanmuganathan and Pinto note that the libel laws currently in place are adequate once applied in court; the problem lies with the ease with which an aggrieved party can file suit. When the burden of proof lies on the defendant to show that their comments are in the public interest (which is difficult), fighting a court case is very, very expensive and time-consuming. One of the consequences is that free speech is likely to become constrained as individuals and institutions without the resources avoid making comments that might offend. Here’s what Pinto and Shanmuganathan have to say about the current bill before Parliament,

This problem has not gone unnoticed by politicians, and libel law is currently undergoing reform to try to swing the balance more in favour of free speech. The defamation bill recently had its first reading in the House of Commons. It has two specific provisions to help protect freedom of expression in the field of science. First, independently peer-reviewed articles in a scientific or academic journal, and reports of such articles, would be privileged. Second, fair and accurate reports of scientific or academic conferences would also be privileged. …

However, these worthy libel law reformers are missing the point when it comes to science. Scientists do not usually get sued for writing peer-reviewed articles. Similarly, scientific publishers do not usually get sued for reporting on what happened at a scientific conference.

I recommend reading their comments in full not only for the valuable insight but because the writers have a special relationship to one of the cases. Niri Shanmuganathan and Timothy Pinto are media lawyers at international law firm Taylor Wessing which represented Nature in the libel case brought by Professor El Naschie.

Opening it all up (open software, Nature, and Naked Science)

I’m coming back to the ‘open access’ well this week as there have been a few new developments since my massive May 28, 2012 posting on the topic.

A June 5, 2012 posting by Glyn Moody at the Techdirt website brought yet another aspect of ‘open access’ to my attention,

Computers need software, and some of that software will be specially written or adapted from existing code to meet the particular needs of the scientists’ work. This makes computer software a vital component of the scientific process. It also means that being able to check that code for errors is as important as being able to check the rest of the experiment’s methodology. And yet very rarely can other scientists do that, because the code employed is not made available.

That’s right, there’s open access scientific software.

Meanwhile, over at the Guardian newspaper website, Philip Campbell, Nature journal’s editor-in-chief, notes that open access to research is inevitable in a June 8, 2012 article by Alok Jha,

Open access to scientific research articles will “happen in the long run”, according to the editor-in-chief of Nature, one of the world’s premier scientific journals.

Philip Campbell said that the experience for readers and researchers of having research freely available is “very compelling”. But other academic publishers said that any large-scale transition to making research freely available had to take into account the value and investments they added to the scientific process.

“My personal belief is that that’s what’s going to happen in the long run,” said Campbell. However, he added that the case for open access was stronger for some disciplines, such as climate research, than others.

Campbell was speaking at a briefing hosted by the Science Media Centre. Interestingly, ScienceOnline Vancouver’s upcoming meeting about open access (June 12, 2012; mingling starts at 6:30 pm, panel discussion 7-9 pm PDT), titled Naked Science; Excuse me: your science is showing, features a speaker from Canada’s Science Media Centre (from the event page),

  1. Heather Piwowar is a postdoc with Duke University and the Dept of Zoology at UBC.  She’s a researcher on the NSF-funded DataONE and Dryad projects, studying data.  Specifically, how, when, and why do scientists publicly archive the datasets they collect?  When do they reuse the data of others?  What related policies and tools would help facilitate more efficient and effective use of data resources?  Heather is also a co-founder of total-impact, a web application that reveals traditional and non-traditional impact metrics of scholarly articles, datasets, software, slides, and blog posts.
  2. Heather Morrison is a Vancouver-based, well-known international open access advocate and practitioner of open scholarship, through her blogs The Imaginary Journal of Poetic Economics http://poeticeconomics.blogspot.com and her dissertation-blog http://pages.cmns.sfu.ca/heather-morrison/
  3. Lesley Evans Ogden is a freelance science journalist and the Vancouver media officer for the Science Media Centre of Canada. In the capacity of freelance journalist, she is a contributing science writer at Natural History magazine, and has written for a variety of publications including YES Mag, Scientific American (online), The Guardian, Canadian Running, and Bioscience. She has a PhD in wildlife ecology, and spent more than a decade slogging through mud and climbing mountains to study the breeding and winter ecology of migratory birds. She is also an alumni of the Science Communications program at the Banff Centre. (She will be speaking in the capacity of freelance journalist).
  4. Joy Kirchner is the Scholarly Communications Coordinator at University of British Columbia where she heads the University’s developing Copyright office in addition to the Scholarly Communications office based in the Library. Her role involves coordinating the University’s copyright education services, identifying recommended and sustainable service models to support scholarly communication activities on the campus and coordinating formalized discussion and education of these issues with faculty, students, research and publishing constituencies on the UBC campus. Joy has also been instrumental in working with faculty to host their open access journals through the Library’s open access journal hosting program; she was involved in the implementation and content recruitment of the Library’s open access  institutional repository, and she was instrumental in establishing the Provost’s Scholarly Communications Steering Committee and associated working groups where she sits as a key member of the Committee looking into an open access position at UBC amongst other things..  Joy is also chair of UBC’s Copyright Advisory Committee and working groups. She is also a faculty member with the Association of Research Libraries (ARL) / Association of College and Research Libraries (ACRL) Institute for Scholarly Communication, she assists with the coordination and program development of ACRL’s much lauded Scholarly Communications Road Show program, she is a Visiting Program Officer with ACRL in support of their scholarly communications programs, and she is a Fellow with ARL’s Research Library Leadership Fellows executive program (RLLF). Previous positions includes Librarian, for Collections, Licensing & Digital Scholarship (UBC), Electronic Resources Coordinator (Columbia Univ.), Medical & Allied Health Librarian and Science & Engineering Librarian. She holds a BA and an MLIS from the University of British Columbia.

I’m starting to get the impression that there is a concerted communications effort taking place. Between this listing and the one in my May 28, 2012 posting, there are just too many articles and events occurring for this to be pure chance.

Scientific spat and libel case in UK has Canadian connection

Neil Turok, Director of the Perimeter Institute for Theoretical Physics located in Waterloo, Canada, has been described as being insufficiently qualified to assess a fellow scientist’s work. Alok Jha, science correspondent for the UK’s Guardian newspaper, writes about the situation, which includes a libel suit against Nature magazine, in his Nov. 18, 2011 article,

A scientist who is suing one of the world’s most prominent scientific journals for libel compared himself to Albert Einstein in the high court on Friday [Nov. 18, 2011] as part of his evidence against the journal. Professor Mohamed El Naschie, also claimed that an eminent physicist brought in by the journal as an expert witness to analyse the value of his work was not sufficiently qualified to do so.

El Naschie is suing Nature as a result of a news article published in 2008, after the scientist’s retirement as editor-in-chief of the journal Chaos, Solitons and Fractals. The article alleged that El Naschie had self-published several research papers, some of which did not seem to have been peer reviewed to an expected standard and also said that El Naschie claimed affiliations and honorary professorships with international institutions that could not be confirmed by Nature. El Naschie claims the allegations in the article were false and had damaged his reputation.

On Friday, Nature called Professor Neil Turok, a cosmologist and director of the Perimeter Institute in Canada, as an expert witness to assess some of the work published by El Naschie.

In his evidence, Turok said he found it difficult to understand the logic in some of El Naschie’s papers. The clear presentation of scientific ideas was an important step in getting an idea accepted, he said. “There are two questions – one is whether the work is clearly presented and readers would be able to understand it. It would be difficult for a trained theoretical physicist to understand [some of El Naschie’s papers]. …  The second question is about the correctness of the theory and that will be decided by whether it agrees with experiments. Most theories in theoretical physics are speculative – we form a logical set of rules and deductions and we try, ultimately, to test the deductions in experiments.

There’s more at stake here than whether Turok is qualified or El Naschie’s work is up to the standards in his field; this is also about libel and the libel laws in England. There have been some unintended consequences of the current set of laws. Here’s an excerpt from the Wikipedia entry on libel tourism,

Libel tourism is a term first coined by Geoffrey Robertson to describe forum shopping for libel suits. It particularly refers to the practice of pursuing a case in England and Wales, in preference to other jurisdictions, such as the United States, which provide more extensive defences for those accused of making derogatory statements. According to the English publishing house Sweet & Maxwell, the number of libel cases brought by people alleged to be involved with terrorism almost tripled in England between 2006 and 2007.

Jha goes on to finish his first article on El Naschie’s libel case with this,

Sile Lane, a spokesperson for the Libel Reform campaign said: “Scientists expect publications like Nature to investigate and write about controversies within the scientific community. The threat of libel action is preventing scientific journals from discussing what is good and bad science. This case is another example of why we need libel law that has a clear strong public interest defence and a high threshold for bringing a case. The government has promised to reform the libel laws and this can’t come soon enough.”

I last wrote about the libel situation in the UK in my Nov. 12, 2010 posting, International call to action on libel laws in the UK.

Princeton goes Open Access; arXiv is 10 years old

Open access to science research papers seems only right given that most Canadian research is publicly funded. (As I understand it most research worldwide is publicly funded.)

This week, Princeton University declared that their researchers’ work would be mostly open access (from the Sept. 28, 2011 news item on physorg.com),

Prestigious US academic institution Princeton University has banned researchers from giving the copyright of scholarly articles to journal publishers, except in certain cases where a waiver may be granted.

Here’s a little more from Sunanda Creagh’s Sept. 28, 2011 posting (she’s based in Australia) on The Conversation blog,

The new rule is part of an Open Access policy aimed at broadening the reach of their scholarly work and encouraging publishers to adjust standard contracts that commonly require exclusive copyright as a condition of publication.

Universities pay millions of dollars a year for academic journal subscriptions. People without subscriptions, which can cost up to $25,000 a year for some journals or hundreds of dollars for a single issue, are often prevented from reading taxpayer funded research. Individual articles are also commonly locked behind pay walls.

Researchers and peer reviewers are not paid for their work but academic publishers have said such a business model is required to maintain quality.

This Sept. 29, 2011 article by James Chang for the Princetonian adds a few more details,

“In the interest of better disseminating the fruits of our scholarship to the world, we did not want to put it artificially behind a pay wall where much of the world won’t have access to it,” committee chair and computer science professor Andrew Appel ’81 said.

The policy passed the Faculty Advisory Committee on Policy with a unanimous vote, and the proposal was approved on Sept. 19 by the general faculty without any changes.

A major challenge for the committee, which included faculty members in both the sciences and humanities, was designing a policy that could comprehensively address the different cultures of publication found across different disciplines.

While science journals have generally adopted open-access into their business models, humanities publishers have not. In the committee, there was an initial worry that bypassing the scholarly peer-review process that journals facilitate, particularly in the humanities, could hurt the scholarly industry.

At the end, however, the committee said they felt that granting the University non-exclusive rights would not harm the publishing system and would, in fact, give the University leverage in contract negotiations.

That last comment about contract negotiations is quite interesting as it brings to mind the California boycott of the Nature journals last year, when Nature made a bold attempt to raise subscription fees substantially (by roughly 400%) after having given the University of California special deals for years (my June 15, 2010 posting).

Creagh’s posting features some responses from Australian academics such as Simon Marginson,

Having prestigious universities such as Princeton and Harvard fly the open access flag represented a step forward, said open access advocate Professor Simon Marginson from the University of Melbourne’s Centre for the Study of Higher Education.

“The achievement of free knowledge flows, and installation of open access publishing on the web as the primary form of publishing rather than oligopolistic journal publishing subject to price barriers, now depends on whether this movement spreads further among the peak research and scholarly institutions,” he said.

“Essentially, this approach – if it becomes general – normalises an open access regime and offers authors the option of opting out of that regime. This is a large improvement on the present position whereby copyright restrictions and price barriers are normal and authors have to attempt to opt in to open access publishing, or risk prosecution by posting their work in breach of copyright.”

“The only interests that lose out under the Princeton proposal are the big journal publishers. Everyone else gains.”

Whether you view Princeton’s action as a negotiating ploy or a high-minded attempt to give freer access to publicly funded research, it certainly puts pressure on the business models that scholarly publishers follow.

arXiv, celebrating its 10th anniversary this year, is another open access initiative although it didn’t start that way. From the Sept. 28, 2011 news item on physorg.com,

“I’ve heard a lot about how democratic the arXiv is,” Ginsparg [Paul Ginsparg, professor of physics and information science] said Sept. 23 in a talk commemorating the anniversary. People have, for example, praised the fact that the arXiv makes scientific papers easily available to scientists in developing countries where subscriptions to journals are not always affordable. “But what I was trying to do was set up a system that eliminated the hierarchy in my field,” he said. As a physicist at Los Alamos National Laboratory, “I was receiving preprints long before graduate students further down the food chain,” Ginsparg said. “When we have success we like to think it was because we worked harder, not just because we happened to have access.”

Bill Steele’s Sept. 27, 2011 article for Cornell University’s Chronicle Online notes,

One of the surprises, Ginsparg said, is that electronic publishing has not transformed the seemingly irrational scholarly publishing system in which researchers give their work to publishing houses from which their academic institutions buy it back by subscribing to journals. Scholarly publishing is still in transition, Ginsparg said, due to questions about how to fund electronic publication and how to maintain quality control. The arXiv has no peer-review process, although it does restrict submissions to those with scientific credentials.

But the lines of communication are definitely blurring. Ginsparg reported that a recent paper posted on the arXiv by Alexander Gaeta, Cornell professor of applied and engineering physics, was picked up by bloggers and spread out from there. The paper is to be published in the journal Nature and is still under a press embargo, but an article about it has appeared in the journal Science.

Interesting, eh? It seems that scholarly publishing need not disappear but there’s no question its business models are changing.

Vancouver: very liveable but not attractive to scientists?

The journal Nature has an intriguing article by Richard Van Noorden titled Cities: Building the best cities for science; Which urban regions produce the best research — and can their success be replicated? It’s an attempt to synthesize research on what makes certain cities notable for scientific achievement and ways to duplicate that success elsewhere.

Given the discussion about Canada’s scientific achievements combined with our perceived lack of innovation, I was curious as to whether any Canadian cities (particularly Vancouver) might be mentioned and in what context. First, here’s the story behind the research on ‘scientific’ cities (from the article),

When the Øresund bridge connecting Copenhagen, Denmark, with Malmö, Sweden, opened in 2000, both sides had much to gain. Sweden would get a physical connection to the rest of mainland Europe; residents of Copenhagen would have access to cheaper homes close to the city; and economic cooperation would increase. But Christian Matthiessen, a geographer at the University of Copenhagen, saw another benefit — the joining of two burgeoning research areas. “Everyone was talking about the transport of goods and business connections,” he says, “and we argued that another benefit would be to establish links between researchers.”

Ten years later, those links seem to be strong. The bridge encouraged the establishment of the ‘Øresund region’, a loose confederation of nine universities, 165,000 students and 12,000 researchers. Co-authorship between Copenhagen and the southernmost province of Sweden has doubled, says Matthiessen. The collaborations have attracted multinational funds from the European Union. And the European Spallation Source, a €1.4-billion (US$2-billion) neutron facility, is on track to begin construction in Lund, Sweden, in 2013.

The region’s promoters claim that it is emerging as a research hub of northern Europe, aided in part by construction of the bridge. For Matthiessen, the bridge also inspired the start of a unique research project — to catalogue the growth and connections of geographical clusters of scientific productivity all over the world. [emphases mine]

It’s not hard to believe that other cities and regions are eager to emulate the Copenhagen/Malmö experience. Van Noorden’s article synthesizes Matthiessen’s research with research done for Nature by Elsevier and finds some similar results; for example, Boston scores high while Beijing’s scientific output is increasing.

As for Vancouver,

Moreover, cities generally held to be the most ‘liveable’ in surveys — Vancouver and various urban centres in Canada and Australia — are often not associated with outstanding creativity [scientists are included as ‘creatives’ as defined by academics such as Richard Florida at the University of Toronto], says Peter Hall, a geographer at University College London. [emphases mine]

Van Noorden does not explore the question of why the most ‘liveable’ cities “are often not associated with outstanding creativity.”

I’m reminded of the excitement over the recruitment of the Canada Excellence Research Chairs (my May 20, 2010 posting) and am suggesting that, like liveability, attracting world-class researchers does not necessarily lead to the creative scientific and technological results hoped for so dearly.

As the article points out, there are many factors influencing the rise and fall of ‘science’ cities,

Many factors are out of the hands of urban planners and local policy- makers, however, and more sophisticated spatial scientometrics studies into why and where scientists cluster geographically could help to explain the influence of these factors. The evolution of a metropolitan region such as Øresund [Copenhagen/Malmö] was shaped by national and international policies and economics. National policies, for example, have largely determined the evolution of science cities in France, Spain, Portugal, South Africa and Russia in the past few decades by pushing money, and by extension scientists, into smaller cities in need of a boost.

Researchers such as Michel Grossetti at the University of Toulouse (France) are attempting sophisticated analyses to get at the heart of why scientists do or do not cluster in certain regions, as Van Noorden’s article notes.

I’m not sure what to make of this research simply because there’s been a lot of talk about how the internet and being online have obliterated geography (by working online, you can live wherever you choose as physical proximity is no longer necessary). This research suggests otherwise, i.e., physical or face-to-face contact is very important.

Graphene, the Nobel Prize, and levitating frogs

As you may have heard, two scientists (Andre Geim and Konstantin Novoselov) who performed groundbreaking research on graphene [Nov. 29, 2010: I corrected this entry Nov. 26, 2010; it originally stated that these researchers discovered graphene] have been awarded the 2010 Nobel Prize for Physics. In honour of their award, the journal Nature Materials is giving free access to a 2007 article authored by the scientists. From the news item on Nanowerk,

The 2007 landmark article in Nature Materials “The rise of graphene” by the just announced winners of the 2010 Nobel prize in physics, Andre Geim and Kosta Novoselov, has now been made available as a free access article.

Abstract:

Graphene is a rapidly rising star on the horizon of materials science and condensed-matter physics. This strictly two-dimensional material exhibits exceptionally high crystal and electronic quality, and, despite its short history, has already revealed a cornucopia of new physics and potential applications, which are briefly discussed here.

Here’s a description of the scientists and their work from the BBC News article by Paul Rincon,

Prof Geim, 51, is a Dutch national while Dr Novoselov, 36, holds British and Russian citizenship. Both are natives of Russia and started their careers in physics there.

The Nobels are valued at 10m Swedish kronor (£900,000; 1m euros; $1.5m).

They first worked together in the Netherlands before moving to the UK. They were based at the University of Manchester when they published their groundbreaking research paper on graphene in October 2004.

Dr Novoselov is among the youngest winners of a prize that normally goes to scientists with decades of experience.

Graphene is a form of carbon. It is a flat layer of carbon atoms tightly packed into a two-dimensional honeycomb arrangement.

Because it is so thin, it is also practically transparent. As a conductor of electricity it performs as well as copper, and as a conductor of heat it outperforms all other known materials.

The unusual electronic, mechanical and chemical properties of graphene at the molecular scale promise ultra-fast transistors for electronics.

Some scientists have predicted that graphene could one day replace silicon – which is the current material of choice for transistors.

It could also yield incredibly strong, flexible and stable materials and find applications in transparent touch screens or solar cells.

Geim and Novoselov first isolated fine sheets of graphene from the graphite which is widely used in pencils.

A layer of graphite 1mm thick actually consists of three million layers of graphene stacked on top of one another.
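
The arithmetic behind that last claim is easy to check. Here’s a quick back-of-envelope calculation (mine, not the BBC’s) that assumes the commonly cited interlayer spacing in graphite of roughly 0.335 nanometres:

```python
# Back-of-envelope check of the "three million layers per millimetre" claim.
# Assumes the commonly cited interlayer spacing of graphite, ~0.335 nm.

graphite_thickness_m = 1e-3        # a 1 mm slab of graphite
interlayer_spacing_m = 0.335e-9    # approximate spacing between graphene sheets

layers = graphite_thickness_m / interlayer_spacing_m
print(f"Approximate number of graphene layers in 1 mm of graphite: {layers:.2e}")
# Prints roughly 2.99e+06, i.e. about three million layers, consistent with the quote.
```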

The technique that Geim and Novoselov used to create the first graphene sheets both amuses and fascinates me (from the article by Kit Eaton on the Fast Company website),

The two scientists came up with the technique that first resulted in samples of graphene–peeling individual atoms-deep sheets of the material from a bigger block of pure graphite. The science here seems almost foolishly simple, but it took a lot of lateral thinking to dream up, and then some serious science to investigate: Geim and Novoselov literally “ripped” single sheets off the graphite by using regular adhesive tape. Once they’d confirmed they had grabbed micro-flakes of the material, Geim and Novoselov were responsible for some of the very early experiments into the material’s properties. Novel stuff indeed, but perhaps not so unexpected from a scientist (Geim) who the Nobel Committee notes once managed to make a frog levitate in a magnetic field.

I’ll get to the levitating frog in a minute, but first, the bit about using regular adhesive tape to peel single sheets, only atoms thick, from a larger block of graphite reminds me of how scientists at Northwestern University are using Shrinky Dinks (a children’s craft material) to create large-scale nanopatterns cheaply (my Aug. 16, 2010 posting).

It’s reassuring to me that despite all of the high-tech equipment that costs the earth, scientists still use fairly mundane, inexpensive objects to do some incredibly sophisticated work. The other thing I find reassuring is that Novoselov probably was not voted ‘most likely to be awarded a Nobel Prize’. Interestingly, Novoselov’s partner, Geim, was not welcomed into a physics career with open arms. From the news item on physorg.com,

Konstantin Novoselov, the Russian-born physicist who shared this year’s Nobel prize, struggled with physics as a student and was awarded a handful of B grades, his university said Wednesday.

The Moscow Physics and Technology University (MFTI) posted report cards on its website for Novoselov, who at 36 won the Nobel prize for physics with his research partner Andre Geim.

The reports reveal that he gained a handful of B grades in his term reports for theoretical and applied physics from 1991 to 1994.

He was also not strong on physical education — a compulsory subject at Russian universities — gaining B grades. And while he now lives in Britain, he once gained a C grade for English.

The university also revealed documents on Nobel prize winner Geim, who studied at the same university from 1976 to 1982. His brilliant academic career was only marred by a few B-grades for Marxist political economy and English.

Geim was turned down when he applied first to another Moscow university specialising in engineering and physics, and worked as a machinist at a factory making electrical instruments for eight months.

Given the increasing emphasis on marks, in Canadian universities at least, I noticed that Novoselov was not a straight-A student. As for Geim, it seems the fact that his father was German posed a problem. (You can find more details in the physorg.com article.)

As for levitating frogs, I first found this information in particle physicist Jon Butterworth’s October 5, 2010 posting on his Guardian blog,

Geim is also well known (or as his web page puts it “notorious”) for levitating frogs. This is a demonstration of the peculiar fact that all materials have some magnetism, albeit very weak in most cases, and that if you put them in a high enough magnetic field you can see the effects – and make them fly.

Why frogs? Well, no frogs were harmed in the experiments. But also, magnetism is a hugely important topic in physics that can seem a little dry to students …
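
To put a rough number on Butterworth’s point (my own back-of-envelope sketch, not from his post): a diamagnetic object levitates when the magnetic force per unit volume balances gravity. Treating frog tissue as water, the required field-gradient product works out to roughly 1,400 T²/m, which is why it takes a very strong research magnet to float a frog:

```python
import math

# Rough estimate of the field-gradient product needed to levitate a frog,
# treating frog tissue as water. All numbers are approximate and illustrative.
mu0 = 4 * math.pi * 1e-7   # vacuum permeability (T*m/A)
chi = -9.0e-6              # volume magnetic susceptibility of water (dimensionless)
rho = 1000.0               # density of water (kg/m^3)
g = 9.81                   # gravitational acceleration (m/s^2)

# Levitation condition: (|chi| / mu0) * B * dB/dz = rho * g
B_dBdz = mu0 * rho * g / abs(chi)
print(f"Required B * dB/dz ~ {B_dBdz:.0f} T^2/m")   # roughly 1400 T^2/m

# If the field falls off over ~0.1 m inside the magnet bore, the implied
# peak field is on the order of sqrt(B_dBdz * 0.1), i.e. well over 10 T.
print(f"Implied field scale ~ {math.sqrt(B_dBdz * 0.1):.0f} T")
```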

I hunted down a video of the levitating frog on YouTube,

As a particle physicist, Butterworth notes that the graphene work is outside his area of expertise so if you’re looking for a good, general explanation with some science detail added in for good measure, I’d suggest reading his succinct description.

California boycott of Nature journals?

It seems the California Digital Library (CDL), which manages subscriptions for the University of California, has been getting a deal on its Nature journal subscriptions, and now the publisher, Nature Publishing Group (NPG), has raised its subscription price by approximately 400 percent. Predictably, the librarians are protesting the rate hike. Less predictably, they are calling for a boycott.

The Pasco Phronesis blog notes,

The negotiations continue via press releases. Independent of the claims both sides are making, this fight brings out the point that journal subscription rates have continually increased at rates that challenge many universities to keep up. NPG is not the only company charging high rates, it’s just that the long-standing agreement with the CDL has become no longer sustainable for NPG. Given the continuing budget problems California faces, it seems quite likely that the CDL may no longer find NPG subscriptions sustainable.

The article by Jennifer Howard for the Chronicle of Higher Education offers some details about the proposed boycott. In addition to canceling subscriptions,

The voluntary boycott would “strongly encourage” researchers not to contribute papers to those journals or review manuscripts for them. It would urge them to resign from Nature’s editorial boards and to encourage similar “sympathy actions” among colleagues outside the University of California system.

The boycott’s impact on faculty is not something that immediately occurred to me, but Dr. Free-Ride at Adventures in Ethics and Science notes,

One bullet point that I think ought to be included above — something that I hope UC faculty and administrators will consider seriously — is that hiring, retention, tenure, and promotion decisions within the UC system should not unfairly penalize those who have opted to publish their scholarly work elsewhere, including in peer-reviewed journals that may not currently have the impact factor (or whatever other metric that evaluators lean on so as not to have to evaluate the quality of scholarly output themselves) that the NPG journals do. Otherwise, there’s a serious career incentive for faculty to knuckle under to NPG rather than honoring the boycott.

There is both support and precedent for such a boycott according to Howard’s article,

Keith Yamamoto is a professor of molecular biology and executive vice dean of the School of Medicine at UC-San Francisco. He stands ready to help organize a boycott, if necessary, a tactic he and other researchers used successfully in 2003 when another big commercial publisher, Elsevier, bought Cell Press and tried to raise its journal prices.

After the letter went out on Tuesday, Mr. Yamamoto received an “overwhelmingly positive” response from other university researchers. He said he’s confident that there will be broad support for a boycott among the faculty if the Nature Group doesn’t negotiate, even if it means some hardships for individual researchers.

“There’s a strong feeling that this is an irresponsible action on the part of NPG,” he told The Chronicle. That feeling is fueled by what he called “a broad awareness in the scientific community that the world is changing rather rapidly with respect to scholarly publication.”

Although researchers still have “a very strong tie to traditional journals” like Nature, he said, scientific publishing has evolved in the seven years since the Elsevier boycott. “In many ways it doesn’t matter where the work’s published, because scientists will be able to find it,” Mr. Yamamoto said.

I feel sympathy for both sides as neither is doing well economically these days. I do have to wonder at the decision to quadruple the subscription rates overnight, as it smacks of a negotiating tactic in a situation where the CDL had come to expect a significantly lowered subscription rate. With this tactic, there’s the potential for a perceived win-win situation. The CDL will triumphantly negotiate a lower subscription rate and the publisher will get the increase it wanted in the first place. That’s my theory.

US National Science Foundation on science and communicating about its impact on society and OECD report on innovation as a societal effort

On the heels of last week’s posting about the importance of a broad-ranging approach to science and innovation (see Rob Annan’s [Don’t leave Canada behind blog] latest post, Innovation isn’t just about science funding, and Poetry, molecular biophysics and innovation in Canada on this blog), I found these and other related issues being discussed elsewhere. (Side note: I’d love to hear from anyone who might be able to comment on these issues as they arise in other countries. I get most of my information from Canadian, US, and UK sources, so it does tend to be limited.)

Dave Bruggeman at Pasco Phronesis highlights an editorial and an article by Corie Lok about the US National Science Foundation and its efforts to have scientists demonstrate or communicate the broader societal impacts of their research work by making it a requirement in their grant application. From Dave’s posting,

Do read the pieces [published in the journal Nature], because I think the point about developing the infrastructure to support research on broader impacts and the implementation of those broader impacts is a necessary step. With a support system in place, researchers may be more inclined to take the criterion seriously. With infrastructure better able to measure impacts, science advocates may have better data from which to advance their causes. …

While there was some mention of efforts in the U.K. and the European Commission to do similar work in making more explicit the connections between scientific research and broader impacts, I was a bit disappointed that there wasn’t a bit more effort to make a stronger connection of lessons learned both for other countries and for the U.S. This is particularly true if new U.K. Science and Universities Minister Willets goes through with a campaign promise to give the Research Excellence Framework a more thorough review.

I encourage you to read Dave’s posting in its entirety as he adds thoughtful commentary and information about the situation in the US, while I focus on other aspects of the issue. From the Nature editorial,

The US National Science Foundation (NSF) is unique among the world’s science-funding agencies in its insistence that every proposal, large or small, must include an activity to demonstrate the research’s ‘broader impacts’ on science or society. This might involve the researchers giving talks at a local museum, developing new curricula or perhaps forming a start-up company. [emphases mine]

The requirement’s goal is commendable. It aims to enlist the scientific community to help show a return on society’s investment in research and to bolster the public’s trust in science — the latter being particularly important given the well-organized movements currently attacking concepts such as evolution and climate change.

I find the notion that starting up a new company is a way of demonstrating research’s broader societal impact rather unexpected and something I like and dislike in equal measure. I can certainly see where it would encourage the kind of innovation that the Canadian government wishes to foster, and I can see the benefits. On the other hand, I think there is a very strong focus on the almighty buck to the exclusion of other social benefits, as per “show a return on society’s investment in research” in the editorial excerpt’s 2nd paragraph. You’ll note that ‘fostering trust’ comes second, and it’s in the service of ensuring that cherished concepts are not attacked. (Aside: While Nature uses evolution and climate change for its examples here, scientists have fought bitterly over other cherished concepts which have over time proved to be incorrect. For years geneticists dismissed some 98% of the human genome as ‘junk DNA’. It turns out they were wrong. [See this article in New Scientist for more about the importance of ‘junk DNA’.])

As for the focus on ‘society’s return on its research investment’, there’s this from Corie Lok’s Nature article,

Research-funding agencies are forever trying to balance two opposing forces: scientists’ desire to be left alone to do their research, and society’s demand to see a return on its investment. [emphasis mine]

The European Commission, for example, has tried to strike that balance over the past decade by considering social effects when reviewing proposals under its various Framework programmes for research. And the Higher Education Funding Council for England announced last year that, starting in 2013, research will be assessed partly on its demonstrable benefits to the economy, society or culture.

But no agency has gone as far as the US National Science Foundation (NSF), which will not even consider a proposal unless it explicitly includes activities to demonstrate the project’s ‘broader impacts’ on science or society at large. “The criterion was established to get scientists out of their ivory towers and connect them to society,” explains Arden Bement, director of the NSF in Arlington, Virginia.

Here there seems to be a softening of the “return on investment” focus on money and the economy to include “broader impacts” on society and culture. Since the phrase ‘return on investment’ comes from the financial services sector, the meaning will default, unless carefully framed, to financial and economic considerations only.

I guess the question I have is, how do we value broader impacts? “I’m a Scientist, Get Me Out of Here” is a public engagement programme I’ve mentioned before (towards the end of this posting). How do you measure the outcome of a programme where kids stay after school to chat online with scientists about science? Sure, you can measure how many kids participate and whether more of them indicate an interest in studying science, but these are short-term measures. There are other possibilities, such as increased science literacy over their lifetimes or going on to become scientists, but those outcomes are at least 10 years away. There are also less directly measurable possibilities (such as using an idea from an online science chat to create a story or an art piece decades after the fact), but these are long term and don’t lend themselves easily to measurement.

One other issue I’d like to touch on is the scientists themselves having difficulty with the concept of ‘broader implications’. I sometimes ask them something along these lines: where is your work going to be used, or what are the practical applications? The answers can baffle me, as I receive a very stripped-down response which doesn’t answer the question adequately for someone (me) who isn’t an expert colleague. As I’m usually interviewing by email, I don’t have the option of asking all of the follow-up questions (often, more than one would be needed) to extract the information.

I’m hopeful that the situation will change with projects such as Terry at the University of British Columbia. From the About page, here’s the listing for a special course,

ASIC 200 – THAT‘S ARTS AND SCIENCE INTEGRATED COURSE – GLOBAL ISSUES.

What is ASIC200? Full course details can be found [here], but here’s a gander at the general course description:

“Human society confronts a range of challenges that are global in scope. These changes threaten planetary and local ecosystems, the stability and sustainability of human societies, and the health and well being of human individuals and communities. The natural and human worlds are now interacting at the global level to an unprecedented degree. Responding to these global issues will be the greatest challenge facing human society in the 21st century. In this course students will explore selected global issues from the perspective of both the physical and life sciences and the social sciences and humanities. The fundamental philosophy of the course is that global issues cannot be fully understood or addressed without a functional literacy in both the Sciences and the Arts. [emphasis mine] In this course, students will develop the knowledge and the practical skills required to become engaged citizens in the local, national, and international civil society dialogue on global issues.”

I like this approach as it requires that arts students also extend their range; it’s not just scientists doing all the work to expand understanding. Even the OECD (Organization for Economic Cooperation and Development) is getting in on the act with recommendations for more innovative societies. From Key Findings (p. 9) in The OECD Innovation Strategy: Getting a Head Start on Tomorrow,

Formal education is the basis for forming human capital, and policy makers should ensure that education systems help learners to adapt to the changing nature of innovation from the start. This requires curricula and pedagogies that equip students with the capacity to learn and apply new skills throughout their lives. Emphasis needs to be placed on skills such as critical thinking, creativity, communication, user orientation and teamwork, in addition to domain-specific and linguistic skills. [emphasis mine]

The recommendation is inclusive and not aimed at a specific group such as scientists, although the Key Findings and the Executive Summary (which can be found on this page) seem most heavily invested in developing recommendations for business/market/entrepreneurial innovation rather than the sciences or the humanities.

Measuring professional and national scientific achievements; Canadian science policy conferences

I’m going to start with an excellent study about publication bias in science papers and careerism that I stumbled across this morning on physorg.com (from the news item),

Dr [Daniele] Fanelli [University of Edinburgh] analysed over 1300 papers that declared to have tested a hypothesis in all disciplines, from physics to sociology, the principal author of which was based in a U.S. state. Using data from the National Science Foundation, he then verified whether the papers’ conclusions were linked to the states’ productivity, measured by the number of papers published on average by each academic.

Findings show that papers whose authors were based in more “productive” states were more likely to support the tested hypothesis, independent of discipline and funding availability. This suggests that scientists working in more competitive and productive environments are more likely to make their results look “positive”. It remains to be established whether they do this by simply writing the papers differently or by tweaking and selecting their data.
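
To make the design of that analysis concrete, here is a minimal sketch of the general approach described in the news item (illustrative only; the data, column names, and model choice below are my assumptions, not Fanelli’s actual code or dataset): score each paper as supporting or rejecting its hypothesis, attach the publication productivity of the first author’s state, and test whether productivity predicts a ‘positive’ outcome.

```python
# Minimal sketch of the kind of analysis described above (illustrative only;
# not Fanelli's actual code, data, or model specification).
import pandas as pd
import statsmodels.api as sm

# Hypothetical dataset: one row per paper.
papers = pd.DataFrame({
    "supports_hypothesis": [1, 1, 0, 1, 0, 1, 1, 0, 1, 0],  # 1 = "positive" result
    "state_productivity":  [4.2, 2.1, 3.0, 1.8, 1.2, 3.5, 4.4, 3.9, 2.6, 1.4],
    # papers per academic in the first author's state (made-up numbers)
})

# Logistic regression: does higher state productivity predict a positive result?
X = sm.add_constant(papers[["state_productivity"]])
model = sm.Logit(papers["supports_hypothesis"], X).fit(disp=False)
print(model.summary())
```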

I was happy to find out that Fanelli’s paper has been published in PLoS [Public Library of Science] ONE, an open access journal. From the paper [numbers in square brackets are citations found at the end of the published paper],

Quantitative studies have repeatedly shown that financial interests can influence the outcome of biomedical research [27], [28] but they appear to have neglected the much more widespread conflict of interest created by scientists’ need to publish. Yet, fears that the professionalization of research might compromise its objectivity and integrity had been expressed already in the 19th century [29]. Since then, the competitiveness and precariousness of scientific careers have increased [30], and evidence that this might encourage misconduct has accumulated. Scientists in focus groups suggested that the need to compete in academia is a threat to scientific integrity [1], and those guilty of scientific misconduct often invoke excessive pressures to produce as a partial justification for their actions [31]. Surveys suggest that competitive research environments decrease the likelihood to follow scientific ideals [32] and increase the likelihood to witness scientific misconduct [33] (but see [34]). However, no direct, quantitative study has verified the connection between pressures to publish and bias in the scientific literature, so the existence and gravity of the problem are still a matter of speculation and debate [35].

Fanelli goes on to describe his research methods and how he came to his conclusion that the pressure to publish may have a significant impact on ‘scientific objectivity’.

This paper provides an interesting counterpoint to a discussion about science metrics or bibliometrics taking place on (the journal) Nature’s website here. It was stimulated by Julia Lane’s recent article titled Let’s Make Science Metrics More Scientific. The article is open access and comments are invited. From the article [numbers in square brackets refer to citations found at the end of the article],

Measuring and assessing academic performance is now a fact of scientific life. Decisions ranging from tenure to the ranking and funding of universities depend on metrics. Yet current systems of measurement are inadequate. Widely used metrics, from the newly-fashionable Hirsch index to the 50-year-old citation index, are of limited use [1]. Their well-known flaws include favouring older researchers, capturing few aspects of scientists’ jobs and lumping together verified and discredited science. Many funding agencies use these metrics to evaluate institutional performance, compounding the problems [2]. Existing metrics do not capture the full range of activities that support and transmit scientific ideas, which can be as varied as mentoring, blogging or creating industrial prototypes.
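
For anyone unfamiliar with the ‘newly-fashionable Hirsch index’ mentioned above: a researcher’s h-index is the largest number h such that h of their papers each have at least h citations. A minimal implementation (my own illustration) shows both how easy the metric is to compute and how little of a scientist’s work it captures:

```python
def h_index(citations):
    """Return the h-index: the largest h such that h papers have >= h citations each."""
    cited = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(cited, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# Example: six papers with these citation counts give an h-index of 4
# (four papers with at least four citations each).
print(h_index([25, 8, 5, 4, 3, 0]))  # -> 4
```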

The range of comments is quite interesting; I was particularly taken by something Martin Fenner said,

Science metrics are not only important for evaluating scientific output, they are also great discovery tools, and this may indeed be their more important use. Traditional ways of discovering science (e.g. keyword searches in bibliographic databases) are increasingly superseded by non-traditional approaches that use social networking tools for awareness, evaluations and popularity measurements of research findings.

(Fenner’s blog along with more of his comments about science metrics can be found here. If this link doesn’t work, you can get to Fenner’s blog by going to Lane’s Nature article and finding him in the comments section.)

There are a number of issues here: how do we measure science work (citations in other papers?), how do we define the impact of science work (do we use social networks?), and, following from that, how do we measure impact when we’re talking about a social network?

Now, I’m going to add timeline as an issue. Over what period of time are we measuring the impact? I ask the question because of the memristor story. Dr. Leon Chua wrote a paper in 1971 that, apparently, didn’t receive all that much attention at the time but was cited in a 2008 paper which received widespread attention. Meanwhile, Chua had continued to theorize about memristors in a 2003 paper that received so little attention that he abandoned plans to write part 2. Since the recent burst of renewed interest in the memristor and his 2003 paper, Chua has decided to follow up with part 2, hopefully some time in 2011 (as per this April 13, 2010 posting). There’s one more piece to the puzzle: an earlier paper by F. Argall. From Blaise Mouttet’s April 5, 2010 comment here on this blog,

In addition HP’s papers have ignored some basic research in TiO2 multi-state resistance switching from the 1960’s which disclose identical results. See F. Argall, “Switching Phenomena in Titanium Oxide thin Films,” Solid State Electronics, 1968.
http://pdf.com.ru/a/ky1300.pdf

[ETA: April 22, 2010: Blaise Mouttet has provided a link to an article which provides more historical insight into the memristor story. http://knol.google.com/k/memistors-memristors-and-the-rise-of-strong-artificial-intelligence#]

How do you measure or even track all of that, short of some science writer taking the time to pursue the story and write a nonfiction book about it?

I’m not counselling that the process be abandoned, but since it seems that people are revisiting the issues, it’s an opportune time to get all the questions on the table.

As for its importance, this process of trying to establish better and new science metrics may seem irrelevant to most people but it has a much larger impact than even the participants appear to realize. Governments measure their scientific progress by touting the number of papers their scientists have produced, amongst other measures such as patents. Measuring the number of published papers has an impact on how governments want to be perceived internationally and within their own borders. Take, for example, something which has both international and national impact: the recent US National Nanotechnology Initiative (NNI) report to the President’s Council of Advisors on Science and Technology (PCAST). The NNI used the number of papers published as a way of measuring the US’s possibly eroding leadership in the field. (China published about 5000 while the US published about 3000.)

I don’t have much more to say other than I hope to see some new metrics.

Canadian science policy conferences

We have two such conferences, and both are in their second year in 2010. The first one is being held in Gatineau, Québec, May 12 – 14, 2010. Called Public Science in Canada: Strengthening Science and Policy to Protect Canadians [ed. note: protecting us from what?], the conference seems to be aimed at government employees. David Suzuki (TV host, scientist, environmentalist, author, etc.) and Preston Manning (ex-politico) will be co-presenting a keynote address titled: Speaking Science to Power.

The second conference takes place in Montréal, Québec, Oct. 20-22, 2010. It’s being produced by the Canadian Science Policy Centre. Other than a notice on the home page, there’s not much information about their upcoming conference yet.

I did note that Adam Holbrook (aka J. Adam Holbrook) is both a speaker at the May conference and an advisory committee member for the folks who are organizing the October conference. At the May conference, he will be participating in a session titled: Fostering innovation: the role of public S&T. Holbrook is a local (to me) professor as he works at Simon Fraser University, Vancouver, Canada.

That’s all for today.