Tag Archives: Glyn Moody

Internet Archive backup in Canada?

Having a backup of the world’s digital memory is a good idea whether or not the backup site is in Canada and regardless of who is president of the United States. The Internet Archive has announced that it is raising funds to allow for the creation of a backup site. Here’s more from a Dec. 1, 2016 news item on phys.org,

The Internet Archive, which keeps historical records of Web pages, is creating a new backup center in Canada, citing concerns about surveillance following the US presidential election of Donald Trump.

“On November 9 in America, we woke up to a new administration promising radical change. It was a firm reminder that institutions like ours, built for the long term, need to design for change,” said a blog post from Brewster Kahle, founder and digital librarian at the organization.

“For us, it means keeping our cultural materials safe, private and perpetually accessible. It means preparing for a Web that may face greater restrictions.”

While Trump has announced no new digital policies, his campaign comments have raised concerns his administration would be more active on government surveillance and less sensitive to civil liberties.

Glyn Moody in a Nov. 30, 2016 posting on Techdirt eloquently describes the Internet Archive’s role (Note: Links have been removed),

The Internet Archive is probably the most important site that most people have never heard of, much less used. It is an amazing thing: not just a huge collection of freely-available digitized materials, but a backup copy of much of today’s Web, available through something known as the Wayback Machine. It gets its name from the fact that it lets visitors view snapshots of vast numbers of Web pages as they have changed over the last two decades since the Internet Archive was founded — some 279 billion pages currently. That feature makes it an indispensable — and generally unique — record of pages and information that have since disappeared, sometimes because somebody powerful found them inconvenient.
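As a practical aside for anyone who wants to query that snapshot history programmatically: the Internet Archive exposes a simple, publicly documented availability endpoint that returns the archived capture closest to a requested date. Here’s a minimal Python sketch (the function name and example URL are mine; error handling is kept to a bare minimum):

    import json
    import urllib.parse
    import urllib.request

    def closest_snapshot(url, timestamp=""):
        # Ask the Wayback Machine availability API for the capture of `url`
        # closest to `timestamp` (YYYYMMDDhhmmss); returns a snapshot URL or None.
        query = "https://archive.org/wayback/available?url=" + urllib.parse.quote(url)
        if timestamp:
            query += "&timestamp=" + timestamp
        with urllib.request.urlopen(query) as response:
            data = json.loads(response.read().decode("utf-8"))
        closest = data.get("archived_snapshots", {}).get("closest")
        return closest["url"] if closest and closest.get("available") else None

    print(closest_snapshot("example.com", "20061231"))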

Even more eloquently, Brewster Kahle explains the initiative in his Nov. 29, 2016 posting on one of the Internet Archive blogs,

The history of libraries is one of loss.  The Library of Alexandria is best known for its disappearance.

Libraries like ours are susceptible to different fault lines:

Earthquakes,

Legal regimes,

Institutional failure.

So this year, we have set a new goal: to create a copy of Internet Archive’s digital collections in another country. We are building the Internet Archive of Canada because, to quote our friends at LOCKSS, “lots of copies keep stuff safe.” This project will cost millions. So this is the one time of the year I will ask you: please make a tax-deductible donation to help make sure the Internet Archive lasts forever. (FAQ on this effort).

Throughout history, libraries have fought against terrible violations of privacy—where people have been rounded up simply for what they read.  At the Internet Archive, we are fighting to protect our readers’ privacy in the digital world.

We can do this because we are independent, thanks to broad support from many of you. The Internet Archive is a non-profit library built on trust. Our mission: to give everyone access to all knowledge, forever. For free. The Internet Archive has only 150 staff but runs one of the top-250 websites in the world. Reader privacy is very important to us, so we don’t accept ads that track your behavior.  We don’t even collect your IP address. But we still need to pay for the increasing costs of servers, staff and rent.

You may not know this, but your support for the Internet Archive makes more than 3 million e-books available for free to millions of Open Library patrons around the world.

Your support has fueled the work of journalists who used our Political TV Ad Archive in their fact-checking of candidates’ claims.

It keeps the Wayback Machine going, saving 300 million Web pages each week, so no one will ever be able to change the past just because there is no digital record of it. The Web needs a memory, the ability to look back.
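For the technically curious, the “lots of copies keep stuff safe” principle quoted above boils down to two operations: keep several independent replicas of each item and periodically audit that they still agree. Here’s a toy Python sketch of that audit step, assuming hypothetical local file paths stand in for geographically separate copies:

    import hashlib
    from pathlib import Path

    def digest(path):
        # SHA-256 fingerprint of one replica's contents
        return hashlib.sha256(Path(path).read_bytes()).hexdigest()

    def audit(replica_paths):
        # All copies of an item should agree; any mismatch flags a damaged
        # or tampered replica that needs repair from the others.
        return len({digest(p) for p in replica_paths}) == 1

    copies = ["us/archive.dat", "ca/archive.dat", "eu/archive.dat"]  # hypothetical paths
    print("replicas agree" if audit(copies) else "mismatch - repair needed")

Real preservation networks such as LOCKSS do this with polling protocols among peer libraries rather than a single script, but the principle is the same.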

My two most relevant past posts on the topic of archives and memories are this May 18, 2012 piece on Luciana Duranti’s talk about authenticity and trust regarding digital documents and this March 8, 2012 posting about digital memory, which also features a mention of Brewster Kahle and the Internet Archive.

Copyright and patent protections and human rights

The United Nations (UN) and cultural rights don’t immediately leap to mind when the subjects of copyright and patents are discussed. A Mar. 13, 2015 posting by Tim Cushing on Techdirt and an Oct. 14, 2015 posting by Glyn Moody, also on Techdirt, explain the connection in the person of Farida Shaheed, the UN Special Rapporteur on cultural rights and the author of two UN reports, one on copyright and one on patents.

From the Mar. 13, 2015 posting by Tim Cushing,

… Farida Shaheed, has just delivered a less-than-complimentary report on copyright to the UN’s Human Rights Council. Shaheed’s report actually examines where copyright meshes with arts and science — the two areas it’s supposed to support — and finds it runs contrary to the rosy image of incentivized creation perpetuated by the MPAAs and RIAAs of the world.

Shaheed said a “widely shared concern stems from the tendency for copyright protection to be strengthened with little consideration to human rights issues.” This is illustrated by trade negotiations conducted in secrecy, and with the participation of corporate entities, she said.

She stressed the fact that one of the key points of her report is that intellectual property rights are not human rights. “This equation is false and misleading,” she said.

The last statement fires shots over the bows of “moral rights” purveyors, as well as those who view infringement as a moral issue, rather than just a legal one.

Shaheed also points out that the protections being installed around the world at the behest of incumbent industries are not necessarily reflective of creators’ desires. …

Glyn Moody’s Oct. 14, 2015 posting features Shaheed’s latest report on patents,

… As the summary to her report puts it:

There is no human right to patent protection. The right to protection of moral and material interests cannot be used to defend patent laws that inadequately respect the right to participate in cultural life, to enjoy the benefits of scientific progress and its applications, to scientific freedoms and the right to food and health and the rights of indigenous peoples and local communities.

Patents, when properly structured, may expand the options and well-being of all people by making new possibilities available. Yet, they also give patent-holders the power to deny access to others, thereby limiting or denying the public’s right of participation to science and culture. The human rights perspective demands that patents do not extend so far as to interfere with individuals’ dignity and well-being. Where patent rights and human rights are in conflict, human rights must prevail.

The report touches on many issues previously discussed here on Techdirt. For example, how pharmaceutical patents limit access to medicines by those unable to afford the high prices monopolies allow — a particularly hot topic in the light of TPP’s rules on data exclusivity for biologics. The impact of patents on seed independence is considered, and there is a warning about corporate sovereignty chapters in trade agreements, and the chilling effects they can have on the regulatory function of states and their ability to legislate in the public interest — for example, with patent laws.

I have two Canadian examples for data exclusivity and corporate sovereignty issues, both from Techdirt. There’s an Oct. 19, 2015 posting by Glyn Moody featuring a recent Health Canada move to threaten a researcher into suppressing information from human clinical trials,

… one of the final sticking points of the TPP negotiations [Trans Pacific Partnership] was the issue of data exclusivity for the class of drugs known as biologics. We’ve pointed out that the very idea of giving any monopoly on what amounts to facts is fundamentally anti-science, but that’s a rather abstract way of looking at it. A recent case in Canada makes plain what data exclusivity means in practice. As reported by CBC [Canadian Broadcasting Corporation] News, it concerns unpublished clinical trial data about a popular morning sickness drug:

Dr. Navindra Persaud has been fighting for four years to get access to thousands of pages of drug industry documents being held by Health Canada.

He finally received the material a few weeks ago, but now he’s being prevented from revealing what he has discovered.

That’s because Health Canada required him to sign a confidentiality agreement, and has threatened him with legal action if he breaks it.

The clinical trials data is so secret that he’s been told that he must destroy the documents once he’s read them, and notify Health Canada in writing that he has done so….

For those who aren’t familiar with it, the Trans Pacific Partnership is a proposed trade agreement including 12 countries (Australia, Brunei Darussalam, Canada, Chile, Japan, Malaysia, Mexico, New Zealand, Peru, Singapore, United States, and Vietnam) from the Pacific Rim. If all the countries sign on (it looks as if they will; Canada’s new Prime Minister as of Oct. 19, 2015 seems to be in favour of the agreement although he has yet to make a definitive statement), the TPP will represent a trading bloc that is almost double the size of the European Union.

An Oct. 8, 2015 posting by Mike Masnick provides a description of corporate sovereignty and of the Eli Lilly suit against the Canadian government.

We’ve pointed out a few times in the past that while everyone refers to the Trans Pacific Partnership (TPP) agreement as a “free trade” agreement, the reality is that there’s very little in there that’s actually about free trade. If it were truly a free trade agreement, then there would be plenty of reasons to support it. But the details show it’s not, and yet, time and time again, we see people supporting the TPP because “well, free trade is good.” …
… it’s that “harmonizing regulatory regimes” thing where the real nastiness lies, and where you quickly discover that most of the key factors in the TPP are not at all about free trade, but the opposite. It’s about as protectionist as can be. That’s mainly because of the really nasty corporate sovereignty clauses in the agreement (which are officially called “investor state dispute settlement” or ISDS in an attempt to make it sound so boring you’ll stop paying attention). Those clauses basically allow large incumbents to force the laws of countries to change to their will. Companies who feel that some country’s regulation somehow takes away “expected profits” can convene a tribunal, and force a country to change its laws. Yes, technically a tribunal can only issue monetary sanctions against a country, but countries who wish to avoid such monetary payments will change their laws.

Remember how Eli Lilly is demanding $500 million from Canada after Canada rejected some Eli Lilly patents, noting that the new compound didn’t actually do anything new and useful? Eli Lilly claims that using such a standard to reject patents unfairly attacks its expected future profits, and thus it can demand $500 million from Canadian taxpayers. Now, imagine that on all sorts of other systems.

Cultural rights, human rights, corporate rights. It would seem that corporate rights are going to run counter to human rights, if nothing else.

Canadian scientists in a national protest on May 19, 2015 and some thoughts on a more nuanced discussion about ‘science muzzles’

For anyone unfamiliar with Canada’s science muzzle, government scientists are not allowed to speak directly to the media and all requests must be handled by the communications department in the ministry. For one of the odder consequences of that policy, there’s my Sept. 16, 2010 posting about a scientist who wasn’t allowed to talk to media about his research on a 13,000-year-old flood that took place in the Canadian North. Adding insult to injury, his international colleagues were giving out all kinds of interviews.

Here’s a more recent incident (h/t Speaking Up For Canadian Science, May 20, 2015) recounted in a May 19, 2015 news item by Nicole Mortillaro for CTV (Canadian television) news online,

“Unlike Canadian scientists, I don’t have to ask permission to talk to you.”

That was one of the first things National Oceanic and Atmospheric Administration (NOAA) scientist Pieter Tans said when I called to reach him for comment about rising carbon dioxide levels reaching historic levels.

The topic itself was controversial: climate change is a hot-button topic for many. But getting in touch with NOAA was easy. In total, there were five email exchanges, all providing information about the topic and the arrangement of the interview.

Compare that to trying to get response from a Canadian federal department.

While I’ve had many frustrating dealings with various federal agencies, my most recent experience came as I was working on a story about ways Canadians could protect themselves as severe weather season approached. I wanted to mention the new federal national emergency warning system, Alert Ready. I reached out to Environment Canada for more information.

You’d think the federal government would want to let Canadians know about a new national emergency warning system and they do, in their fashion. For the whole story, there’s Mortillaro’s piece (which has an embedded video and more) but for the fast version, Mortillaro contacted the communications people a day before her Friday deadline asking for a spokesperson. The communications team missed the deadline although they did find a spokesperson who would be available on the Monday. Strangely or not, he proved to be hesitant to talk about the new system.

Getting back to the science muzzle protest of 2015 and the muzzle itself, there’s a May 17, 2015 article by Ivan Semeniuk for the Globe and Mail providing more detail about the muzzle and the then upcoming protest organized by the Professional Institute of the Public Service of Canada (PIPSC), which is currently in contract negotiations with the federal government. Echoing what I said in my Dec. 4, 2014 posting about the contract negotiations, the union is bargaining for the right to present science information, which is unprecedented in Canada (and, I suspect, internationally). Back to Semeniuk’s article,

With contract negotiations set to resume this week, there will also be a series of demonstrations for the Ottawa area on Tuesday to focus attention on the issue.

If successful, the effort could mark a precedent-setting turn in what the government’s critics portray as a struggle between intellectual independence and political prerogative.

“Our science members said to us: What’s more important than anything else is our ability to do our jobs as professionals,” said Peter Bleyer, an adviser with the Professional Institute of the Public Service of Canada, whose membership includes some 15,000 scientists and engineers.

Government scientists have always been vulnerable to those who hold the reins of power, but tensions have grown under the Conservatives. After the Tories enacted a wave of research program and facility cancellations in 2012, stories began to emerge of researchers who were blocked from responding to media requests about their work.

The onerous communications protocols apply even for stories about scientific advancements that are likely to reflect positively on the federal government. Last month [April 2015], after it was announced that Canada would become a partner in the Thirty Meter Telescope, The Globe and Mail had to appeal to the Prime Minister’s Office to facilitate an interview with the National Research Council astronomer leading the development of the telescope’s sophisticated adaptive-optics system.

Federal Information Commissioner Suzanne Legault is currently conducting an investigation into complaints that scientists have been muzzled by the Conservative government.

As Semeniuk notes at the end of his article in a quote from the US-based Union of Concerned Scientists’ representative, the problem is not new and not unique to Canada. For a ‘not unique’ example, the UK government seems to be interested in taking a similar approach to ‘muzzling’ scientists, according to an April 1, 2015 post by Glyn Moody for Techdirt (Note: Links have been removed),

Techdirt has been following for a while Canada’s moves to stop scientists from speaking out about areas where the facts of the situation don’t sit well with the Canadian government’s dogma-based policies. Sadly, it looks like the UK is taking the same route. It concerns a new code for the country’s civil servants, which will also apply to thousands of publicly-funded scientists. As the Guardian reports:

Under the new code, scientists and engineers employed at government expense must get ministerial approval before they can talk to the media about any of their research, whether it involves GM crops, flu vaccines, the impact of pesticides on bees, or the famously obscure Higgs boson.

The fear — quite naturally — is that ministers could take days before replying to requests, by which time news outlets will probably have lost interest. As a result of this change, science organizations have sent a letter to the UK government, expressing their “deep concern” about the code. …

As for ‘not new’, there’s always a tension between employer and employee about what constitutes free speech. Does an employee get fired for making gross, sexist comments in their free time at a soccer game? The answer in Ontario, Canada is yes, according to a May 14, 2015 article by Samantha Leal for Marie Claire magazine. Presumably there will be a lawsuit and we will find out if the firing is legally acceptable. Or, more cynically, this may prove to be a public relations ploy designed to spin the story in the employer’s favour while the employee takes some time off and returns unobtrusively at a later date.

I have a couple of final comments about free speech and employers’ and employees’ rights and responsibilities. First, up until the muzzles were applied, the Canadian government and its scientists seemed to have had a kind of unspoken agreement as to what constituted fair discussion of scientific research in the media. I vaguely recall a few kerfuffles over the years but nothing major. (If someone can recall an incident where a scientist working for the Canadian government seriously embarrassed it, please let me know in the comments.) So, this relatively new enthusiasm for choking off media coverage of Canadian science research seems misplaced at best. Unfortunately, it has raised standard tensions about what employees can and can’t say to new heights. Attempting to entrench the right to share science research in a bureaucratic process (a union contract) seems weirdly similar to the Harper government’s approach: like the muzzle, the union’s proposition adds a bureaucratic layer.

As for my second thought, I’m wondering how many people who cheered that soccer fan’s firing for making comments (albeit sexist comments) in his free time are protesting for free speech for Canadian government scientists.

It comes down to* matters of principle. Which ones do we want to follow and when do we apply them? Do principles apply only for those people and ideas we find acceptable?

I just wish there was a little more nuance brought to the ‘science muzzle in Canada’ discussion so we might veer away from heightened adversarial relationships between the government and its scientists.

* The phrase was originally published as “to a matters of principle …” and was corrected on May 22, 2015.

CRISPR genome editing tools and human genetic engineering issues

This post is going to feature a human genetic engineering roundup of sorts.

First, the field of human genetic engineering encompasses more than the human genome, as this paper (open access until June 5, 2015) notes in the context of a discussion about a specific CRISPR gene editing tool,

CRISPR-Cas9 Based Genome Engineering: Opportunities in Agri-Food-Nutrition and Healthcare by Rajendran Subin Raj Cheri Kunnumal, Yau Yuan-Yeu, Pandey Dinesh, and Kumar Anil. OMICS: A Journal of Integrative Biology. May 2015, 19(5): 261-275. doi:10.1089/omi.2015.0023 Published Online Ahead of Print: April 14, 2015

Here’s more about the paper from a May 7, 2015 Mary Ann Liebert publisher news release on EurekAlert,

Researchers have customized and refined a technique derived from the immune system of bacteria to develop the CRISPR-Cas9 genome engineering system, which enables targeted modifications to the genes of virtually any organism. The discovery and development of CRISPR-Cas9 technology, its wide range of potential applications in the agriculture/food industry and in modern medicine, and emerging regulatory issues are explored in a Review article published in OMICS: A Journal of Integrative Biology, …

“CRISPR-Cas9 Based Genome Engineering: Opportunities in Agri-Food-Nutrition and Healthcare” provides a detailed description of the CRISPR system and its applications in post-genomics biology. Subin Raj, Cheri Kunnumal Rajendran, Dinish Pandey, and Anil Kumar, G.B. Pant University of Agriculture and Technology (Uttarakhand, India) and Yuan-Yeu Yau, Northeastern State University (Broken Arrow, OK) describe the advantages of the RNA-guided Cas9 endonuclease-based technology, including the activity, specificity, and target range of the enzyme. The authors discuss the rapidly expanding uses of the CRISPR system in both basic biological research and product development, such as for crop improvement and the discovery of novel therapeutic agents. The regulatory implications of applying CRISPR-based genome editing to agricultural products is an evolving issue awaiting guidance by international regulatory agencies.

“CRISPR-Cas9 technology has triggered a revolution in genome engineering within living systems,” says OMICS Editor-in-Chief Vural Özdemir, MD, PhD, DABCP. “This article explains the varied applications and potentials of this technology from agriculture to nutrition to medicine.”

Intellectual property (patents)

The CRISPR technology has spawned a number of intellectual property (patent) issues, as a Dec. 21, 2014 post by Glyn Moody on Techdirt stated,

Although not many outside the world of the biological sciences have heard of it yet, the CRISPR gene editing technique may turn out to be one of the most important discoveries of recent years — if patent battles don’t ruin it. Technology Review describes it as:

… an invention that may be the most important new genetic engineering technique since the beginning of the biotechnology age in the 1970s. The CRISPR system, dubbed a “search and replace function” for DNA, lets scientists easily disable genes or change their function by replacing DNA letters. During the last few months, scientists have shown that it’s possible to use CRISPR to rid mice of muscular dystrophy, cure them of a rare liver disease, make human cells immune to HIV, and genetically modify monkeys.

Unfortunately, rivalry between scientists claiming the credit for key parts of CRISPR threatens to spill over into patent litigation:

[A researcher at the MIT-Harvard Broad Institute, Feng] Zhang cofounded Editas Medicine, and this week the startup announced that it had licensed his patent from the Broad Institute. But Editas doesn’t have CRISPR sewn up. That’s because [Jennifer] Doudna, a structural biologist at the University of California, Berkeley, was a cofounder of Editas, too. And since Zhang’s patent came out, she’s broken off with the company, and her intellectual property — in the form of her own pending patent — has been licensed to Intellia, a competing startup unveiled only last month. Making matters still more complicated, [another CRISPR researcher, Emmanuelle] Charpentier sold her own rights in the same patent application to CRISPR Therapeutics.

Things are moving quickly on the patent front, not least because the Broad Institute paid extra to speed up its application, conscious of the high stakes at play here:

Along with the patent came more than 1,000 pages of documents. According to Zhang, Doudna’s predictions in her own earlier patent application that her discovery would work in humans was “mere conjecture” and that, instead, he was the first to show it, in a separate and “surprising” act of invention.

The patent documents have caused consternation. The scientific literature shows that several scientists managed to get CRISPR to work in human cells. In fact, its easy reproducibility in different organisms is the technology’s most exciting hallmark. That would suggest that, in patent terms, it was “obvious” that CRISPR would work in human cells, and that Zhang’s invention might not be worthy of its own patent.

….

Ethical and moral issues

The CRISPR technology has reignited a discussion about the ethical and moral issues of human genetic engineering, some of which are reviewed in an April 7, 2015 posting about a moratorium by Sheila Jasanoff, J. Benjamin Hurlbut and Krishanu Saha for the Guardian science blogs (Note: A link has been removed),

On April 3, 2015, a group of prominent biologists and ethicists writing in Science called for a moratorium on germline gene engineering; modifications to the human genome that will be passed on to future generations. The moratorium would apply to a technology called CRISPR/Cas9, which enables the removal of undesirable genes, insertion of desirable ones, and the broad recoding of nearly any DNA sequence.

Such modifications could affect every cell in an adult human being, including germ cells, and therefore be passed down through the generations. Many organisms across the range of biological complexity have already been edited in this way to generate designer bacteria, plants and primates. There is little reason to believe the same could not be done with human eggs, sperm and embryos. Now that the technology to engineer human germlines is here, the advocates for a moratorium declared, it is time to chart a prudent path forward. They recommend four actions: a hold on clinical applications; creation of expert forums; transparent research; and a globally representative group to recommend policy approaches.

The authors go on to review precedents and reasons for the moratorium while suggesting we need better ways for citizens to engage with and debate these issues,

An effective moratorium must be grounded in the principle that the power to modify the human genome demands serious engagement not only from scientists and ethicists but from all citizens. We need a more complex architecture for public deliberation, built on the recognition that we, as citizens, have a duty to participate in shaping our biotechnological futures, just as governments have a duty to empower us to participate in that process. Decisions such as whether or not to edit human genes should not be left to elite and invisible experts, whether in universities, ad hoc commissions, or parliamentary advisory committees. Nor should public deliberation be temporally limited by the span of a moratorium or narrowed to topics that experts deem reasonable to debate.

I recommend reading the post in its entirety as there are nuances that are best appreciated in the entirety of the piece.

Shortly after this essay was published, Chinese scientists announced they had genetically modified (nonviable) human embryos. From an April 22, 2015 article by David Cyranoski and Sara Reardon in Nature, where the research and some of the ethical issues are discussed,

In a world first, Chinese scientists have reported editing the genomes of human embryos. The results are published in the online journal Protein & Cell and confirm widespread rumours that such experiments had been conducted — rumours that sparked a high-profile debate last month about the ethical implications of such work.

In the paper, researchers led by Junjiu Huang, a gene-function researcher at Sun Yat-sen University in Guangzhou, tried to head off such concerns by using ‘non-viable’ embryos, which cannot result in a live birth, that were obtained from local fertility clinics. The team attempted to modify the gene responsible for β-thalassaemia, a potentially fatal blood disorder, using a gene-editing technique known as CRISPR/Cas9. The researchers say that their results reveal serious obstacles to using the method in medical applications.

“I believe this is the first report of CRISPR/Cas9 applied to human pre-implantation embryos and as such the study is a landmark, as well as a cautionary tale,” says George Daley, a stem-cell biologist at Harvard Medical School in Boston, Massachusetts. “Their study should be a stern warning to any practitioner who thinks the technology is ready for testing to eradicate disease genes.”

….

Huang says that the paper was rejected by Nature and Science, in part because of ethical objections; both journals declined to comment on the claim. (Nature’s news team is editorially independent of its research editorial team.)

He adds that critics of the paper have noted that the low efficiencies and high number of off-target mutations could be specific to the abnormal embryos used in the study. Huang acknowledges the critique, but because there are no examples of gene editing in normal embryos he says that there is no way to know if the technique operates differently in them.

Still, he maintains that the embryos allow for a more meaningful model — and one closer to a normal human embryo — than an animal model or one using adult human cells. “We wanted to show our data to the world so people know what really happened with this model, rather than just talking about what would happen without data,” he says.

This, too, is a good and thoughtful read.

There was an official response in the US to the publication of this research, described in an April 29, 2015 post by David Bruggeman on his Pasco Phronesis blog (Note: Links have been removed),

In light of Chinese researchers reporting their efforts to edit the genes of ‘non-viable’ human embryos, the National Institutes of Health (NIH) Director Francis Collins issued a statement (H/T Carl Zimmer).

“NIH will not fund any use of gene-editing technologies in human embryos. The concept of altering the human germline in embryos for clinical purposes has been debated over many years from many different perspectives, and has been viewed almost universally as a line that should not be crossed. Advances in technology have given us an elegant new way of carrying out genome editing, but the strong arguments against engaging in this activity remain. These include the serious and unquantifiable safety issues, ethical issues presented by altering the germline in a way that affects the next generation without their consent, and a current lack of compelling medical applications justifying the use of CRISPR/Cas9 in embryos.” …

More than CRISPR

As well, following on the April 22, 2015 Nature article about the controversial research, the Guardian published an April 26, 2015 post by Filippa Lentzos, Koos van der Bruggen and Kathryn Nixdorff which makes the case that CRISPR techniques are not the only worrisome genetic engineering technology,

The genome-editing technique CRISPR-Cas9 is the latest in a series of technologies to hit the headlines. This week Chinese scientists used the technology to genetically modify human embryos – the news coming less than a month after a prominent group of scientists had called for a moratorium on the technology. The use of ‘gene drives’ to alter the genetic composition of whole populations of insects and other life forms has also raised significant concern.

But the technology posing the greatest, most immediate threat to humanity comes from ‘gain-of-function’ (GOF) experiments. This technology adds new properties to biological agents such as viruses, allowing them to jump to new species or making them more transmissible. While these are not new concepts, there is grave concern about a subset of experiments on influenza and SARS viruses which could metamorphose them into pandemic pathogens with catastrophic potential.

In October 2014 the US government stepped in, imposing a federal funding pause on the most dangerous GOF experiments and announcing a year-long deliberative process. Yet, this process has not been without its teething-problems. Foremost is the de facto lack of transparency and open discussion. Genuine engagement is essential in the GOF debate where the stakes for public health and safety are unusually high, and the benefits seem marginal at best, or non-existent at worst. …

Particularly worrisome about the GOF process is that it is exceedingly US-centric and lacks engagement with the international community. Microbes know no borders. The rest of the world has a huge stake in the regulation and oversight of GOF experiments.

Canadian perspective?

I became somewhat curious about the Canadian perspective on all this genome engineering discussion and found a focus on agricultural issues in the single Canadian blog piece I located. The April 30, 2015 posting by Lisa Willemse on Genome Alberta’s Livestock blog has a twist in the final paragraph,

The spectre of undesirable inherited traits as a result of DNA disruption via genome editing in human germline has placed the technique – and the ethical debate – on the front page of newspapers around the globe. Calls for a moratorium on further research until both the ethical implications can be worked out and the procedure better refined and understood, will undoubtedly temper research activities in many labs for months and years to come.

On the surface, it’s hard to see how any of this will advance similar research in livestock or crops – at least initially.

Groups already wary of so-called “frankenfoods” may step up efforts to prevent genome-edited food products from hitting supermarket shelves. In the EU, where a stringent ban on genetically-modified (GM) foods is already in place, there are concerns that genome-edited foods will be captured under this rubric, holding back many perceived benefits. This includes pork and beef from animals with disease resistance, lower methane emissions and improved feed-to-food ratios, milk from higher-yield or hornless cattle, as well as food and feed crops with better, higher quality yields or weed resistance.

Still, at the heart of the human germline editing is the notion of a permanent genetic change that can be passed on to offspring, leading to concerns of designer babies and other advantages afforded only to those who can pay. This is far less of a concern in genome-editing involving crops and livestock, where the overriding aim is to increase food supply for the world’s population at lower cost. Given this, and that research for human medical benefits has always relied on safety testing and data accumulation through experimentation in non-human animals, it’s more likely that any moratorium in human studies will place increased pressure to demonstrate long-term safety of such techniques on those who are conducting the work in other species.

Willemse’s last paragraph offers a strong contrast to the Guardian and Nature pieces.

Finally, there’s a May 8, 2015 posting (which seems to be an automated summary of an article in the New Scientist) on a blog maintained by the Canadian Raelian Movement. These are people who believe that alien scientists landed on earth and created all the forms of life on this planet. You can find more on their About page. In case it needs to be said, I do not subscribe to this belief system but I do find it interesting in and of itself, and because theirs is one of the few Canadian sites I could find offering an opinion on the matter, even if that opinion comes in the form of a borrowed piece from the New Scientist.

Cellulose nanocrystals (CNC), also known as nanocrystalline cellulose (NCC), and toxicity; some Celluforce news; anti-petroleum extremists

The February 2015 issue of Industrial Biotechnology is hosting a special in depth research section on the topic of cellulose nanotechnology. A Feb. 19, 2015 news item on Phys.org features a specific article in the special section (Note: A link has been removed),

Novel nanomaterials derived from cellulose have many promising industrial applications, are biobased and biodegradable, and can be produced at relatively low cost. Their potential toxicity—whether ingested, inhaled, on contact with the skin, or on exposure to cells within the body—is a topic of intense discussion, and the latest evidence and insights on cellulose nanocrystal toxicity are presented in a Review article in Industrial Biotechnology.

Maren Roman, PhD, Virginia Tech, Blacksburg, VA, describes the preparation of cellulose nanocrystals (CNCs) and highlights the key factors that are an essential part of studies to assess the potential adverse health effects of CNCs by various types of exposure. In the article “Toxicity of Cellulose Nanocrystals: A Review”, Dr. Roman discusses the current literature on the pulmonary, oral, dermal, and cytotoxicity of CNCs, provides an in-depth view on their effects on human health, and suggests areas for future research.

There has been much Canadian investment, both federal and provincial, in cellulose nanocrystals (CNC). There’s also been a fair degree of confusion regarding the name. In Canada, which was a research leader initially, the material was called nanocrystalline cellulose (NCC) but over time a new term, cellulose nanocrystals (CNC), was coined. The new name was more in keeping with the naming conventions for other nanoscale cellulose materials such as cellulose nanofibrils, etc. Hopefully, this confusion will resolve itself now that Celluforce, a Canadian company, has trademarked NCC. (More about Celluforce later in this post.)

Getting back to toxicity and CNC, here’s a link to and a citation for Roman’s research paper,

Toxicity of Cellulose Nanocrystals: A Review by Roman Maren. Industrial Biotechnology. February 2015, 11(1): 25-33. doi:10.1089/ind.2014.0024.

The article is open access at this time. For anyone who doesn’t have the time to read it, here’s the conclusion,

Current studies of the oral and dermal toxicity of CNCs have shown a lack of adverse health effects. The available studies, however, are still very limited in number (two oral toxicity studies and three dermal toxicity studies) and in the variety of tested CNC materials (CelluForce’s NCC). Additional oral and dermal toxicity studies are needed to support the general conclusion that CNCs are nontoxic upon ingestion or contact with the skin. Studies of pulmonary and cytotoxicity, on the other hand, have yielded discordant results. The questions of whether CNCs have adverse health effects on inhalation and whether they elicit inflammatory or oxidative stress responses at the cellular level therefore warrant further investigation. The toxicity of CNCs will depend strongly on their physicochemical properties—in particular, surface chemistry, including particle charge, and degree of aggregation, which determines particle shape and dimensions. Therefore, these properties—which in turn depend strongly on the cellulose source, CNC preparation procedure, and post-processing or sample preparation methods, such as lyophilization, aerosolization, sonication, or sterilization—need to be carefully measured in the final samples.

Another factor that might affect the outcomes of toxicity studies are sample contaminants, such as endotoxins or toxic chemical impurities. Samples for exposure tests should therefore be carefully analyzed for such contaminants prior to testing. Ideally, because detection of toxic chemical contaminants may be difficult, control experiments should be carried out with suitable blanks from which the CNCs have been removed, for example by membrane filtration. Moreover, especially in cytotoxicity assessments, the effect of CNCs on pH and their aggregation in the cell culture medium need to be monitored. Only by careful particle characterization and exclusion of interfering factors will we be able to develop a detailed understanding of the potential adverse health effects of CNCs.

If I understand this rightly, CNC seems safe (more or less) when ingested orally (food/drink) or applied to the skin (dermal application) but inhalation seems problematic and there are indications that this could lead to inflammation of lung cells. Other conclusions suggest both the source for the cellulose and the CNC preparation method may affect its toxicity. I encourage you to read the whole research paper as the author provides good explanations of the terms and summaries of previous research, as well as some very well-considered research.

Here’s more about Industrial Biotechnology’s special research section in the February 2015 issue, from a Feb. 19, 2015 Mary Ann Liebert publishers press release (also on EurekAlert*),

The article is part of an IB IN DEPTH special research section entitled “Cellulose Nanotechnology: Fundamentals and Applications,” led by Guest Editors Jose Moran-Mirabal, PhD and Emily Cranston, PhD, McMaster University, Hamilton, Canada. In addition to the Review article by Dr. Roman, the issue includes Reviews by M. Rose, M. Babi, and J. Moran-Mirabal (“The Study of Cellulose Structure and Depolymerization Through Single-Molecule Methods”) and by X.F. Zhao and W.T. Winter (“Cellulose/cellulose-based nanospheres: Perspectives and prospective”); Original Research articles by A. Rivkin, T. Abitbol, Y. Nevo, et al. (“Bionanocomposite films from resilin-CBD bound to cellulose nanocrystals”), and P. Criado, C. Fraschini, S. Salmieri, et al. (“Evaluation of antioxidant cellulose nanocrystals and applications in gellan gum films”); and the Overview article “Cellulose Nanotechnology on the Rise,” by Drs. Moran-Mirabal and Cranston.

Meanwhile Celluforce announces a $4M ‘contribution’ from Sustainable Development Technology Canada (SDTC), from a Feb. 16, 2015 Celluforce news release,

CelluForce welcomes the announcement by Sustainable Development Technology Canada (SDTC) of a contribution of $4.0 million to optimize the extraction process of Nanocrystalline Cellulose (NCC) from dry wood pulp and develop applications for its use in the oil and gas sector. The announcement was made in Quebec City today [Feb. 16, 2015] by the Honourable Greg Rickford, Minister of Natural Resources and Minister for the Federal Economic Development Initiative for Northern Ontario.

NCC is a fundamental building block of trees that can be extracted from the forest biomass and has unique properties that offer a wide range of potential applications. Measured in units as small as nanometres, these tiny structures have strength properties comparable to steel and will have uses in a variety of industrial sectors. In particular, NCC is touted as having the potential to significantly advance the oil and gas industry.

“Our Government is positioning Canada as a global leader in the clean technology sector by supporting innovative projects aimed at growing our economy while contributing to a cleaner environment,” said the Honourable Greg Rickford, Canada’s Minister of Natural Resources. [emphasis mine] “By developing our resources responsibly, exploring next-generation transportation and advancing clean energy technology, the projects announced today will create jobs and improve innovation opportunities in Quebec and across Canada.”

“World-class research led to the development of this ground breaking extraction process and placed Canada at the leading edge of NCC research”, stated René Goguen, Acting President of CelluForce Inc. “This announcement by SDTC sets the stage for the pre-commercial development of applications that will not only support Canada’s forest sector but also the oil and gas sector, both of which are important drivers of the Canadian economy.”

This project will further improve and optimize the process developed by CelluForce to extract nanocrystalline cellulose (CelluForce NCC™) from dry wood pulp. In addition to improving the extraction process, this project will investigate additional applications for the oil-and-gas industry such as cementing using this renewable forestry resource.

There’s very little information in this news release other than the fact that CelluForce’s $4M doesn’t need to be repaid, seeing as it’s described as a ‘contribution’ rather than an investment. The difference between a contribution and a grant, which is what these funds used to be called, somewhat mystifies me unless this is a translation issue.

As for the news release content, it is remarkably scant. This $4M will be spent on improving the extraction process and on applications for the oil and gas industry. Neither the improvements nor the possible applications are described. Hopefully, the government has some means of establishing whether or not those funds (sorry, the contribution) were used for the purposes described.

I am glad to see this in this news release, “Our Government is positioning Canada as a global leader in the clean technology sector …” although I’m not sure how it fits with recent attempts to brand environmentalists as part of an ‘anti-petroleum’ movement as described in a Feb. 19, 2015 post by Glyn Moody for Techdirt (Note: A link has been removed),

As Techdirt has been warning for some time, one of the dangers with the flood of “anti-terrorist” laws and powers is that they are easily redirected against other groups for very different purposes. A story in the Globe and Mail provides another chilling reminder of how that works:

The RCMP [Royal Canadian Mounted Police] has labelled the “anti-petroleum” movement as a growing and violent threat to Canada’s security, raising fears among environmentalists that they face increased surveillance, and possibly worse, under the Harper government’s new terrorism legislation.

As the Globe and Mail article makes clear, environmentalists are now being considered as part of an “anti-petroleum” movement. That’s not just some irrelevant rebranding: it means that new legislation supposedly targeting “terrorism” can be applied.

It seems logically incoherent to me that the government wants clean tech while condemning environmentalists. Whether or not you buy climate change science (for the record, I do), you have to admit that we are running out of petroleum. At heart, both the government and the environmentalists have to agree that we need new sources for fuel. It doesn’t make any sense to spend valuable money, time, and resources on pursuing environmentalists.

This business about the ‘anti-petroleum’ movement reminds me of a copyright kerfuffle involving James Moore, currently the Minister of Industry, and writer Cory Doctorow. Moore, Minister of Canadian Heritage at the time, labeled Doctorow a ‘radical extremist’ at some sort of public event, regarding his (Doctorow’s) views on copyright. The comments achieved notoriety when it appeared that Moore and the organizers denied the comments ever took place. The organizers seemed to have edited the offending video and Moore made public denials. You can read more about the incident in my June 25, 2010 post. Here’s an excerpt from the post which may explain why I feel there is a similarity,

… By simultaneously linking individuals who use violence to achieve their ends (the usual application for the term ‘radical extremists’) to individuals who are debating, discussing, and writing commentaries critical of your political aims you render the term into a joke and you minimize the violence associated with it.

Although with ‘anti-petroleum’, it seems they could decide any dissension is a form of violence. It should be noted that in Canada the Ministry of Industry is tightly coupled with the Ministry of Natural Resources since the Canadian economy has been and continues to be largely resource-based.

For anyone interested in CelluForce and NCC/CNC, here’s a sampling of my previous posts on the topic,

CelluForce (nanocrystalline cellulose) plant opens (Dec. 15, 2011)

Double honours for NCC (ArboraNano and CelluForce recognized) (May 25, 2012)

You say nanocrystalline cellulose, I say cellulose nanocrystals; CelluForce at Japan conference and at UK conference (Oct. 15, 2012)

Designing nanocellulose (?) products in Finland; update on Canada’s CelluForce (Oct. 3, 2013) Note: CelluForce stopped producing NCC due to a growing stockpile.

There’s a lot more about CNC on this blog* should you care to search. One final note, I gather there’s a new interim boss at CelluForce, René Goguen replacing Jean Moreau.

* EurekAlert link added Feb. 20, 2015.

* ‘on the CNC blog’ changed to ‘about CNC on this blog’ on March 4, 2015.

CRISPR gene editing technique and patents

I have two items about the CRISPR gene editing technique. The first concerns a new use for the CRISPR technique developed by researchers at Johns Hopkins University School of Medicine described in a Jan. 5, 2015 Johns Hopkins University news release on EurekAlert,

A powerful “genome editing” technology known as CRISPR has been used by researchers since 2012 to trim, disrupt, replace or add to sequences of an organism’s DNA. Now, scientists at Johns Hopkins Medicine have shown that the system also precisely and efficiently alters human stem cells.

“Stem cell technology is quickly advancing, and we think that the days when we can use iPSCs [human-induced pluripotent stem cells] for human therapy aren’t that far away,” says Zhaohui Ye, Ph.D., an instructor of medicine at the Johns Hopkins University School of Medicine. “This is one of the first studies to detail the use of CRISPR in human iPSCs, showcasing its potential in these cells.”

CRISPR originated from a microbial immune system that contains DNA segments known as clustered regularly interspaced short palindromic repeats. The engineered editing system makes use of a DNA-cutting enzyme together with a piece of small RNA that guides the tool to where researchers want to introduce cuts or other changes in the genome.

Previous research has shown that CRISPR can generate genomic changes or mutations through these interventions far more efficiently than other gene editing techniques, such as TALEN, short for transcription activator-like effector nuclease.

Despite CRISPR’s advantages, a recent study suggested that it might also produce a large number of “off-target” effects in human cancer cell lines, specifically modification of genes that researchers didn’t mean to change.

To see if this unwanted effect occurred in other human cell types, Ye; Linzhao Cheng, Ph.D., a professor of medicine and oncology in the Johns Hopkins University School of Medicine; and their colleagues pitted CRISPR against TALEN in human iPSCs, adult cells reprogrammed to act like embryonic stem cells. Human iPSCs have already shown enormous promise for treating and studying disease.

The researchers compared the ability of both genome editing systems to either cut out pieces of known genes in iPSCs or cut out a piece of these genes and replace it with another. As model genes, the researchers used JAK2, a gene that when mutated causes a bone marrow disorder known as polycythemia vera; SERPINA1, a gene that when mutated causes alpha1-antitrypsin deficiency, an inherited disorder that may cause lung and liver disease; and AAVS1, a gene that’s been recently discovered to be a “safe harbor” in the human genome for inserting foreign genes.

Their comparison found that when simply cutting out portions of genes, the CRISPR system was significantly more efficient than TALEN in all three gene systems, inducing up to 100 times more cuts. However, when using these genome editing tools for replacing portions of the genes, such as the disease-causing mutations in JAK2 and SERPINA1 genes, CRISPR and TALEN showed about the same efficiency in patient-derived iPSCs, the researchers report.

Contrary to results of the human cancer cell line study, both CRISPR and TALEN had the same targeting specificity in human iPSCs, hitting only the genes they were designed to affect, the team says. The researchers also found that the CRISPR system has an advantage over TALEN: It can be designed to target only the mutation-containing gene without affecting the healthy gene in patients, where only one copy of a gene is affected.

The findings, together with a related study that was published earlier in a leading journal of stem cell research (Cell Stem Cell), offer reassurance that CRISPR will be a useful tool for editing the genes of human iPSCs with little risk of off-target effects, say Ye and Cheng.

“CRISPR-mediated genome editing opens the door to many genetic applications in biologically relevant cells that can lead to better understanding of and potential cures for human diseases,” says Cheng.

Here’s a link to and citation for the paper by the Johns Hopkins researchers,

Efficient and Allele-Specific Genome Editing of Disease Loci in Human iPSCs by Cory Smith, Leire Abalde-Atristain, Chaoxia He, Brett R Brodsky, Evan M Braunstein, Pooja Chaudhari, Yoon-Young Jang, Linzhao Cheng and Zhaohui Ye. Molecular Therapy (24 November 2014) | doi:10.1038/mt.2014.226

This paper is behind a paywall.
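As a side note on the mechanism the news release describes: the guide RNA directs Cas9 to a roughly 20-nucleotide DNA sequence that must sit immediately next to a short motif called a PAM (NGG for the commonly used Cas9). Here’s a deliberately simplified Python sketch of scanning one strand of DNA for candidate target sites (the function name and demo sequence are mine; real guide design also considers the opposite strand, off-target scores, and more):

    GUIDE_LEN = 20  # typical spacer length for the commonly used Cas9

    def find_target_sites(dna, pam="GG"):
        # Return (position, protospacer) pairs where a 20-nt sequence is
        # immediately followed by an N-G-G PAM on this strand.
        dna = dna.upper()
        sites = []
        for i in range(len(dna) - GUIDE_LEN - 2):
            if dna[i + GUIDE_LEN + 1 : i + GUIDE_LEN + 3] == pam:
                sites.append((i, dna[i : i + GUIDE_LEN]))
        return sites

    demo = "ATGCGTACCGTTAGCATCGATCGTACGGTAGCTAGCTAAGG"  # made-up sequence
    for pos, protospacer in find_target_sites(demo):
        print(pos, protospacer)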

Not mentioned in the Johns Hopkins Medicine news release is a brewing patent battle over the CRISPR technique. A Dec. 31, 2014 post by Glyn Moody for Techdirt lays out the situation (Note: Links have been removed),

Although not many outside the world of the biological sciences have heard of it yet, the CRISPR gene editing technique may turn out to be one of the most important discoveries of recent years — if patent battles don’t ruin it. Technology Review describes it as:

    an invention that may be the most important new genetic engineering technique since the beginning of the biotechnology age in the 1970s. The CRISPR system, dubbed a “search and replace function” for DNA, lets scientists easily disable genes or change their function by replacing DNA letters. During the last few months, scientists have shown that it’s possible to use CRISPR to rid mice of muscular dystrophy, cure them of a rare liver disease, make human cells immune to HIV, and genetically modify monkeys.

Unfortunately, rivalry between scientists claiming the credit for key parts of CRISPR threatens to spill over into patent litigation …

Moody describes three scientists vying for control via their patents,

[A researcher at the MIT-Harvard Broad Institute, Feng] Zhang cofounded Editas Medicine, and this week the startup announced that it had licensed his patent from the Broad Institute. But Editas doesn’t have CRISPR sewn up.

That’s because [Jennifer] Doudna, a structural biologist at the University of California, Berkeley, was a cofounder of Editas, too. And since Zhang’s patent came out, she’s broken off with the company, and her intellectual property — in the form of her own pending patent — has been licensed to Intellia, a competing startup unveiled only last month.

Making matters still more complicated, [another CRISPR researcher, Emmanuelle] Charpentier sold her own rights in the same patent application to CRISPR Therapeutics.

Moody notes,

Whether obvious or not, it looks like the patent granted may complicate turning the undoubtedly important CRISPR technique into products. That, in its turn, will mean delays for life-changing and even life-saving therapies: for example, CRISPR could potentially allow the defective gene that causes serious problems for those with cystic fibrosis to be edited to produce normal proteins, thus eliminating those problems.

It’s dispiriting to think that potentially valuable therapies could be lost to litigation battles, particularly since the researchers are academics and their work was funded by taxpayers. In any event, I hope sanity reigns and they are able to avoid actions that will grind research to a standstill.

Does digitizing material mean it’s safe? A tale of Canada’s Fisheries and Oceans scientific libraries

As has been noted elsewhere, the federal government of Canada has shut down a number of Fisheries and Oceans Canada libraries in a cost-saving exercise. The government is hoping to save some $440,000 in the 2014-15 fiscal year by digitizing, consolidating, and discarding the libraries and their holdings.

One would imagine that this is being done in a measured, thoughtful fashion but one would be wrong.

Andrew Nikiforuk in a December 23, 2013 article for The Tyee wrote one of the first articles about the closure of the fisheries libraries,

Scientists say the closure of some of the world’s finest fishery, ocean and environmental libraries by the Harper government has been so chaotic that irreplaceable collections of intellectual capital built by Canadian taxpayers for future generations has been lost forever.

Glyn Moody in a Jan. 7, 2014 post on Techdirt noted this,

What’s strange is that even though the rationale for this mass destruction is apparently in order to reduce costs, opportunities to sell off more valuable items have been ignored. A scientist is quoted as follows:

“Hundreds of bound journals, technical reports and texts still on the shelves, presumably meant for the garbage or shredding. I saw one famous monograph on zooplankton, which would probably fetch a pretty penny at a used science bookstore… anybody could go in and help themselves, with no record kept of who got what.”

Gloria Galloway in a Jan. 7, 2014 article for the Globe and Mail adds more details about what has been lost,

Peter Wells, an adjunct professor and senior research fellow at the International Ocean Institute at Dalhousie University in Halifax, said it is not surprising few members of the public used the libraries. But “the public benefits by the researchers and the different research labs being able to access the information,” he said.

Scientists say it is true that most modern research is done online.

But much of the material in the DFO libraries was not available digitally, Dr. Wells said, adding that some of it had great historical value. And some was data from decades ago that researchers use to determine how lakes and rivers have changed.

“I see this situation as a national tragedy, done under the pretext of cost savings, which, when examined closely, will prove to be a false motive,” Dr. Wells said. “A modern democratic society should value its information resources, not reduce, or worse, trash them.”

Dr. Ayles [Burton Ayles, a former DFO regional director and the former director of science for the Freshwater Institute in Winnipeg] said the Freshwater Institute had reports from the 1880s and some that were available nowhere else. “There was a whole core people who used that library on a regular basis,” he said.

Dr. Ayles pointed to a collection of three-ringed binders, occupying seven metres of shelf space, that contained the data collected during a study in the 1960s and 1970s of the proposed Mackenzie Valley pipeline. For a similar study in the early years of this century, he said, “scientists could go back to that information and say, ‘What was the baseline 30 years ago? What was there then and what is there now?’ ”

When asked how much of the discarded information has been digitized, the government did not provide an answer, but said the process continues.

Today, Margo McDiarmid’s Jan. 30, 2014 article for the Canadian Broadcasting Corporation (CBC) news online further explores digitization of the holdings,

Fisheries and Oceans is closing seven of its 11 libraries by 2015. It’s hoping to save more than $443,000 in 2014-15 by consolidating its collections into four remaining libraries.

Shea [Fisheries and Oceans Minister Gail Shea] told CBC News in a statement Jan. 6 that all copyrighted material has been digitized and the rest of the collection will be soon. The government says that putting material online is a more efficient way of handling it.

But documents from her office show there’s no way of really knowing that is happening.

“The Department of Fisheries and Oceans’ systems do not enable us to determine the number of items digitized by location and collection,” says the response by the minister’s office to MacAulay’s inquiry. [emphasis mine]

The documents also show that the department had to figure out what to do with 242,207 books and research documents from the libraries being closed. It kept 158,140 items and offered the remaining 84,067 to libraries outside the federal government.

Shea’s office told CBC that the books were also “offered to the general public and recycled in a ‘green fashion’ if there were no takers.”

The fate of thousands of books appears to be “unknown,” although the documents’ numbers show 160 items from the Maurice Lamontagne Library in Mont-Joli, Que., were “discarded.”  A Radio-Canada story in June about the library showed piles of volumes in dumpsters.

And the numbers prove a lot more material was tossed out. The bill to discard material from four of the seven libraries totals $22,816.76.

Leaving aside the issue of whether or not rare books were given away or put in dumpsters, it’s not confidence-building when the government minister can’t offer information about which books have been digitized and where they might be located online.

Interestingly, Fisheries and Oceans is not the only department/ministry shutting down libraries (from McDiarmid’s CBC article),

Fisheries and Oceans is just one of the 14 federal departments, including Health Canada and Environment Canada, that have been shutting physical libraries and digitizing or consolidating the material into closed central book vaults.

I was unaware of the problems with Health Canada’s libraries but Laura Payton’s and Max Paris’ Jan. 20, 2014 article for CBC news online certainly raised my eyebrows,

Health Canada scientists are so concerned about losing access to their research library that they’re finding workarounds, with one squirrelling away journals and books in his basement for colleagues to consult, says a report obtained by CBC News.

The draft report from a consultant hired by the department warned it not to close its library, but the report was rejected as flawed and the advice went unheeded.

Before the main library closed, the inter-library loan functions were outsourced to a private company called Infotrieve, the consultant wrote in a report ordered by the department. The library’s physical collection was moved to the National Science Library on the Ottawa campus of the National Research Council last year.

“Staff requests have dropped 90 per cent over in-house service levels prior to the outsource. This statistic has been heralded as a cost savings by senior HC [Health Canada] management,” the report said.

“However, HC scientists have repeatedly said during the interview process that the decrease is because the information has become inaccessible — either it cannot arrive in due time, or it is unaffordable due to the fee structure in place.”

….

The report noted the workarounds scientists used to overcome their access problems.

Mueller [Dr. Rudi Mueller, who left the department in 2012] used his contacts in industry for scientific literature. He also went to university libraries where he had a faculty connection.

The report said Health Canada scientists sometimes use the library cards of university students in co-operative programs at the department.

Unsanctioned libraries have been created by science staff.

“One group moved its 250 feet of published materials to an employee’s basement. When you need a book, you email ‘Fred,’ and ‘Fred’ brings the book in with him the next day,” the consultant wrote in his report.

“I think it’s part of being a scientist. You find a way around the problems,” Mueller told CBC News.

Unsanctioned, underground libraries aside, the assumption that digitizing documents and books ensures access is false.  Glyn Moody in a Nov. 12, 2013 article for Techdirt gives a chastening example of how vulnerable our digital memories are,

The Internet Archive is the world’s online memory, holding the only copies of many historic (and not-so-historic) Web pages that have long disappeared from the Web itself.

Bad news:

This morning at about 3:30 a.m. a fire started at the Internet Archive’s San Francisco scanning center.

Good news:

no one was hurt and no data was lost. Our main building was not affected except for damage to one electrical run. This power issue caused us to lose power to some servers for a while.

Bad news:

Some physical materials were in the scanning center because they were being digitized, but most were in a separate locked room or in our physical archive and were not lost. Of those materials we did unfortunately lose, about half had already been digitized. We are working with our library partners now to assess.

That loss is unfortunate, but imagine if the fire had been in the main server room holding the Internet Archive’s 2 petabytes of data. Wisely, the project has placed copies at other locations …

That’s good to know, but it seems rather foolish for the world to depend on the Internet Archive always being able to keep all its copies up to date, especially as the quantity of data that it stores continues to rise. This digital library is so important in historical and cultural terms: surely it’s time to start mirroring the Internet Archive around the world in many locations, with direct and sustained support from multiple governments.
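Mirroring only helps, of course, if the copies can be checked against one another and repaired when one goes bad. Here’s a minimal sketch of that idea in Python; the site names and data are invented, and a real archive would work at petabyte scale with far more sophisticated auditing.

```python
# Sketch: verify replicas of an archived object held at several locations.
# A majority vote on cryptographic digests flags the damaged copy.
import hashlib
from collections import Counter

replicas = {
    "san_francisco": b"archived page content",
    "richmond": b"archived page content",
    "alexandria": b"archived page contXnt",  # corrupted copy (bit rot, fire...)
}

digests = {site: hashlib.sha256(data).hexdigest() for site, data in replicas.items()}
consensus, _ = Counter(digests.values()).most_common(1)[0]

for site, d in digests.items():
    status = "ok" if d == consensus else "damaged; restore from a good copy"
    print(f"{site}: {status}")
```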

In addition to the issue of vulnerability, there’s also the issue of authenticity, from my June 5, 2013 posting about science, archives and memories,

… Luciana Duranti [Professor and Chair, MAS {Master of Archival Studies} Program at the University of British Columbia and Director, InterPARES] and her talk titled, Trust and Authenticity in the Digital Environment: An Increasingly Cloudy Issue, which took place in Vancouver (Canada) last year (mentioned in my May 18, 2012 posting).

Duranti raised many, many issues that most of us don’t consider when we blithely store information in the ‘cloud’ or create blogs that turn out to be repositories of a sort (and then don’t know what to do with them; ça c’est moi – that’s me). She also previewed a Sept. 26 – 28, 2013 conference to be hosted in Vancouver by UNESCO (United Nations Educational, Scientific, and Cultural Organization), “Memory of the World in the Digital Age: Digitization and Preservation.” (UNESCO’s Memory of the World programme hosts a number of these themed conferences and workshops.)

The Sept. 2013 UNESCO ‘memory of the world’ conference in Vancouver seems rather timely in retrospect. The Council of Canadian Academies (CCA) announced that Dr. Doug Owram would be chairing their Memory Institutions and the Digital Revolution assessment (mentioned in my Feb. 22, 2013 posting; scroll down 80% of the way) and, after checking recently, I noticed that the Expert Panel has been assembled and it includes Duranti. Here’s the assessment description from the CCA’s ‘memory institutions’ webpage,

Library and Archives Canada has asked the Council of Canadian Academies to assess how memory institutions, which include archives, libraries, museums, and other cultural institutions, can embrace the opportunities and challenges of the changing ways in which Canadians are communicating and working in the digital age.

Background

Over the past three decades, Canadians have seen a dramatic transformation in both personal and professional forms of communication due to new technologies. Where the early personal computer and word-processing systems were largely used and understood as extensions of the typewriter, advances in technology since the 1980s have enabled people to adopt different approaches to communicating and documenting their lives, culture, and work. Increased computing power, inexpensive electronic storage, and the widespread adoption of broadband computer networks have thrust methods of communication far ahead of our ability to grasp the implications of these advances.

These trends present both significant challenges and opportunities for traditional memory institutions as they work towards ensuring that valuable information is safeguarded and maintained for the long term and for the benefit of future generations. It requires that they keep track of new types of records that may be of future cultural significance, and of any changes in how decisions are being documented. As part of this assessment, the Council’s expert panel will examine the evidence as it relates to emerging trends, international best practices in archiving, and strengths and weaknesses in how Canada’s memory institutions are responding to these opportunities and challenges. Once complete, this assessment will provide an in-depth and balanced report that will support Library and Archives Canada and other memory institutions as they consider how best to manage and preserve the mass quantity of communications records generated as a result of new and emerging technologies.

The Council’s assessment is running concurrently with the Royal Society of Canada’s expert panel assessment on Libraries and Archives in 21st century Canada. Though similar in subject matter, these assessments have a different focus and follow a different process. The Council’s assessment is concerned foremost with opportunities and challenges for memory institutions as they adapt to a rapidly changing digital environment. In navigating these issues, the Council will draw on a highly qualified and multidisciplinary expert panel to undertake a rigorous assessment of the evidence and of significant international trends in policy and technology now underway. The final report will provide Canadians, policy-makers, and decision-makers with the evidence and information needed to consider policy directions. In contrast, the RSC panel focuses on the status and future of libraries and archives, and will draw upon a public engagement process.

So, the government is shutting down libraries in order to save money and praying (?) that the materials have been digitized and that adequate care has been taken to ensure they will not be lost in some disaster or other. Meanwhile the Council of Canadian Academies is conducting an assessment of memory institutions in the digital age. The approach seems backwards.

On a more amusing note, Rick Mercer parodies at least one way scientists have found to circumvent the cost-cutting exercise in an excerpt (approximately 1 min.) from his Jan. 29, 2014 Rick Mercer Report telecast (thanks Roz),

Mercer’s comment about sports and the preferences of Canada’s Prime Minister, Stephen Harper, is a reference to Harper’s expressed desire to write a book about hockey and possibly a veiled reference to Harper’s successful move to prorogue parliament during the 2010 Winter Olympic Games in Vancouver, in what many observers suggested was a strategy allowing Harper to attend the games at his leisure.

Whether or not you agree with the decision to shut down some libraries, the implementation seems to have been a remarkably sloppy affair.

Late to the Stand Up for Science party/protest of Sept. 16, 2013

It’s not the first time I’ve missed a party and I have to say thank you to my US colleagues (David Bruggeman’s Oct. 3, 2013 posting on his Pasco Phronesis blog and Glyn Moody’s Oct. 3, 2013 Techdirt posting) for ensuring I found out about the Sept. 16, 2013 series of cross-Canada protest rallies, Stand Up for Science, regarding the ‘muzzling’ of science communication in Canada.

Suzanne Goldenberg’s Sept. 16, 2013 article for the Guardian provides a good overview of the situation. I have excerpted the bits that are new to me (Note: Links have been removed),

Researchers in 16 Canadian cities have called protests on Monday against science policies introduced under the government of Stephen Harper, which include rules barring government researchers from talking about their own work with journalists and, in some cases, even fellow researchers.

“There’s a lot of concern in Canada right now about government scientists not being allowed to speak about their research to the public because of the new communications policies being put into place,” said Katie Gibbs, director of a new group, Evidence for Democracy, which is organising the protests.

This year, [2013] Canada’s department of fisheries and oceans released a new set of rules barring scientists from discussing their findings with the public or publishing in academic journals. [emphasis mine]

The new guidelines required all scientists to submit papers to a departmental manager for review – even after they had been accepted for publication by an academic journal.

The proposed rules became public earlier this year after American scientists on a joint US-Canadian project in the eastern Arctic took exception at the new conditions.

The government was accused this month [Sept. 2013] of delaying its annual report on greenhouse gas emissions – usually released in mid-summer – because it was universally expected to show a double-digit rise in carbon pollution.

The government is actually trying to bar people from having their work published in academic journals? Well, brava to Katie Gibbs and Evidence for Democracy for organizing the Sept. 16, 2013 rallies and their predecessor, the Death of Evidence Rally (for more info. about that previous event there’s my July 10, 2012 posting announcing and discussing the ‘Death of Evidence’ and my July 13, 2013 posting which featured a roundup of comments regarding the 2012 rally).

Interestingly, the Evidence for Democracy (E4D) Board of Directors seems to be largely composed of biologists (from the Who We Are webpage),

E4D’s Board of Directors

Katie Gibbs

Dr. Gibbs recently completed a PhD in Biology from the University of Ottawa and has a diverse background organizing and managing various causes and campaigns.

Scott Findlay

Dr. Findlay is an Associate Professor of Biology at the University of Ottawa and former Director of the University of Ottawa’s Institute of the Environment.

Kathryn O’Hara

Ms. O’Hara is an Associate Professor of Journalism at Carleton University and holds the CTV Chair in Science Broadcast Journalism.

Susan Pinkus

Ms. Pinkus has an M.Sc. in conservation biology and community ecology and is a senior staff scientist at Ecojustice.

By pointing out the concentration of biologists within the E4D board, I’m trying to hint at the difficulty of communicating across disciplinary boundaries (biologists network with other biologists partly because it’s easier to find people who belong to the same organizations and attend the same conferences). When Canada’s geography is also taken into account, the fact that the group managed to organize events in 17 cities across the country (the cities are listed on the E4D Stand Up for Science webpage; scroll down about 40% of the page), according to Ivan Semeniuk’s Sept. 16, 2013 article about the rallies for the Globe and Mail newspaper, becomes quite laudable.

Matthew Robinson’s Sept. 17, 2013 article for the Vancouver Sun gives a BC flavour to the proceedings (the city of Vancouver is located in the province of British Columbia, Canada),

David Suzuki [biologist], Alexandra Morton [biologist] and other prominent B.C.-based scientists rallied on the steps of the Vancouver Art Gallery Monday to decry what they called the continued muzzling of federal scientists and to reason for broadened science funding. ….

Separate from the protests, NDP science and technology critic Kennedy Stewart [Member of Parliament from BC] tabled Monday [Sept. 16, 2013] a motion for federal departments to permit scientists to speak freely to the media and the public.

The motion would, among other things, allow federal scientists to present personal viewpoints and prohibit elected officials, ministerial staff and communications officers from directing scientists to suppress or alter their findings.

Gibbs said the timing of the NDP motion was not co-ordinated.

Meanwhile, another group, Scientists for the Right to Know, has been emerging. From the About us page,

In 2012, a Working Group of Science for Peace started to look into the muzzling of science and scientists in Canada. Muzzling is a broad process that may be carried out by governments, industry, universities, and others. However, we quickly realized that the current federal government is actually waging a war on basic science. While other Canadian governments have engaged in muzzling as well, we have never witnessed the type of systematic attack on basic science that is happening right now in Canada.

We therefore decided to focus at present on the muzzling of science on the part of the federal government. We also decided that we needed to find a means to engage the public at large. The focus of our work shifted, then, from researching the issue to advocating for unmuzzled science. It became clear that the work the group was envisaging would exceed the mandate of Science for Peace – education. We decided to form a new organization frankly devoted to advocacy.

The inaugural meeting of Scientists for the Right to Know took place in April 2013. We are currently in the process of incorporating as a non-profit organization. …

The organization has an executive composed of three people (from the About us page),

President – Margrit Eichler

Treasurer – Phyllis Creighton

Secretary – Sue Kralik

Margrit Eichler is Professor Emerita of OISE/UT. She received her PhD in Sociology from Duke University. She was elected a Fellow of the Royal Society of Canada and of the European Academy of Sciences, and she received an honorary doctorate from Brock University. She has remained an activist during her entire academic career.

Phyllis Creighton is a translations editor with the renowned Dictionary of Canadian Biography/Dictionnaire biographique du Canada. She holds an MA in history from the University of Toronto. An ethicist and author — and Raging Granny — she has long worked for peace, nuclear disarmament, human rights, social justice, conservation, and environmental protection. She holds the Anglican Award of Merit, the Order of Ontario, and the Queen’s Diamond Jubilee Medal.

Sue Kralik is a graduate of the University of Western Ontario and earned a Master of Education degree at the University of Toronto. Sue recently retired as a school principal. While working as a principal, Sue led school-based anti-war and social justice initiatives and remains committed to working for peace, social justice, and respect for the environment.

Interestingly, Scientists for the Right to Know has sprung forth from the Humanities and Social Sciences communities.

It’s an exciting time for Canadian science culture; I just wish the communication between these groups and other interested groups and individuals was better. That way, I (and, I suspect, other Canadian science bloggers) wouldn’t be left wondering how we managed to miss significant events such as the inception of the Evidence for Democracy group, the inception of the Scientists for the Right to Know group, and the Stand Up for Science cross-Canada rally.

Memories, science, archiving, and authenticity

This is going to be one of my more freewheeling excursions into archiving and memory. I’ll be starting with a movement afoot in the US government to give citizens open access to science research, moving on to a network dedicated to archiving nanoscience- and nanotechnology-oriented information, examining the notion of authenticity in regard to the Tiananmen Square incident on June 4, 1989, and finishing with the Council of Canadian Academies’ Expert Panel on Memory Institutions and the Digital Revolution.

In his June 4, 2013 posting on the Pasco Phronesis blog, David Bruggeman features information and an overview of the US Office of Science and Technology Policy’s efforts to introduce open access to science research for citizens (Note: Links have been removed),

Back in February, the Office of Science and Technology Policy (OSTP) issued a memorandum to federal science agencies on public access for research results.  Federal agencies with over $100 million in research funding have until August 22 to submit their access plans to OSTP.  This access includes research publications, metadata on those publications, and underlying research data (in a digital format).

A collection of academic publishers, including the Association of American Publishers and the organization formerly known as the American Association for the Advancement of Science (publisher of Science), has offered a proposal for a publishing industry repository for public access to federally funded research that they publish.

David provides a somewhat caustic perspective on the publishers’ proposal, while Jocelyn Kaiser’s June 4, 2013 article for ScienceInsider describes it in more detail (Note: Links have been removed),

Organized in part by the Association of American Publishers (AAP), which represents many commercial and nonprofit journals, the group calls its project the Clearinghouse for the Open Research of the United States (CHORUS). In a fact sheet that AAP gave to reporters, the publishers describe CHORUS as a “framework” that would “provide a full solution for agencies to comply with the OSTP memo.”

As a starting point, the publishers have begun to index papers by the federal grant numbers that supported the work. That index, called FundRef, debuted in beta form last week. You can search by agency and get a list of papers linked to the journal’s own websites through digital object identifiers (DOIs), widely used ID codes for individual papers. The pilot project involved just a few agencies and publishers, but many more will soon join FundRef, says Fred Dylla, executive director of the American Institute of Physics. (AAAS, which publishes ScienceInsider, is among them and has also signed on to CHORUS.)

The next step is to make the full-text papers freely available after agencies decide on embargo dates, Dylla says. (The OSTP memo suggests 12 months but says that this may need to be adjusted for some fields and journals.) Eventually, the full CHORUS project will also allow searches of the full-text articles. “We will make the corpus available for anybody’s search tool,” says Dylla, who adds that search agreements will be similar to those that publishers already have with Google Scholar and Microsoft Academic Search.

I couldn’t find any mention in Kaiser’s article as to how long the materials would be available. Is this supposed to be an archive as well as a repository? Regardless, I found the beta project, FundRef, a little confusing. The link from the ScienceInsider article takes you to this May 28, 2013 news release,

FundRef, the funder identification service from CrossRef [crossref.org], is now available for publishers to contribute funding data and for retrieval of that information. FundRef is the result of collaboration between funding agencies and publishers that correlates grants and other funding with the scholarly output of that support.

Publishers participating in FundRef add funding data to the bibliographic metadata they already provide to CrossRef for reference linking. FundRef data includes the name of the funder and a grant or award number. Manuscript tracking systems can incorporate a taxonomy of 4000 global funder names, which includes alternate names, aliases, and abbreviations enabling authors to choose from a standard list of funding names. Then the tagged funding data will travel through publishers’ production systems to be stored at CrossRef.
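For anyone who wants to poke at this kind of funder-linked metadata programmatically, here’s a minimal sketch. It assumes the CrossRef REST API and its funder “works” route; the endpoint shape and the funder ID (which I believe identifies the US National Science Foundation in the funder registry) should be treated as assumptions to verify against CrossRef’s current documentation.

```python
# Sketch: list a few works linked to a funder via the CrossRef REST API.
# Endpoint and funder ID are assumptions for illustration.
import json
import urllib.request

FUNDER_ID = "10.13039/100000001"  # assumed funder registry ID for the NSF
url = f"https://api.crossref.org/funders/{FUNDER_ID}/works?rows=5"

with urllib.request.urlopen(url) as response:
    data = json.load(response)

for item in data["message"]["items"]:
    titles = item.get("title") or ["(untitled)"]  # CrossRef titles are lists
    print(f'{item["DOI"]}: {titles[0]}')
```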

I was hoping that clicking on the FundRef button would take me to a database that I could test or tour. At this point, I wouldn’t have described the project as being at the beta stage (from a user’s perspective) as they are still building it and gathering data. However, there is lots of information on the FundRef webpage including an Additional Resources section featuring a webinar,

Attend an Introduction to FundRef Webinar – Thursday, June 6, 2013 at 11:00 am EDT

You do need to sign up for the webinar. Happily, it is open to international as well as US participants.

Getting back to my question about whether or not this effort is also an archive of sorts, there is a project closer to home (nanotechnology-wise, anyway) that touches on these issues from an unexpected perspective. From the Nanoscience and Emerging Technologies in Society (NETS): sharing research and learning tools About webpage,

The Nanoscience and Emerging Technologies in Society: Sharing Research and Learning Tools (NETS) is an IMLS-funded [Institute of Museum and Library Services] project to investigate the development of a disciplinary repository for the Ethical, Legal and Social Implications (ELSI) of nanoscience and emerging technologies research. NETS partners will explore future integration of digital services for researchers studying ethical, legal, and social implications associated with the development of nanotechnology and other emerging technologies.

NETS will investigate digital resources to advance the collection, dissemination, and preservation of this body of research, addressing the challenge of marshaling resources, academic collaborators, appropriately skilled data managers, and digital repository services for large-scale, multi-institutional and disciplinary research projects. The central activity of this project involves a spring 2013 workshop that will gather key researchers in the field and digital librarians together to plan the development of a disciplinary repository of data, curricula, and methodological tools.

Societal dimensions research investigating the impacts of new and emerging technologies in nanoscience is among the largest research programs of its kind in the United States, with an explicit mission to communicate outcomes and insights to the public. By 2015, scholars across the country affiliated with this program will have spent ten years collecting qualitative and quantitative data and developing analytic and methodological tools for examining the human dimensions of nanotechnology. The sharing of data and research tools in this field will foster a new kind of social science inquiry and ensure that the outcomes of research reach public audiences through multiple pathways.

NETS will be holding a stakeholders workshop June 27 – 28, 2013 (invite only), from the workshop description webpage,

What is the value of creating a dedicated Nano ELSI repository?
The benefits of having these data in a shared infrastructure are: the centralization of research and ease of discovery; uniformity of access; standardization of metadata and the description of projects; and facilitation of compliance with funder requirements for data management going forward. Additional benefits of this project will be the expansion of data curation capabilities for data repositories into the nanotechnology domain, and research into the development of disciplinary repositories, for which very little literature exists.

What would a dedicated Nano ELSI repository contain?
Potential materials that need to be curated are both qualitative and quantitative in nature, including:

  • survey instruments, data, and analyses
  • interview transcriptions and analyses
  • images or multimedia
  • reports
  • research papers, books, and their supplemental data
  • curricular materials

What will the Stakeholder Workshop accomplish?
The Stakeholder Workshop aims to bring together the key researchers and digital librarians to draft a detailed project plan for the implementation of a dedicated Nano ELSI repository. The Workshop will be used as a venue to discuss questions such as:

  • How can a repository extend research in this area?
  • What is the best way to collect all the research in this area?
  • What tools would users envision using with this resource?
  • Who should maintain and staff a repository like this?
  • How much would a repository like this cost?
  • How long will it take to implement?

What is expected of Workshop participants?
The workshop will bring together key researchers and digital librarians to discuss the requirements for a dedicated Nano ELSI repository. To inform that discussion, some participants will be requested to present on their current or past research projects and collaborations. In addition, workshop participants will be enlisted to contribute to the draft of the final project report and make recommendations for the implementation plan.
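To make the curation discussion a little more concrete, here’s what a minimal, Dublin Core-flavoured record for one of the item types listed above might look like. This is a sketch only; every field value is invented, and an actual repository would adopt its own metadata schema.

```python
# Hypothetical repository record for a Nano ELSI item (all values invented).
record = {
    "dc:title": "Public perceptions of nanotechnology: survey instrument",
    "dc:creator": ["Example Researcher"],
    "dc:date": "2013-06-27",
    "dc:type": "survey instrument",
    "dc:subject": ["nanotechnology", "ELSI", "public opinion"],
    "dc:rights": "CC BY",
    "dc:identifier": "doi:10.0000/example",  # placeholder, not a real DOI
}

for field, value in record.items():
    print(f"{field}: {value}")
```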

While my proposal did not get accepted (full disclosure), I do look forward to hearing more about the repository, although I notice there’s no mention made of archiving the materials.

The importance of repositories and archives was brought home to me when I came across a June 4, 2013 article by Glyn Moody for Techdirt about the Tiananmen Square incident and subtle and unsubtle ways of censoring access to information,

Today is June 4th, a day pretty much like any other day in most parts of the world. But in China, June 4th has a unique significance because of the events that took place in Tiananmen Square on that day in 1989.

Moody recounts some of the ways in which people have attempted to commemorate the day online while evading the authorities’ censorship efforts. Do check out the article for the inside scoop on why ‘Big Yellow Duck’ is a censored term. One of the more subtle censorship efforts provides some chills (from the Moody article),

… according to this article in the Wall Street Journal, it looks like the Chinese authorities are trying out a new tactic for handling this dangerous topic:

On Friday, a China Real Time search for “Tiananmen Incident” did not return the customary message from Sina informing the user that search results could not be displayed due to “relevant laws, regulations and policies.” Instead the search returned results about a separate Tiananmen incident that occurred on Tomb Sweeping Day in 1976, when Beijing residents flooded the area to protest after they were prevented from mourning the recently deceased Premiere [sic] Zhou Enlai.

This business of eliminating and substituting a traumatic and disturbing historical event with something less contentious reminded me both of the saying ‘history is written by the victors’ and of Luciana Duranti and her talk titled, Trust and Authenticity in the Digital Environment: An Increasingly Cloudy Issue, which took place in Vancouver (Canada) last year (mentioned in my May 18, 2012 posting).

Duranti raised many, many issues that most of us don’t consider when we blithely store information in the ‘cloud’ or create blogs that turn out to be repositories of a sort (and then don’t know what to do with them; ça c’est moi – that’s me). She also previewed a Sept. 26 – 28, 2013 conference to be hosted in Vancouver by UNESCO (United Nations Educational, Scientific, and Cultural Organization), “Memory of the World in the Digital Age: Digitization and Preservation.” (UNESCO’s Memory of the World programme hosts a number of these themed conferences and workshops.)

The Sept. 2013 UNESCO ‘memory of the world’ conference in Vancouver seems rather timely in retrospect. The Council of Canadian Academies (CCA) announced that Dr. Doug Owram would be chairing their Memory Institutions and the Digital Revolution assessment (mentioned in my Feb. 22, 2013 posting; scroll down 80% of the way) and, after checking recently, I noticed that the Expert Panel has been assembled and it includes Duranti. Here’s the assessment description from the CCA’s ‘memory institutions’ webpage,

Library and Archives Canada has asked the Council of Canadian Academies to assess how memory institutions, which include archives, libraries, museums, and other cultural institutions, can embrace the opportunities and challenges of the changing ways in which Canadians are communicating and working in the digital age.
Background

Over the past three decades, Canadians have seen a dramatic transformation in both personal and professional forms of communication due to new technologies. Where the early personal computer and word-processing systems were largely used and understood as extensions of the typewriter, advances in technology since the 1980s have enabled people to adopt different approaches to communicating and documenting their lives, culture, and work. Increased computing power, inexpensive electronic storage, and the widespread adoption of broadband computer networks have thrust methods of communication far ahead of our ability to grasp the implications of these advances.

These trends present both significant challenges and opportunities for traditional memory institutions as they work towards ensuring that valuable information is safeguarded and maintained for the long term and for the benefit of future generations. It requires that they keep track of new types of records that may be of future cultural significance, and of any changes in how decisions are being documented. As part of this assessment, the Council’s expert panel will examine the evidence as it relates to emerging trends, international best practices in archiving, and strengths and weaknesses in how Canada’s memory institutions are responding to these opportunities and challenges. Once complete, this assessment will provide an in-depth and balanced report that will support Library and Archives Canada and other memory institutions as they consider how best to manage and preserve the mass quantity of communications records generated as a result of new and emerging technologies.

The Council’s assessment is running concurrently with the Royal Society of Canada’s expert panel assessment on Libraries and Archives in 21st century Canada. Though similar in subject matter, these assessments have a different focus and follow a different process. The Council’s assessment is concerned foremost with opportunities and challenges for memory institutions as they adapt to a rapidly changing digital environment. In navigating these issues, the Council will draw on a highly qualified and multidisciplinary expert panel to undertake a rigorous assessment of the evidence and of significant international trends in policy and technology now underway. The final report will provide Canadians, policy-makers, and decision-makers with the evidence and information needed to consider policy directions. In contrast, the RSC panel focuses on the status and future of libraries and archives, and will draw upon a public engagement process.

Question

How might memory institutions embrace the opportunities and challenges posed by the changing ways in which Canadians are communicating and working in the digital age?

Sub-questions

  • With the use of new communication technologies, what types of records are being created and how are decisions being documented?
  • How is information being safeguarded for usefulness in the immediate to mid-term across technologies considering the major changes that are occurring?
  • How are memory institutions addressing issues posed by new technologies regarding their traditional roles in assigning value, respecting rights, and assuring authenticity and reliability?
  • How can memory institutions remain relevant as a trusted source of continuing information by taking advantage of the collaborative opportunities presented by new social media?

From the Expert Panel webpage (go there for all the links), here’s a complete listing of the experts,

Expert Panel on Memory Institutions and the Digital Revolution

Dr. Doug Owram, FRSC, Chair – Professor and Former Deputy Vice-Chancellor and Principal, University of British Columbia Okanagan Campus (Kelowna, BC)

Sebastian Chan – Director of Digital and Emerging Media, Smithsonian Cooper-Hewitt National Design Museum (New York, NY)

C. Colleen Cook – Trenholme Dean of Libraries, McGill University (Montréal, QC)

Luciana Duranti – Chair and Professor of Archival Studies, the School of Library, Archival and Information Studies at the University of British Columbia (Vancouver, BC)

Lesley Ellen Harris – Copyright Lawyer; Consultant, Author, and Educator; Owner, Copyrightlaws.com (Washington, D.C.)

Kate Hennessy – Assistant Professor, Simon Fraser University, School of Interactive Arts and Technology (Surrey, BC)

Kevin Kee – Associate Vice-President Research (Social Sciences and Humanities) and Canada Research Chair in Digital Humanities, Brock University (St. Catharines, ON)

Slavko Manojlovich – Associate University Librarian (Information Technology), Memorial University of Newfoundland (St. John’s, NL)

David Nostbakken – President/CEO of Nostbakken and Nostbakken, Inc. (N + N); Instructor of Strategic Communication and Social Entrepreneurship at the School of Journalism and Communication, Carleton University (Ottawa, ON)

George Oates – Art Director, Stamen Design (San Francisco, CA)

Seamus Ross – Dean and Professor, iSchool, University of Toronto (Toronto, ON)

Bill Waiser, SOM, FRSC – Professor of History and A.S. Morton Distinguished Research Chair, University of Saskatchewan (Saskatoon, SK)

Barry Wellman, FRSC – S.D. Clark Professor, Department of Sociology, University of Toronto (Toronto, ON)

I notice they have a lawyer whose specialty is copyright, Lesley Ellen Harris. I did check out her website, copyrightlaws.com, and could not find anything that hinted at strong opinions on the topic. She seems to feel that copyright is a good thing but how far she’d like to take it is a mystery to me, based on the blog postings I viewed.

I’ve also noticed that this panel has 13 people, four of whom are women, which equals a little more (June 5, 2013, 1:35 pm PDT: I substituted the word ‘more’ for the word ‘less’; my apologies for the arithmetic error) than 25% representation. That’s a surprising percentage given how heavily the fields of library and archival studies are weighted towards women.

I have meandered somewhat but my key points are these:

  • How are we going to keep information available? It’s all very well to have a repository, but how long will the data be kept there and where does it go afterwards?
  • There’s a bias, certainly with the NETS workshop and, likely, the CCA Expert Panel on Memory Institutions and the Digital Revolution, toward institutions as the source for information that’s worth keeping for however long or short a time that should be. What about individual efforts? e.g., Don’t Leave Canada Behind; FrogHeart; Techdirt; The Last Word on Nothing, and many other blogs?
  • The online redirection of Tiananmen Square incident queries is chilling, but I’ve often wondered what would happen if someone wanted to remove ‘objectionable material’ from an e-book, e.g., To Kill a Mockingbird. A new reader wouldn’t notice the loss if the material had been excised in a subtle or professional fashion (see the sketch after this list).
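As promised, here’s a sketch of the standard defence against that kind of silent excision: publish a cryptographic digest of the authentic text. The book text below is an invented stand-in; the point is that changing even one word changes the digest, so a reader holding a trusted digest can tell a copy has been altered.

```python
# Sketch: detecting silent alteration of a digital text with SHA-256.
import hashlib

def digest(text: str) -> str:
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

original = "It was the best of times, it was the worst of times."
tampered = "It was the best of times, it was the finest of times."  # subtle edit

print(digest(original))
print(digest(tampered))
print(digest(original) == digest(tampered))  # False: the copy has been altered
```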

As for how this has an impact on science, it’s been claimed that Isaac Newton attempted to excise Robert Hooke from history (my Jan. 19, 2012 posting). Whether it’s true or not, there is remarkably little about Robert Hooke despite his accomplishments, and his relative obscurity is a reminder that we must always take care to retain our memories.

ETA June 6, 2013: David Bruggeman added some more information and links about CHORUS in his June 5, 2013 post (On The Novelty Of Corporate-Government Partnership In STEM Education),

Before I dive into today’s post, a brief word about CHORUS. Thanks to commenter Joe Kraus for pointing me to this Inside Higher Ed post, which includes a link to the fact sheet CHORUS organizers distributed to reporters. While there are additional details, there are still not many details to sink one’s teeth in. And I remain surprised at the relative lack of attention the announcement has received. On a related note, nobody who’s been following open access should be surprised by Michael Eisen’s reaction to CHORUS.

I encourage you to check out David’s post as he provides some information about a new STEM (science, technology, engineering, mathematics) collaboration between the US National Science Foundation and companies such as GE and Intel.

Opening it all up (open software, Nature, and Naked Science)

I’m coming back to the ‘open access’ well this week, as there’ve been a few new developments since my massive May 28, 2012 posting on the topic.

A June 5, 2012 posting by Glyn Moody at the Techdirt website brought yet another aspect of ‘open access’ to my attention,

Computers need software, and some of that software will be specially written or adapted from existing code to meet the particular needs of the scientists’ work. This makes computer software a vital component of the scientific process. It also means that being able to check that code for errors is as important as being able to check the rest of the experiment’s methodology. And yet very rarely can other scientists do that, because the code employed is not made available.

That’s right, there’s open access scientific software.
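Moody’s point is easy to demonstrate. In the sketch below (an invented example), two functions both claim to compute a sample mean; only by reading the code would a reviewer spot that one of them silently drops an observation, which is exactly the kind of error that stays invisible when the code isn’t published.

```python
# Why published code matters: a subtle bug no methods section would reveal.
def mean_correct(values):
    return sum(values) / len(values)

def mean_buggy(values):
    # off-by-one: silently drops the last observation
    return sum(values[:-1]) / (len(values) - 1)

data = [2.0, 4.0, 6.0, 8.0, 10.0]
print(mean_correct(data))  # 6.0
print(mean_buggy(data))    # 5.0 -- a different result from the same data
```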

Meanwhile, over at the Guardian newspaper website, Philip Campbell, Nature journal’s editor-in-chief, notes that open access to research is inevitable, in a June 8, 2012 article by Alok Jha,

Open access to scientific research articles will “happen in the long run”, according to the editor-in-chief of Nature, one of the world’s premier scientific journals.

Philip Campbell said that the experience for readers and researchers of having research freely available is “very compelling”. But other academic publishers said that any large-scale transition to making research freely available had to take into account the value and investments they added to the scientific process.

“My personal belief is that that’s what’s going to happen in the long run,” said Campbell. However, he added that the case for open access was stronger for some disciplines, such as climate research, than others.

Campbell was speaking at a briefing hosted by the Science Media Centre. Interestingly, ScienceOnline Vancouver’s upcoming meeting about open access (titled Naked Science; Excuse me: your science is showing; June 12, 2012, mingling starts at 6:30 pm, panel discussion 7-9 pm PDT) features a speaker from Canada’s Science Media Centre (from the event page),

  1. Heather Piwowar is a postdoc with Duke University and the Dept of Zoology at UBC.  She’s a researcher on the NSF-funded DataONE and Dryad projects, studying data.  Specifically, how, when, and why do scientists publicly archive the datasets they collect?  When do they reuse the data of others?  What related policies and tools would help facilitate more efficient and effective use of data resources?  Heather is also a co-founder of total-impact, a web application that reveals traditional and non-traditional impact metrics of scholarly articles, datasets, software, slides, and blog posts.
  2. Heather Morrison is a Vancouver-based, well-known international open access advocate and practitioner of open scholarship, through her blogs The Imaginary Journal of Poetic Economics http://poeticeconomics.blogspot.com and her dissertation-blog http://pages.cmns.sfu.ca/heather-morrison/
  3. Lesley Evans Ogden is a freelance science journalist and the Vancouver media officer for the Science Media Centre of Canada. In the capacity of freelance journalist, she is a contributing science writer at Natural History magazine, and has written for a variety of publications including YES Mag, Scientific American (online), The Guardian, Canadian Running, and Bioscience. She has a PhD in wildlife ecology, and spent more than a decade slogging through mud and climbing mountains to study the breeding and winter ecology of migratory birds. She is also an alumna of the Science Communications program at the Banff Centre. (She will be speaking in the capacity of freelance journalist).
  4. Joy Kirchner is the Scholarly Communications Coordinator at the University of British Columbia, where she heads the University’s developing Copyright office in addition to the Scholarly Communications office based in the Library. Her role involves coordinating the University’s copyright education services, identifying recommended and sustainable service models to support scholarly communication activities on the campus, and coordinating formalized discussion and education of these issues with faculty, students, research, and publishing constituencies on the UBC campus. Joy has also been instrumental in working with faculty to host their open access journals through the Library’s open access journal hosting program; she was involved in the implementation and content recruitment of the Library’s open access institutional repository, and she was instrumental in establishing the Provost’s Scholarly Communications Steering Committee and associated working groups, where she sits as a key member of the Committee looking into, among other things, an open access position at UBC. Joy is also chair of UBC’s Copyright Advisory Committee and working groups. She is a faculty member with the Association of Research Libraries (ARL) / Association of College and Research Libraries (ACRL) Institute for Scholarly Communication; she assists with the coordination and program development of ACRL’s much lauded Scholarly Communications Road Show program; she is a Visiting Program Officer with ACRL in support of their scholarly communications programs; and she is a Fellow with ARL’s Research Library Leadership Fellows executive program (RLLF). Previous positions include Librarian for Collections, Licensing & Digital Scholarship (UBC), Electronic Resources Coordinator (Columbia Univ.), Medical & Allied Health Librarian, and Science & Engineering Librarian. She holds a BA and an MLIS from the University of British Columbia.

I’m starting to get the impression that there is a concerted communications effort taking place. Between this listing and the one in my May 28, 2012 posting, there are just too many articles and events occurring for it all to be pure chance.