Tag Archives: Elsevier

Tracking artificial intelligence

Researchers at Stanford University have developed an index for measuring and tracking the progress of artificial intelligence (AI), according to a January 9, 2018 news item on phys.org (Note: Links have been removed),

Since the term “artificial intelligence” (AI) was first used in print in 1956, the one-time science fiction fantasy has progressed to the very real prospect of driverless cars, smartphones that recognize complex spoken commands and computers that see.

In an effort to track the progress of this emerging field, a Stanford-led group of leading AI thinkers called the AI100 has launched an index that will provide a comprehensive baseline on the state of artificial intelligence and measure technological progress in the same way the gross domestic product and the S&P 500 index track the U.S. economy and the broader stock market.

For anyone curious about the AI100 initiative, I have a description of it in my Sept. 27, 2016 post highlighting the group’s first report or you can keep on reading.

Getting back to the matter at hand, a December 21, 2017 Stanford University press release by Andrew Myers, which originated the news item, provides more detail about the AI index,

“The AI100 effort realized that in order to supplement its regular review of AI, a more continuous set of collected metrics would be incredibly useful,” said Russ Altman, a professor of bioengineering and the faculty director of AI100. “We were very happy to seed the AI Index, which will inform the AI100 as we move forward.”

The AI100 was set in motion three years ago when Eric Horvitz, a Stanford alumnus and former president of the Association for the Advancement of Artificial Intelligence, worked with his wife, Mary Horvitz, to define and endow the long-term study. Its first report, released in the fall of 2016, sought to anticipate the likely effects of AI in an urban environment in the year 2030.

Among the key findings in the new index are a dramatic increase in AI startups and investment as well as significant improvements in the technology’s ability to mimic human performance.

Baseline metrics

The AI Index tracks and measures at least 18 independent vectors in academia, industry, open-source software and public interest, plus technical assessments of progress toward what the authors call “human-level performance” in areas such as speech recognition, question-answering and computer vision – algorithms that can identify objects and activities in 2D images. Specific metrics in the index include evaluations of academic papers published, course enrollment, AI-related startups, job openings, search-term frequency and media mentions, among others.

“In many ways, we are flying blind in our discussions about artificial intelligence and lack the data we need to credibly evaluate activity,” said Yoav Shoham, professor emeritus of computer science.

“The goal of the AI Index is to provide a fact-based measuring stick against which we can chart progress and fuel a deeper conversation about the future of the field,” Shoham said.

Shoham conceived of the index and assembled a steering committee including Ray Perrault from SRI International, Erik Brynjolfsson of the Massachusetts Institute of Technology and Jack Clark from OpenAI. The committee subsequently hired Calvin LeGassick as project manager.

“The AI Index will succeed only if it becomes a community effort,” Shoham said.

Although the authors say the AI Index is the first index to track either scientific or technological progress, there are many other non-financial indexes that provide valuable insight into equally hard-to-quantify fields. Examples include the Social Progress Index, the Middle East peace index and the Bangladesh empowerment index, which measure factors as wide-ranging as nutrition, sanitation, workload, leisure time, public sentiment and even public speaking opportunities.

Intriguing findings

Among the findings of this inaugural index is that the number of active AI startups has increased 14-fold since 2000. Venture capital investment has increased six times in the same period. In academia, publishing in AI has increased a similarly impressive nine times in the last 20 years while course enrollment has soared. Enrollment in the introductory AI-related machine learning course at Stanford, for instance, has grown 45-fold in the last 30 years.

In technical metrics, image and speech recognition are both approaching, if not surpassing, human-level performance. The authors noted that AI systems have excelled in such real-world applications as object detection, the ability to understand and answer questions, and classification of photographic images of skin cancer cells.

Shoham noted that the report is still very U.S.-centric and will need a greater international presence as well as a greater diversity of voices. He said he also sees opportunities to fold in government and corporate investment in addition to the venture capital funds that are currently included.

In terms of human-level performance, the AI Index suggests that in some ways AI has already arrived. This is true in game-playing applications including chess, the Jeopardy! game show and, most recently, the game of Go. Nonetheless, the authors note that computers continue to lag considerably in the ability to generalize specific information into deeper meaning.

“AI has made truly amazing strides in the past decade,” Shoham said, “but computers still can’t exhibit the common sense or the general intelligence of even a 5-year-old.”

The AI Index was made possible by funding from AI100, Google, Microsoft and Toutiao. Data supporting the various metrics were provided by Elsevier, TrendKite, Indeed.com, Monster.com, the Google Trends Team, the Google Brain Team, Sand Hill Econometrics, VentureSource, Crunchbase, Electronic Frontier Foundation, EuroMatrix, Geoff Sutcliffe, Kevin Leyton-Brown and Holger Hoose.

You can find the AI Index here. They’re featuring their 2017 report, but you can also find data (on the menu bar on the upper right side of your screen), along with a few provisos. I was curious as to whether any AI had been used to analyze the data and/or write the report. A very cursory look at the 2017 report did not answer that question. I’m fascinated by the failure to address what I think is an obvious question. It suggests that even very, very bright people can develop blind spots, and I suspect that’s why the group seems quite eager to get others involved. From the 2017 AI Index Report,

As the report’s limitations illustrate, the AI Index will always paint a partial picture. For this reason, we include subjective commentary from a cross-section of AI experts. This Expert Forum helps animate the story behind the data in the report and adds interpretation the report lacks.

Finally, where the experts’ dialogue ends, your opportunity to Get Involved begins [emphasis mine]. We will need the feedback and participation of a larger community to address the issues identified in this report, uncover issues we have omitted, and build a productive process for tracking activity and progress in Artificial Intelligence. (p. 8)

Unfortunately, it’s not clear how one becomes involved. Is there a forum or do you get in touch with one of the team leaders?

I wish them good luck with their project and imagine that these minor hiccups will be dealt with in the near term.

Using copyright to shut down easy access to scientific research

This started out as a simple post on copyright and publishers vis-à-vis Sci-Hub, but then John Dupuis wrote a think piece (with which I disagree somewhat) on the situation in a Feb. 22, 2016 posting on his blog, Confessions of a Science Librarian. More on Dupuis and my take on it after a description of the situation.

Sci-Hub

Before getting to the controversy and legal suit, here’s a preamble about the purpose of copyright as per the US constitution, from Mike Masnick’s Feb. 17, 2016 posting on Techdirt,

Lots of people are aware of the Constitutional underpinnings of our copyright system. Article 1, Section 8, Clause 8 famously says that Congress has the following power:

To promote the progress of science and useful arts, by securing for limited times to authors and inventors the exclusive right to their respective writings and discoveries.

We’ve argued at great length over the importance of the preamble of that section, “to promote the progress,” but many people are confused about the terms “science” and “useful arts.” In fact, many people not well-versed in the issue often get the two backwards and think that “science” refers to inventions, and thus enables a patent system, while “useful arts” refers to “artistic works” and thus enables the copyright system. The opposite is actually the case. “Science” at the time the Constitution was written was actually synonymous with “learning” and “education” (while “useful arts” was a term meaning invention and new productivity tools).

While over the centuries, many who stood to benefit from an aggressive system of copyright control have tried to rewrite, whitewash or simply ignore this history, turning the copyright system falsely into a “property” regime, the fact is that it was always intended as a system to encourage the wider dissemination of ideas for the purpose of education and learning. The (potentially misguided) intent appeared to be that by granting exclusive rights to a certain limited class of works, it would encourage the creation of those works, which would then be useful in educating the public (and within a few decades enter the public domain).

Masnick’s preamble leads to a case in which Elsevier, the academic publisher, has attempted to halt the very successful Sci-Hub, which bills itself as “the first pirate website in the world to provide mass and public access to tens of millions of research papers.” From Masnick’s Feb. 17, 2016 posting,

Rightfully, this is being celebrated as a massive boon to science and learning, making these otherwise hidden nuggets of knowledge and science that were previously locked up and hidden away available to just about anyone. And, to be clear, this absolutely fits with the original intent of copyright law — which was to encourage such learning. In a very large number of cases, it is not the creators of this content and knowledge who want the information to be locked up. Many researchers and academics know that their research has much more of an impact the wider it is seen, read, shared and built upon. But the gatekeepers — such as Elsevier and other large academic publishers — have stepped in and demanded copyright, basically for doing very little.

They do not pay the researchers for their work. Often, in fact, that work is funded by taxpayer funds. In some cases, in certain fields, the publishers actually demand that the authors of these papers pay to submit them. The journals do not pay to review the papers either. They outsource that work to other academics for “peer review” — which again, is unpaid. Finally, these publishers profit massively, having convinced many universities that they need to subscribe, often paying many tens or even hundreds of thousands of dollars for subscriptions to journals that very few actually read.

Simon Oxenham of the Neurobonkers blog on the Big Think website wrote a Feb. 9 (?), 2016 post about Sci-Hub, its originator, and its current legal fight (Note: Links have been removed),

On September 5th, 2011, Alexandra Elbakyan, a researcher from Kazakhstan, created Sci-Hub, a website that bypasses journal paywalls, illegally providing access to nearly every scientific paper ever published immediately to anyone who wants it. …

This was a game changer. Before September 2011, there was no way for people to freely access paywalled research en masse; researchers like Elbakyan were out in the cold. Sci-Hub is the first website to offer this service and now makes the process as simple as the click of a single button.

As the number of papers in the LibGen database expands, the frequency with which Sci-Hub has to dip into publishers’ repositories falls and consequently the risk of Sci-Hub triggering its alarm bells becomes ever smaller. Elbakyan explains, “We have already downloaded most paywalled articles to the library … we have almost everything!” This may well be no exaggeration. Elsevier, one of the most prolific and controversial scientific publishers in the world, recently alleged in court that Sci-Hub is currently harvesting Elsevier content at a rate of thousands of papers per day. Elbakyan puts the number of papers downloaded from various publishers through Sci-Hub in the range of hundreds of thousands per day, delivered to a running total of over 19 million visitors.

In one fell swoop, a network has been created that likely has a greater level of access to science than any individual university, or even government for that matter, anywhere in the world. Sci-Hub represents the sum of countless different universities’ institutional access — literally a world of knowledge. This is important now more than ever in a world where even Harvard University can no longer afford to pay skyrocketing academic journal subscription fees, while Cornell axed many of its Elsevier subscriptions over a decade ago. For researchers outside the US’ and Western Europe’s richest institutions, routine piracy has long been the only way to conduct science, but increasingly the problem of unaffordable journals is coming closer to home.

… This was the experience of Elbakyan herself, who studied in Kazakhstan University and just like other students in countries where journal subscriptions are unaffordable for institutions, was forced to pirate research in order to complete her studies. Elbakyan told me, “Prices are very high, and that made it impossible to obtain papers by purchasing. You need to read many papers for research, and when each paper costs about 30 dollars, that is impossible.”

Sci-Hub is not expected to win its case in the US, where one judge has already ordered a preliminary injunction making its former domain unavailable. (Sci-Hub has since moved.) But should you be feeling sympathetic to Elsevier, you may want to take this into account (Note: Links have been removed),

Elsevier is the world’s largest academic publisher and by far the most controversial. Over 15,000 researchers have vowed to boycott the publisher for charging “exorbitantly high prices” and bundling expensive, unwanted journals with essential journals, a practice that allegedly is bankrupting university libraries. Elsevier also supports SOPA and PIPA, which the researchers claim threaten to restrict the free exchange of information. Elsevier is perhaps most notorious for delivering takedown notices to academics, demanding that they take their own research published with Elsevier off websites like Academia.edu.

The movement against Elsevier has only gathered speed over the course of the last year with the resignation of 31 editorial board members from the Elsevier journal Lingua, who left in protest to set up their own open-access journal, Glossa. Now the battleground has moved from the comparatively niche field of linguistics to the far larger field of cognitive sciences. Last month, a petition of over 1,500 cognitive science researchers called on the editors of the Elsevier journal Cognition to demand Elsevier offer “fair open access”. Elsevier currently charges researchers $2,150 per article if researchers wish their work published in Cognition to be accessible by the public, a sum far higher than the charges that led to the Lingua mutiny.

In her letter to Sweet [New York District Court Judge Robert W. Sweet], Elbakyan made a point that will likely come as a shock to many outside the academic community: Researchers and universities don’t earn a single penny from the fees charged by publishers [emphasis mine] such as Elsevier for accepting their work, while Elsevier has an annual income over a billion U.S. dollars.

As Masnick noted, much of this research is done on the public dime (i.e., funded by taxpayers). For her part, Elbakyan has written a letter defending her actions on ethical rather than legal grounds.

I recommend reading the Oxenham article as it provides details about how the site works and includes text from the letter Elbakyan wrote.  For those who don’t have much time, Masnick’s post offers a good précis.

Sci-Hub suit as a distraction from the real issues?

Getting to Dupuis’ Feb. 22, 2016 posting and his perspective on the situation,

My take? Mostly that it’s a sideshow.

One aspect that I have ranted about on Twitter which I think is worth mentioning explicitly is that I think Elsevier and all the other big publishers are actually quite happy to feed the social media rage machine with these whack-a-mole controversies. The controversies act as a sideshow, distracting from the real issues and solutions that they would prefer all of us not to think about.

By whack-a-mole controversies I mean this recurring story of some person or company or group that wants to “free” scholarly articles and then gets sued or harassed by the big publishers or their proxies to force them to shut down. This provokes wide outrage and condemnation aimed at the publishers, especially Elsevier who is reserved a special place in hell according to most advocates of openness (myself included).

In other words: Elsevier and its ilk are thrilled to be the target of all the outrage. Focusing on the whack-a-mole game distracts us from fixing the real problem: the entrenched systems of prestige, incentive and funding in academia. As long as researchers are channelled into “high impact” journals, as long as tenure committees reward publishing in closed rather than open venues, nothing will really change. Until funders get serious about mandating true open access publishing and are willing to put their money where their intentions are, nothing will change. Or at least, progress will be mostly limited to surface victories rather than systemic change.

I think Dupuis is referencing a conflict theory (I can’t remember what it’s called) which suggests that certain types of conflicts help to keep systems in place while apparently attacking those systems. His point is well made but I disagree somewhat in that I think these conflicts can also raise awareness and activate people who might otherwise ignore or mindlessly comply with those systems. So, if Elsevier and the other publishers are using these legal suits as diversionary tactics, they may find they’ve made a strategic error.

ETA April 29, 2016: Sci-Hub does seem to move around so I’ve updated the links so it can be accessed but Sci-Hub’s situation can change at any moment.

Cell Press and its first ever science writing internships

Cell Press is offering three rounds of internships. I believe the first round has ended but there are opportunities to enter the second round, from the Cell Press Newsroom webpage,

Science Writing Internship @ Cell Press

In 2016, the Press Office of Cell Press is offering its first science writing internship program. Three paid positions will be available:

+ Winter (16 weeks, Feb-May, M-F, $15/hr) for grads/post-grads
+ Summer (10 weeks, June-August, M-Th, $12/hr) for undergrads; recent college graduates are also eligible
+ Fall (16 weeks, September-December, M-F, $15/hr) for grads/post-grads

The internships will be extremely hands-on, giving interns the full experience of being a press officer at a major publishing operation. In addition to public relations experience, interns will also be assigned journalism-type pieces to be published on Cell.com and in the print issues of Cell. Interns will also learn about the entire production process of how a scientific paper goes from the laboratory to a story in a major media outlet and have the opportunity to collaborate with other business teams, including marketing, commercial sales, editorial, and production.

Summer internship application available in March
Finalists will be asked to take a short editorial test and to provide three writing samples, and the contact information for two references.

Summer Internship
Meant for an individual who is looking to explore science communications as a career. Experience not necessary, just a proven interest in writing/public relations/science.

The undergraduate Science Writing Intern will report to Media Relations Manager Joseph Caputo and will be located in the Elsevier Cambridge, MA office. This will be a 10-week internship over the summer of 2016. The internship will be 4 days per week, Monday through Thursday, 9-5, and will be paid at an hourly rate of $12. The internship spans Monday, June 6, 2016 – Thursday, August 11, 2016.

The Internship will provide the intern with 10+ clips including press releases, news blurbs, blog posts, and original reporting. Tasks will include:

+ Responding to inquiries in the press inbox.
+ Writing press releases about research published in Cell Press journals, distributing press releases, and pitching to relevant journalists.
+ Pitching and developing Cell Press news, CrossTalk blog, podcast, and Elsevier Connect content.
+ Developing and posting Cell Press social media content.
+ Completing miscellaneous projects as assigned by Media Relations Manager.

At the end of the internship, the intern should add to their working knowledge of how to strategize, develop, and execute PR campaigns for various audience segments, write compelling PR content for the web/social media, and measure and analyze campaign outcomes.

Qualifications

The ideal candidate for the internship will:

+ Be studying for or have completed a Bachelor’s in Public Relations, Journalism, or Biology.
+ Have experience preparing and telling a story (specifically pitching, conducting interviews, and writing pieces of journalism or public relations materials).
+ Have proficiency with Microsoft Office (Word, Excel, Outlook).
+ Have work experience within an office environment.

+ Be comfortable working alone as well as with a team, know how to juggle many time-sensitive tasks, be able to proactively seek information to complete a project, and maintain a friendly attitude while dealing with the high number of requests received from journalists and institutions from around the world.

Internship Position and Timing

Location: Cell Press’s office at 50 Hampshire Street, Cambridge, MA. No housing or relocation assistance will be provided.
Timing: Start date June 6th, end date August 11th.
Hours/Schedule: 7 hours per day, 4 days per week, Monday through Thursday; 9 a.m. to 5 p.m.
Internship Supervisor: Joseph Caputo, Media Relations Manager
Remuneration: Paid – $12/hr – Contractor

No permanent position is available at the end of the internship, although candidates will be considered for available positions should they apply and performance/circumstances warrant it.

In case you missed it in that welter of information, an application for the second round will be available in March 2016. I imagine you could use the following contact information, although they don’t seem to encourage questions,

Joseph Caputo
Media Relations Manager
Phone: +1 (617) 397-2802
Cambridge, MA, USA
E-mail: press@cell.com; jcaputo@cell.com

There is no word yet as to when the third and final round will be opened up but it is intended for graduate students.

Green chemistry and zinc oxide nanoparticles from Iran (plus some unhappy scoop about Elsevier and access)

It’s been a while since I’ve featured any research from Iran, partly because I find the information disappointingly scant. While the Dec. 22, 2013 news item on Nanowerk doesn’t provide quite as much detail as I’d like, it does shine a light on an aspect of Iranian nanotechnology research that I haven’t previously encountered, green chemistry (Note: A link has been removed),

Researchers used a simple and eco-friendly method to produce homogenous zinc oxide (ZnO) nanoparticles with various applications in medical industries due to their photocatalytic and antibacterial properties (“Sol–gel synthesis, characterization, and neurotoxicity effect of zinc oxide nanoparticles using gum tragacanth”).

Zinc oxide nanoparticles have numerous applications, among which mention can be made of photocatalytic issues, piezoelectric devices, synthesis of pigments, chemical sensors, drug carriers in targeted drug delivery, and the production of cosmetics such as sunscreen lotions.

The Dec. 22, 2013 Iran Nanotechnology Initiative Council (INIC) news release, which originated the news item, provides a bit more detail (Note: Links have been removed),

By using natural materials found in the geography of Iran and through sol-gel technique, the researchers synthesized zinc oxide nanoparticles in various sizes. To this end, they used zinc nitrate hexahydrate and gum tragacanth obtained from the Northern parts of Khorassan Razavi Province as the zinc-providing source and the agent to control the size of particles in aqueous solution, respectively.

Among the most important characteristics of the synthesis method, mention can be made of its simplicity, the use of cost-effective materials, conservation of green chemistry principles to prevent the use of materials hazardous to human safety and the environment, production of nanoparticles in homogeneous size and with high efficiency, and most important of all, the use of native materials that are only found in Iran and its introduction to the world.

Here’s a link to and a citation for the paper,

Sol–gel synthesis, characterization, and neurotoxicity effect of zinc oxide nanoparticles using gum tragacanth by Majid Darroudi, Zahra Sabouri, Reza Kazemi Oskuee, Ali Khorsand Zak, Hadi Kargar, and Mohamad Hasnul Naim Abd Hamid. Ceramics International, Volume 39, Issue 8, December 2013, Pages 9195–9199

There’s a bit more technical information in the paper’s abstract,

The use of plant extract in the synthesis of nanomaterials can be a cost effective and eco-friendly approach. In this work we report the “green” and biosynthesis of zinc oxide nanoparticles (ZnO-NPs) using gum tragacanth. Spherical ZnO-NPs were synthesized at different calcination temperatures. Transmission electron microscopy (TEM) imaging showed that most of the nanoparticles formed were in the size range below 50 nm. The powder X-ray diffraction (PXRD) analysis revealed wurtzite hexagonal ZnO with preferential orientation in the (101) reflection plane. In vitro cytotoxicity studies on neuro2A cells showed a dose-dependent toxicity, with a non-toxic effect at concentrations below 2 µg/mL. The synthesized ZnO-NPs using gum tragacanth were found to be comparable to those obtained from conventional reduction methods using hazardous polymers or surfactants, and this method can be an excellent alternative for the synthesis of ZnO-NPs using biomaterials.

I was not able to find the DOI (digital object identifier) and this paper is behind a paywall.

Elsevier and access

On a final note, Elsevier, the company that publishes Ceramics International and many other journals, is arousing some ire with what appear to be its latest policies concerning access, according to a Dec. 20, 2013 posting by Mike Masnick for Techdirt (Note: Links have been removed),

We just recently wrote about the terrible anti-science/anti-knowledge/anti-learning decision by publishing giant Elsevier to demand that Academia.edu take down copies of journal articles that were submitted directly by the authors, as Elsevier wished to lock all that knowledge (much of it taxpayer funded) in its ridiculously expensive journals. Mike Taylor now alerts us that Elsevier is actually going even further in its war on access to knowledge. Some might argue that Elsevier was okay in going after a “central repository” like Academia.edu, but at least it wasn’t going directly after academics who were posting pdfs of their own research on their own websites. While some more enlightened publishers explicitly allow this, many (including Elsevier) technically do not allow it, but have always looked the other way when authors post their own papers.

That’s now changed. As Taylor highlights, the University of Calgary sent a letter to its staff saying that a company “representing” Elsevier, was demanding that they take down all such articles on the University’s network.

While I do feature the topic of open access and other issues with intellectual property from time to time, Masnick and his colleagues are far more intimately familiar with these issues (albeit firmly committed to open access), should you choose to read his Dec. 20, 2013 posting in its entirety.

Opening up Open Access: European Union, UK, Argentina, US, and Vancouver (Canada)

There is a furor growing internationally and it’s all about open access. It ranges from a petition in the US, to a comprehensive ‘open access’ project from the European Union, to a decision in the Argentinian Legislature, to a speech from David Willetts, UK Minister of State for Universities and Science, to an upcoming meeting in June 2012 being held in Vancouver (Canada).

As this goes forward, I’ll try to be clear as to which kind of open access I’m discussing: open access publication (access to published research papers), open access data (access to research data), and/or both.

The European Commission has adopted a comprehensive approach to giving easy, open access to research funded through the European Union under the auspices of the current 7th Framework Programme and the upcoming Horizon 2020 (or what would have been called the 8th Framework Programme under the old system), according to the May 9, 2012 news item on Nanowerk,

To make it easier for EU-funded projects to make their findings public and more readily accessible, the Commission is funding, through FP7, the project ‘Open access infrastructure for research in Europe’ ( OpenAIRE). This ambitious project will provide a single access point to all the open access publications produced by FP7 projects during the course of the Seventh Framework Programme.

OpenAIRE is a repository network and is based on a technology developed in an earlier project called Driver. The Driver engine trawled through existing open access repositories of universities, research institutions and a growing number of open access publishers. It would index all these publications and provide a single point of entry for individuals, businesses or other scientists to search a comprehensive collection of open access resources. Today Driver boasts an impressive catalogue of almost six million publications taken from 327 open access repositories from across Europe and beyond.

OpenAIRE uses the same underlying technology to index FP7 publications and results. FP7 project participants are encouraged to publish their papers, reports and conference presentations to their institutional open access repositories. The OpenAIRE engine constantly trawls these repositories to identify and index any publications related to FP7-funded projects. Working closely with the European Commission’s own databases, OpenAIRE matches publications to their respective FP7 grants and projects providing a seamless link between these previously separate data sets.

OpenAIRE is also linked to CERN’s open access repository for ‘orphan’ publications. Any FP7 participants that do not have access to their own institutional repository can still submit open access publications by placing them in the CERN repository.
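For readers who want a feel for the machinery, here is a minimal sketch (in Python) of the kind of harvesting step described above. It assumes the repositories expose the standard OAI-PMH interface, which the news item does not actually name, so treat the protocol choice and the endpoint URL as illustrative assumptions rather than a description of OpenAIRE’s actual code,

# A sketch only: fetch one page of Dublin Core records from a hypothetical
# OAI-PMH endpoint and print the titles, roughly what a harvester such as
# Driver or OpenAIRE does continuously and at much larger scale.
import requests
import xml.etree.ElementTree as ET

DC = "{http://purl.org/dc/elements/1.1/}"

def list_titles(base_url):
    """Fetch a single ListRecords page and return the record titles."""
    response = requests.get(
        base_url,
        params={"verb": "ListRecords", "metadataPrefix": "oai_dc"},
        timeout=30,
    )
    response.raise_for_status()
    root = ET.fromstring(response.content)
    return [title.text for title in root.iter(DC + "title")]

# Hypothetical repository endpoint; any OAI-PMH-compliant repository would do.
for title in list_titles("https://repository.example.org/oai"):
    print(title)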

Here’s why I described this project as comprehensive, from the May 9, 2012 news item,

‘OpenAIRE is not just about developing new technologies,’ notes Ms Manola [Natalia Manola, the project’s manager], ‘because a significant part of the project focuses on promoting open access in the FP7 community. We are committed to promotional and policy-related activities, advocating open access publishing so projects can fully contribute to Europe’s knowledge infrastructure.’

The project is collecting usage statistics of the portal and the volume of open access publications. It will provide this information to the Commission and use this data to inform European policy in this domain.

OpenAIRE is working closely to integrate its information with the CORDA database, the master database of all EU-funded research projects. Soon it should be possible to click on a project in CORDIS (the EU’s portal for research funding), for example, and access all the open access papers published by that project. Project websites will also be able to provide links to the project’s peer reviewed publications and make dissemination of papers virtually effortless.

The project participants are also working with EU Members to develop a European-wide ‘open access helpdesk’ which will answer researchers’ questions about open access publishing and coordinate the open access initiatives currently taking place in different countries. The helpdesk will build up relationships and identify additional open access repositories to add to the OpenAIRE network.

Meanwhile, there’s been a discussion on the UK’s Guardian newspaper website about an ‘open access’ issue, namely money, in a May 9, 2012 posting by John Bynner,

The present academic publishing system obstructs the free communication of research findings. By erecting paywalls, commercial publishers prevent scientists from downloading research papers unless they pay substantial fees. Libraries similarly pay huge amounts (up to £1m or more per annum) to give their readers access to online journals.

There is general agreement that free and open access to scientific knowledge is desirable. The way this might be achieved has come to the fore in recent debates about the future of scientific and scholarly journals.

Our concern lies with the major proposed alternative to the current system. Under this arrangement, authors are expected to pay when they submit papers for publication in online journals: the so called “article processing cost” (APC). The fee can amount to anything between £1,000 and £2,000 per article, depending on the reputation of the journal. Although the fees may sometimes be waived, eligibility for exemption is decided by the publisher and such concessions have no permanent status and can always be withdrawn or modified.

A major problem with the APC model is that it effectively shifts the costs of academic publishing from the reader to the author and therefore discriminates against those without access to the funds needed to meet these costs. [emphasis mine] Among those excluded are academics in, for example, the humanities and the social sciences whose research funding typically does not include publication charges, and independent researchers whose only means of paying the APC is from their own pockets. Academics in developing countries in particular face discrimination under APC because of their often very limited access to research funds.

There is another approach that could be implemented for a fraction of the cost of commercial publishers’ current journal subscriptions. “Access for all” (AFA) journals, which charge neither author nor reader, are committed to meeting publishing costs in other ways.

Bynner offers a practical solution: get the libraries to pay their subscription fees to an AFA journal, thereby funding ‘access for all’.

The open access discussion in the UK hasn’t stopped with a few posts in the Guardian; there’s also support from the government. David Willetts, in a May 2, 2012 speech to the UK Publishers Association Annual General Meeting, had this to say, from the UK’s Dept. for Business Innovation and Skills website,

I realise this move to open access presents a challenge and opportunity for your industry, as you have historically received funding by charging for access to a publication. Nevertheless that funding model is surely going to have to change even beyond the positive transition to open access and hybrid journals that’s already underway. To try to preserve the old model is the wrong battle to fight. Look at how the music industry lost out by trying to criminalise a generation of young people for file sharing. [emphasis mine] It was companies outside the music business such as Spotify and Apple, with iTunes, that worked out a viable business model for access to music over the web. None of us want to see that fate overtake the publishing industry.

Wider access is the way forward. I understand the publishing industry is currently considering offering free public access to scholarly journals at all UK public libraries. This is a very useful way of extending access: it would be good for our libraries too, and I welcome it.

It would be deeply irresponsible to get rid of one business model and not put anything in its place. That is why I hosted a roundtable at BIS in March last year when all the key players discussed these issues. There was a genuine willingness to work together. As a result I commissioned Dame Janet Finch to chair an independent group of experts to investigate the issues and report back. We are grateful to the Publishers Association for playing a constructive role in her exercise, and we look forward to receiving her report in the next few weeks. No decisions will be taken until we have had the opportunity to consider it. But perhaps today I can share with you some provisional thoughts about where we are heading.

The crucial options are, as you know, called green and gold. Green means publishers are required to make research openly accessible within an agreed embargo period. This prompts a simple question: if an author’s manuscript is publicly available immediately, why should any library pay for a subscription to the version of record of any publisher’s journal? If you do not believe there is any added value in academic publishing you may view this with equanimity. But I believe that academic publishing does add value. So, in determining the embargo period, it’s necessary to strike a suitable balance between enabling revenue generation for publishers via subscriptions and providing public access to publicly funded information. In contrast, gold means that research funding includes the costs of immediate open publication, thereby allowing for full and immediate open access while still providing revenue to publishers.

In a May 22, 2012 posting at the Guardian website, Mike Taylor offers some astonishing figures (I had no idea academic publishing has been quite so lucrative) and notes that the funders have been a driving force in this ‘open access’ movement (Note: I have removed links from the excerpt),

The situation again, in short: governments and charities fund research; academics do the work, write and illustrate the papers, peer-review and edit each others’ manuscripts; then they sign copyright over to profiteering corporations who put it behind paywalls and sell research back to the public who funded it and the researchers who created it. In doing so, these corporations make grotesque profits of 32%-42% of revenue – far more than, say, Apple’s 24% or Penguin Books’ 10%. [emphasis mine]

… But what makes this story different from hundreds of other cases of commercial exploitation is that it seems to be headed for a happy ending. That’s taken some of us by surprise, because we thought the publishers held all the cards. Academics tend to be conservative, and often favour publishing their work in established paywalled journals rather than newer open access venues.

The missing factor in this equation is the funders. Governments and charitable trusts that pay academics to carry out research naturally want the results to have the greatest possible effect. That means publishing those results openly, free for anyone to use.

Taylor also goes on to mention the ongoing ‘open access’ petition in the US,

There is a feeling that the [US] administration fully understands the value of open access, and that a strong demonstration of public concern could be all it takes now to goad it into action before the November election. To that end a Whitehouse.gov petition has been set up urging Obama to “act now to implement open access policies for all federal agencies that fund scientific research”. Such policies would bring the US in line with the UK and Europe.

The people behind the US campaign have produced a video,

Anyone wondering about the reference to Elsevier may want to check out Thomas Lin’s Feb. 13, 2012 article for the New York Times,

More than 5,700 researchers have joined a boycott of Elsevier, a leading publisher of science journals, in a growing furor over open access to the fruits of scientific research.

You can find out more about the boycott and the White House petition at the Cost of Knowledge website.

Meanwhile, Canadians are being encouraged to sign the petition (by June 19, 2012), according to the folks over at ScienceOnline Vancouver in a description of their June 12, 2012 event, Naked Science: Excuse me, your science is showing (a cheap, cheesy, and attention-getting title; why didn’t I think of it first?),

Exposed. Transparent. Nude. All adjectives that should describe access to scientific journal articles, but currently, that’s not the case. The research paid by our Canadian taxpayer dollars is locked behind doors. The only way to access these articles is money, and lots of it!

Right now research articles cost more than a book! About $30. Only people with university affiliations have access, and only to journals their libraries subscribe to. Moms, dads, sisters, brothers, journalists, students, scientists, all pay for research, yet they can’t read the articles about their research without paying for it again. Now that doesn’t make sense.

….

There is also a petition going around that states that research paid for by US taxpayer dollars should be available for free to US taxpayers (and others!) on the internet. Don’t worry if you are a Canadian citizen; by signing this petition, Canadians would get access to the US research too and it would help convince the Canadian government to adopt similar rules. [emphasis mine]

Here’s where you can go to sign the petition. As to whether this will encourage the Canadian government to adopt an open access philosophy, I do not know. On the one hand, the government has opened up access to data, notably Statistics Canada data, as mentioned by Frances Woolley in her March 22, 2012 posting about that and other open access data initiatives by the Canadian government on the Globe and Mail blog,

The federal government is taking steps to build the country’s data infrastructure. Last year saw the launch of the open data pilot project, data.gc.ca. Earlier this year the paywall in front of Statistics Canada’s enormous CANSIM database was taken down. The National Research Council, together with University of Guelph and Carleton University, has a new data registration service, DataCite, which allows Canadian researchers to give their data permanent names in the form of digital object identifiers. In the long run, these projects should, as the press releases claim, “support innovation”, “add value-for-money for Canadians,” and promote “the reuse of existing data in commercial applications.”

That seems promising but there is a countervailing force. The Canadian government has also begun to charge subscription fees for journals that were formerly free. From the March 8, 2011 posting by Emily Chung on the CBC’s (Canadian Broadcasting Corporation) Quirks and Quarks blog,

The public has lost free online access to more than a dozen Canadian science journals as a result of the privatization of the National Research Council’s government-owned publishing arm.

Scientists, businesses, consultants, political aides and other people who want to read about new scientific discoveries in the 17 journals published by National Research Council Research Press now either have to pay $10 per article or get access through an institution that has an annual subscription.

It caused no great concern at the time,

Victoria Arbour, a University of Alberta graduate student, published her research in the Canadian Journal of Earth Sciences, one of the Canadian Science Publishing journals, both before and after it was privatized. She said it “definitely is too bad” that her new articles won’t be available to Canadians free online.

“It would have been really nice,” she said. But she said most journals aren’t open access, and the quality of the journal is a bigger concern than open access when choosing where to publish.

Then, there’s this from the new publisher, Canadian Science Publishing,

Cameron Macdonald, executive director of Canadian Science Publishing, said the impact of the change in access is “very little” on the average scientist across Canada because subscriptions have been purchased by many universities, federal science departments and scientific societies.

“I think the vast majority of researchers weren’t all that concerned,” he said. “So long as the journals continued with the same mission and mandate, they were fine with that.”

Macdonald said the journals were never strictly open access, as online access was free only inside Canadian borders and only since 2002.

So, journals that offered open access to research funded by Canadian taxpayers (to Canadians only) are now behind paywalls. Chung’s posting notes the problem already mentioned in the UK Guardian postings: money,

“It’s pretty prohibitively expensive to make things open access, I find,” she [Victoria Arbour] said.

Weir [Leslie Weir, chief librarian at the University of Ottawa] said more and more open-access journals need to impose author fees to stay afloat nowadays.

Meanwhile, the cost of electronic subscriptions to research journals has been ballooning as library budgets remain frozen, she said.

So far, no one has come up with a solution to the problem. [emphasis mine]

It seems they have designed a solution in the UK, as noted in John Bynner’s posting; perhaps we could try it out here.

Before I finish up, I should get to the situation in Argentina, from the May 27, 2012 posting on the Pasco Phronesis (David Bruggeman) blog (Note: I have removed a link in the following),

The lower house of the Argentinian legislature has approved a bill (en Español) that would require research results funded by the government be placed in institutional repositories once published. There would be exceptions for studies involving confidential information and the law is not intended to undercut intellectual property or patent rights connected to research. Additionally, primary research data must be published within 5 years of their collection. This last point, as far as I can tell, would be new ground for national open access policies, depending on how quickly the U.S. and U.K. may act on this issue.

Argentina steals a march on everyone by offering open access publication and open access data, within certain, reasonable constraints.

Getting back to David’s May 27, 2012 posting, he also offers some information on the European Union situation and some thoughts on science policy in Egypt.

I have long been interested in open access publication as I feel it’s infuriating to be denied access to research that one has paid for in tax dollars. I have written on the topic before in my Beethoven inspires Open Research (Nov. 18, 2011 posting) and Princeton goes Open Access; arXiv is 10 years old (Sept. 30, 2011 posting) and elsewhere.

ETA May 28, 2012: I found this NRC Research Press website for the NRC journals and it states,

We are pleased to announce that Canadians can enjoy free access to over 100 000 back files of NRC Research Press journals, dating back to 1951. Access to material in these journals published after December 31, 2010, is available to Canadians through subscribing universities across Canada as well as the major federal science departments.

Concerned readers and authors whose institutes have not subscribed for the 2012 volume year can speak to their university librarians or can contact us to subscribe directly.

It’s good to see Canadians still have some access, although personally, I do prefer to read recent research.

ETA May 29, 2012: Yikes, I think this is one of the longest posts ever, and I’m going to add this information about libre redistribution and data mining as they relate to open access in an attempt to cover the topic as fully as possible in one posting.

First, here’s an excerpt from Ross Mounce’s May 28, 2012 posting on the Palaeophylophenomics blog about ‘Libre redistribution’ (Note: I have removed a link),

I predict that the rights to electronically redistribute, and machine-read research will be vital for 21st century research – yet currently we academics often wittingly or otherwise relinquish these rights to publishers. This has got to stop. The world is networked, thus scholarly literature should move with the times and be openly networked too.

To better understand the notion of ‘libre redistribution’ you’ll want to read more of Mounce’s comments, but you might also want to check out Cameron Neylon’s comments in his March 6, 2012 posting on the Science in the Open blog,

Centralised control, failure to appreciate scale, and failure to understand the necessity of distribution and distributed systems. I have with me a device capable of holding the text of perhaps 100,000 papers. It also has the processor power to mine that text. It is my phone. In 2-3 years our phones, hell our watches, will have the capacity to not only hold the world’s literature but also to mine it, in context for what I want right now. Is Bob Campbell ready for every researcher, indeed every interested person in the world, to come into his office and discuss an agreement for text mining? Because the mining I want to do and the mining that Peter Murray-Rust wants to do will be different, and what I will want to do tomorrow is different to what I want to do today. This kind of personalised mining is going to be the accepted norm of handling information online very soon and will be at the very centre of how we discover the information we need.

This moves the discussion past access (taxpayers not seeing the research they’ve funded, researchers who don’t have subscriptions, libraries that don’t have subscriptions, etc.) to what happens when you can get access freely. It opens up new ways of doing research by means of text mining, data mining, and the redistribution of both.

Cell biology journal conceptualizes science papers’ content with multimedia for a combined print and online experience

Strictly speaking this isn’t visualizing data and scientific information (which I’ve mentioned before) so much as it is augmenting it. The biology journal Cell is now including online multimedia components that can be accessed only via a QR code in the journal’s hardcopy version. From the May 26, 2011 news item on physorg.com,

On May 27th the top cell biology journal, Cell, will publish its latest issue with multimedia components directly attached to the print version. The issue uses QR code technology to connect readers to the journal’s multimedia formats online thereby improving the conceptualization of a paper’s scientific content and enhancing the reader’s overall experience.

Readers of the hardcopy issue who take advantage of the code will experience an author-narrated walk through a paper’s figures. In all, the issue will use QR codes to include seventeen “hidden treasures” for readers to discover. Readers can simply scan the QR codes with a smart phone or tablet to uncover animated figures, interviews, videos, and more. The multimedia formats offered by Cell include: Podcasts, Paperclips, PaperFlicks, and Enhanced Snapshots. Even the journal’s cover shows a simple QR code which allows readers of the hardcopy issue to see an animated cover.
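For anyone curious about the mechanics, generating one of those codes is the easy part. Here’s a minimal sketch using the third-party Python package qrcode (installed with pip install qrcode[pil]); the URL is hypothetical, a stand-in for wherever a journal might host the supplementary media,

import qrcode

# Encode a (hypothetical) supplementary-media URL as a QR code image;
# scanning the printed PNG with a phone opens the online content.
media_url = "https://www.example.com/cell/supplemental/animated-figure-1"
image = qrcode.make(media_url)
image.save("figure-1-qr.png")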

Here’s the animated cover, which is titled, Malaria Channels Host Nutrients,

I find this development interesting in light of moves to provide information via graphical abstracts and/or video abstracts. For example, the publisher Elsevier offers authors publishing in its various science journals instructions on preparing graphical abstracts (from Elsevier’s authors’ graphical abstracts webpage),

A Graphical Abstract should allow readers to quickly gain an understanding of the main take-home message of the paper and is intended to encourage browsing, promote interdisciplinary scholarship, and help readers identify more quickly which papers are most relevant to their research interests.

Authors must provide an image that clearly represents the work described in the paper. A key figure from the original paper, summarising the content can also be submitted as a graphical abstract.

Elsevier provides examples of good graphical abstracts such as this one,

Journal of Controlled Release, Volume 140, Issue 3, 16 December 2009, Pages 210-217. Hydrotropic oligomer-conjugated glycol chitosan as a carrier of paclitaxel: Synthesis, characterization, and in vivo biodistribution. G. Saravanakumar, Kyung Hyun Min, et al., doi:10.1016/j.jconrel.2009.06.015

For an example of a video abstract, I’m going back to Cell, which offers this one from Hebrew University of Jerusalem researchers discussing their work on octopus arm movements and visual control,

http://www.youtube.com/user/cellvideoabstracts?blend=21&ob=5

I have a suspicion that the trend toward presenting science to the general public and other experts using graphical and video abstracts and other primarily ‘visual’ media could have quite an impact on the sciences and how they are practiced. I haven’t quite figured out what any of those impacts might be, but if someone would like to comment on that, I’d be more than happy to hear from you.

Meanwhile, it seems to be a Cell kind of day, so I’ve decided to embed the Lady Gaga Bad Project parody by the Hui Zheng Laboratory at Baylor College of Medicine in Texas for a second time,

Happy Weekend!

Elsevier and Google; scientific publishing

Due to my interest in communication, I have from time to time commented on or drawn attention to developments in publishing (scientific and otherwise) and ebooks. Earlier this month, Google announced the launch of its ebook store, and now Elsevier, a major publisher of scientific, technical, and medical information, has announced that it will be using Google’s ebook store as a new distribution channel. From the Dec. 10, 2010 news item on Nanowerk,

Elsevier, the world-leading publisher of scientific, technical and medical information products and services, announced today that it is participating in the recently launched Google eBooks store by including a large selection of Elsevier’s eBook titles. Elsevier regards Google eBooks as a valuable new distribution channel to increase reach and accessibility of its scientific and professional ebook content in the United States.

“Selling a substantial part of our Science & Technology ebooks through Google eBooks will significantly add to the reach and accessibility of our content,” said Suzanne BeDell, Managing Director of Science & Technology Books at Elsevier. “The platform contains one of the largest ebook collections in the world and is compatible with a wide range of devices such as laptops, smartphones, e-readers and tablets. We are therefore confident that our partnership with Google will prove an important step in reaching our objective to provide universal access to scientific content.”

Presumably ‘adding accessibility’, as BeDell puts it, means that the books will be significantly cheaper. (I still remember the shock I experienced at discovering the costs of academic texts. Years later, I am still recovering.)

I’m not sure that buyers will own the ebooks. It is possible for an ebook to be removed without notice if you buy from Amazon, as I noted in my Sept. 10, 2010 posting (part 2 of a 3-part series on e-readers).

If you’re interested in the Google part of the story, here’s an article by E. B. Boyd for Fast Company,

If you stroll on over to your corner bookstore this week and ask the person behind the counter about Google’s new ebookstore, which launches today, you probably won’t be greeted with the kind of teeth-gnashing that has accompanied other digital developments, like Amazon’s online bookstore or the advent of proprietary e-readers. Instead, you might actually be greeted with some excitement and delight. That’s because Google is taking a different approach to selling e-books than Amazon or Barnes & Noble. Rather than create a closed system that leaves others out in the cold, Google is actually partnering with independent bookstores to sell its wares–and share the profits.

There are a few reasons Google is going a different way. The ebookstore emerged from the Google Books program, which didn’t start out as a potential revenue stream. Instead, the company’s book-scanning project was simply a program to help the company fulfill its mission to make all of the world’s information accessible. Since so much information is contained in books, the company wanted to make sure that if you were using Google Search to look for a particular topic, it would be able to point you to books containing information about that topic, in addition to relevant web pages. Then, as Google Books began partnering with publishers and contemplating a program to sell books in addition to just making them searchable, it made a philosophical decision that brick-and-mortar bookstores are critical to the literary ecosystem. “A huge amount of books are bought because people go into a physical bookstore and say, ‘Hey, I want this, I want that,’” Google Books engineering director Dan Clancy told an audience at the Computer History Museum last year.

Here’s a response from some of the bookstore owners (from the article),

Bookstores seem to be cautiously optimistic about the Google program. A person who answered the phone at St. Mark’s Bookshop in New York said, “We’re looking forward to it,” before referring Fast Company to the ABA. “We’re really pleased,” said Mark LaFramboise, a buyer at Washington D.C.’s Politics and Prose. “We’ve been waiting for this for a long time.”

Darin Sennett, director of strategic partnerships at the famous Powell’s book shop in Portland, Oregon, is particularly excited about Google’s technological model. The Kindle, the Nook, and the Sony eReader all use the traditional approach to e-books: They sell DRM-protected files that customers download to devices and which must be read with specific e-reading software. Google, however, is using the cloud. Its e-books will be stored on Google servers, and readers who’ve purchased them will access their books via a browser. [emphasis mine] Unlike in the Kindle system, where Kindle e-books can only be read on Kindle devices, Google e-books will be able to be read on any device that has a browser. Until now, independent bookstores have been effectively shut out of devices like the iPad and smartphones (which are emerging as many customers’ reading platforms of choice) because the e-books available from other distributors were either not compatible with those devices or the formatting was so clunky as to make them effectively unreadable.

Certainly, this sounds a lot better from the bookseller’s and reader’s perspectives. I’m glad to see that people at one of my favourite bookstores (Powell’s) are so enthusiastic, but I do note that the books are stored on Google’s servers, which means they can be removed or even altered quite easily. On the plus side, the books can be downloaded in either PDF or ePub format. All in all, bravo!

Visualizing innovation and the ACS’s second nanotube contest

I’ve found more material on visualizing data; this time the data is about innovation. An article by Cliff Kuang in Fast Company comments on WAINOVA (the World Alliance for Innovation) and its interactive atlas of innovation. From the article,

Bestario, a Spanish infographics firm, designs Web sites that attempt to find new relationships in a teeming mass of data. Sometimes, the results are interesting, as examples, if nothing else, of data porn; other times, it’s merely confounding. Its new project is a great deal easier to explain: The Wainova World Atlas of Innovation attempts to map the world’s major science and business incubators, as well as the professional associations linking them.

Kuang goes on to point out some of the difficulties associated with visualizing data when you get beyond using bar graphs and pie charts. The atlas can be found here on the WAINOVA site. If you’re interested in looking at more data visualization projects, you can check out the infosthetics site mentioned in Kuang’s article.

Rob Annan at the Don’t leave Canada behind blog has picked up on an article in the Financial Post which, based on an American Express survey, states that Canadian business is being very innovative despite the economic downturn. You can read Annan’s comments and get a link to the Financial Post article here. As for my take on it all, I concede that it takes nerve to keep investing in your business when everything is so uncertain, but I agree with Annan (if I may take the liberty of rephrasing his comment slightly) that there’s no real innovation in the examples given in the Financial Post article.

The American Chemical Society (ACS) has announced its second nano video contest. From the announcement on Azonano,

In our last video contest “What is Nano?”, you showed us that nano is a way of making things smaller, lighter and more efficient, making it possible to build better machines, solar cells, materials and radios. But another question remains: how exactly is “nano” going to impact both us and the world? We want you to think big about nano and show us how nano will address the challenges we face today.

The contest is being run by ACS Nanotation NanoTube. There’s a cash prize of $500 USD and submissions must be made between July 6, 2009 and August 9, 2009. (Sorry, I kept forgetting to put this up.) You must be a registered user to make a submission, but registration is free here. The Nano Song (complete with puppets!) that was making the rounds a few months ago was a video submission for the first contest.

Elsevier has announced a new project, the Article of the Future. The beta site is here. From the announcement on Nanowerk News,

Elsevier, a leading publisher of scientific, technical and medical information products and services, today announces the ‘Article of the Future’ project, an ongoing collaboration with the scientific community to redefine how a scientific article is presented online. The project takes full advantage of online capabilities, allowing readers individualized entry points and routes through content, while exploiting the latest advances in visualization techniques.

Yes, it’s back to visualization and, eventually, multimodal discourse analysis, and to one of the big questions (for me): how is all this visualizing of data going to affect our knowledge? More tomorrow.