Tag Archives: open access

Green chemistry and zinc oxide nanoparticles from Iran (plus some unhappy scoop about Elsevier and access)

It’s been a while since I’ve featured any research from Iran, partly because I find the information disappointingly scant. While the Dec. 22, 2013 news item on Nanowerk doesn’t provide quite as much detail as I’d like, it does shine a light on an aspect of Iranian nanotechnology research that I haven’t previously encountered, green chemistry (Note: A link has been removed),

Researchers used a simple and eco-friendly method to produce homogenous zinc oxide (ZnO) nanoparticles with various applications in medical industries due to their photocatalytic and antibacterial properties (“Sol–gel synthesis, characterization, and neurotoxicity effect of zinc oxide nanoparticles using gum tragacanth”).

Zinc oxide nanoparticles have numerous applications, among which mention can be made of photocatalytic issues, piezoelectric devices, synthesis of pigments, chemical sensors, drug carriers in targeted drug delivery, and the production of cosmetics such as sunscreen lotions.

The Dec. 22, 2013 Iran Nanotechnology Initiative Council (INIC) news release, which originated the news item, provides a bit more detail (Note: Links have been removed),

By using natural materials found in the geography of Iran and through sol-gel technique, the researchers synthesized zinc oxide nanoparticles in various sizes. To this end, they used zinc nitrate hexahydrate and gum tragacanth obtained from the Northern parts of Khorassan Razavi Province as the zinc-providing source and the agent to control the size of particles in aqueous solution, respectively.

Among the most important characteristics of the synthesis method, mention can be made of its simplicity, the use of cost-effective materials, conservation of green chemistry principals to prevent the use of hazardous materials to human safety and environment, production of nanoparticles in homogeneous size and with high efficiency, and most important of all, the use of native materials that are only found in Iran and its introduction to the world.

Here’s a link to and a citation for the paper,

Sol–gel synthesis, characterization, and neurotoxicity effect of zinc oxide nanoparticles using gum tragacanth by Majid Darroudi, Zahra Sabouri, Reza Kazemi Oskuee, Ali Khorsand Zak, Hadi Kargar, and Mohamad Hasnul Naim Abd Hamid. Ceramics International, Volume 39, Issue 8, December 2013, Pages 9195–9199

There’s a bit more technical information in the paper’s abstract,

The use of plant extract in the synthesis of nanomaterials can be a cost effective and eco-friendly approach. In this work we report the “green” and biosynthesis of zinc oxide nanoparticles (ZnO-NPs) using gum tragacanth. Spherical ZnO-NPs were synthesized at different calcination temperatures. Transmission electron microscopy (TEM) imaging showed the formation most of nanoparticles in the size range of below 50 nm. The powder X-ray diffraction (PXRD) analysis revealed wurtzite hexagonal ZnO with preferential orientation in (101) reflection plane. In vitro cytotoxicity studies on neuro2A cells showed a dose dependent toxicity with non-toxic effect of concentration below 2 µg/mL. The synthesized ZnO-NPs using gum tragacanth were found to be comparable to those obtained from conventional reduction methods using hazardous polymers or surfactants and this method can be an excellent alternative for the synthesis of ZnO-NPs using biomaterials.

I was not able to find the DOI (digital object identifier) and this paper is behind a paywall.
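For anyone else trying to hunt down the DOI, CrossRef’s public metadata search is one place to look. Here’s a minimal Python sketch of that kind of title lookup; the api.crossref.org endpoint and the shape of its response are my assumptions about the service, not anything taken from the paper or the news item.

# Hypothetical sketch: looking up a DOI by title through CrossRef's public
# REST API (api.crossref.org). Endpoint and response layout are assumptions.
import json
import urllib.parse
import urllib.request

def find_doi(title):
    """Return the DOI of the best CrossRef match for a title, or None."""
    url = ("https://api.crossref.org/works?rows=1&query="
           + urllib.parse.quote(title))
    with urllib.request.urlopen(url) as response:
        items = json.load(response)["message"]["items"]
    return items[0].get("DOI") if items else None

print(find_doi("Sol-gel synthesis, characterization, and neurotoxicity "
               "effect of zinc oxide nanoparticles using gum tragacanth"))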

Elsevier and access

On a final note, Elsevier, the company that publishes Ceramics International and many other journals, is arousing some ire with what appear to be its latest policies concerning access, according to a Dec. 20, 2013 posting by Mike Masnick for Techdirt (Note: Links have been removed),

We just recently wrote about the terrible anti-science/anti-knowledge/anti-learning decision by publishing giant Elsevier to demand that Academia.edu take down copies of journal articles that were submitted directly by the authors, as Elsevier wished to lock all that knowledge (much of it taxpayer funded) in its ridiculously expensive journals. Mike Taylor now alerts us that Elsevier is actually going even further in its war on access to knowledge. Some might argue that Elsevier was okay in going after a “central repository” like Academia.edu, but at least it wasn’t going directly after academics who were posting pdfs of their own research on their own websites. While some more enlightened publishers explicitly allow this, many (including Elsevier) technically do not allow it, but have always looked the other way when authors post their own papers.

That’s now changed. As Taylor highlights, the University of Calgary sent a letter to its staff saying that a company “representing” Elsevier, was demanding that they take down all such articles on the University’s network.

While I do feature open access and other intellectual property issues from time to time, Masnick and his colleagues are more intimately familiar with these issues (albeit firmly committed to open access), so I encourage you to read his Dec. 20, 2013 posting in its entirety.

Memories, science, archiving, and authenticity

This is going to be one of my more freewheeling excursions into archiving and memory. I’ll be starting with a movement afoot in the US government to give citizens open access to science research, moving on to a network dedicated to archiving nanoscience- and nanotechnology-oriented information, examining the notion of authenticity in regard to the Tiananmen Square incident on June 4, 1989, and finishing with the Council of Canadian Academies’ Expert Panel on Memory Institutions and the Digital Revolution.

In his June 4, 2013 posting on the Pasco Phronesis blog, David Bruggeman features information and an overview of the US Office of Science and Technology Policy’s efforts to introduce open access to science research for citizens (Note: Links have been removed),

Back in February, the Office of Science and Technology Policy (OSTP) issued a memorandum to federal science agencies on public access for research results.  Federal agencies with over $100 million in research funding have until August 22 to submit their access plans to OSTP.  This access includes research publications, metadata on those publications, and underlying research data (in a digital format).

A collection of academic publishers, including the Association of American Publishers and the organization formerly known as the American Association for the Advancement of Science (publisher of Science), has offered a proposal for a publishing industry repository for public access to federally funded research that they publish.

David provides a somewhat caustic perspective on the publishers’ proposal, while Jocelyn Kaiser’s June 4, 2013 article for ScienceInsider covers the proposal in more detail (Note: Links have been removed),

Organized in part by the Association of American Publishers (AAP), which represents many commercial and nonprofit journals, the group calls its project the Clearinghouse for the Open Research of the United States (CHORUS). In a fact sheet that AAP gave to reporters, the publishers describe CHORUS as a “framework” that would “provide a full solution for agencies to comply with the OSTP memo.”

As a starting point, the publishers have begun to index papers by the federal grant numbers that supported the work. That index, called FundRef, debuted in beta form last week. You can search by agency and get a list of papers linked to the journal’s own websites through digital object identifiers (DOIs), widely used ID codes for individual papers. The pilot project involved just a few agencies and publishers, but many more will soon join FundRef, says Fred Dylla, executive director of the American Institute of Physics. (AAAS, which publishes ScienceInsider, is among them and has also signed on to CHORUS.)

The next step is to make the full-text papers freely available after agencies decide on embargo dates, Dylla says. (The OSTP memo suggests 12 months but says that this may need to be adjusted for some fields and journals.) Eventually, the full CHORUS project will also allow searches of the full-text articles. “We will make the corpus available for anybody’s search tool,” says Dylla, who adds that search agreements will be similar to those that publishers already have with Google Scholar and Microsoft Academic Search.

I couldn’t find any mention in Kaiser’s article as to how long the materials would be available. Is this supposed to be an archive as well as a repository? Regardless, I found the beta project, FundRef, a little confusing. The link from the ScienceInsider article takes you to this May 28, 2013 news release,

FundRef, the funder identification service from CrossRef [crossref.org], is now available for publishers to contribute funding data and for retrieval of that information. FundRef is the result of collaboration between funding agencies and publishers that correlates grants and other funding with the scholarly output of that support.

Publishers participating in FundRef add funding data to the bibliographic metadata they already provide to CrossRef for reference linking. FundRef data includes the name of the funder and a grant or award number. Manuscript tracking systems can incorporate a taxonomy of 4000 global funder names, which includes alternate names, aliases, and abbreviations enabling authors to choose from a standard list of funding names. Then the tagged funding data will travel through publishers’ production systems to be stored at CrossRef.
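Since the tagged funding data is stored at CrossRef, my guess (and it is only a guess) is that it will be retrievable through CrossRef’s public REST API alongside the rest of the bibliographic metadata. Here’s a rough Python sketch of what such a query might look like; the /funders endpoint and the funder filter on /works are assumptions on my part rather than anything documented in the news release.

# Hypothetical sketch: pulling FundRef-style funding metadata back out of
# CrossRef's public REST API. The /funders endpoint and the funder filter
# on /works are assumptions; the news release only says the data is
# "stored at CrossRef".
import json
import urllib.parse
import urllib.request

API = "https://api.crossref.org"

def first_funder(name):
    """Look up a funder record (name, id, aliases) by name."""
    url = API + "/funders?rows=1&query=" + urllib.parse.quote(name)
    with urllib.request.urlopen(url) as response:
        items = json.load(response)["message"]["items"]
    return items[0] if items else None

def works_funded_by(funder_id, rows=5):
    """List a few works whose metadata credits the given funder id."""
    url = API + "/works?rows={}&filter=funder:{}".format(rows, funder_id)
    with urllib.request.urlopen(url) as response:
        return json.load(response)["message"]["items"]

funder = first_funder("National Science Foundation")
if funder:
    for work in works_funded_by(funder["id"]):
        print(work.get("DOI"), "-", (work.get("title") or ["(untitled)"])[0])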

I was hoping that clicking on the FundRef button would take me to a database that I could test or tour. At this point, I wouldn’t have described the project as being at the beta stage (from a user’s perspective) as they are still building it and gathering data. However, there is lots of information on the FundRef webpage including an Additional Resources section featuring a webinar,

Attend an Introduction to FundRef Webinar – Thursday, June 6, 2013 at 11:00 am EDT

You do need to sign up for the webinar. Happily, it is open to international participants as well as US participants.

Getting back to my question on whether or not this effort is also an archive of sorts, there is a project closer to home (nanotechnology-wise, anyway) that touches on these issues from an unexpected perspective. From the Nanoscience and Emerging Technologies in Society: Sharing Research and Learning Tools (NETS) About webpage,

The Nanoscience and Emerging Technologies in Society: Sharing Research and Learning Tools (NETS) is an IMLS-funded [Institute of Museum and Library Services] project to investigate the development of a disciplinary repository for the Ethical, Legal and Social Implications (ELSI) of nanoscience and emerging technologies research. NETS partners will explore future integration of digital services for researchers studying ethical, legal, and social implications associated with the development of nanotechnology and other emerging technologies.

NETS will investigate digital resources to advance the collection, dissemination, and preservation of this body of research,  addressing the challenge of marshaling resources, academic collaborators, appropriately skilled data managers, and digital repository services for large-scale, multi-institutional and disciplinary research projects. The central activity of this project involves a spring 2013 workshop that will gather key researchers in the field and digital librarians together to plan the development of a disciplinary repository of data, curricula, and methodological tools.

Societal dimensions research investigating the impacts of new and emerging technologies in nanoscience is among the largest research programs of its kind in the United States, with an explicit mission to communicate outcomes and insights to the public. By 2015, scholars across the country affiliated with this program will have spent ten years collecting qualitative and quantitative data and developing analytic and methodological tools for examining the human dimensions of nanotechnology. The sharing of data and research tools in this field will foster a new kind of social science inquiry and ensure that the outcomes of research reach public audiences through multiple pathways.

NETS will be holding a stakeholders workshop June 27 – 28, 2013 (invite only), from the workshop description webpage,

What is the value of creating a dedicated Nano ELSI repository?
The benefits of having these data in a shared infrastructure are: the centralization of research and ease of discovery; uniformity of access; standardization of metadata and the description of projects; and facilitation of compliance with funder requirements for data management going forward. Additional benefits of this project will be the expansion of data curation capabilities for data repositories into the nanotechnology domain, and research into the development of disciplinary repositories, for which very little literature exists.

What would a dedicated Nano ELSI repository contain?
Potential materials that need to be curated are both qualitative and quantitative in nature, including:

  • survey instruments, data, and analyses
  • interview transcriptions and analyses
  • images or multimedia
  • reports
  • research papers, books, and their supplemental data
  • curricular materials

What will the Stakeholder Workshop accomplish?
The Stakeholder Workshop aims to bring together the key researchers and digital librarians to draft a detailed project plan for the implementation of a dedicated Nano ELSI repository. The Workshop will be used as a venue to discuss questions such as:

  • How can a repository extend research in this area?
  • What is the best way to collect all the research in this area?
  • What tools would users envision using with this resource?
  • Who should maintain and staff a repository like this?
  • How much would a repository like this cost?
  • How long will it take to implement?

What is expected of Workshop participants?
The workshop will bring together key researchers and digital librarians to discuss the requirements for a dedicated Nano ELSI repository. To inform that discussion, some participants will be requested to present on their current or past research projects and collaborations. In addition, workshop participants will be enlisted to contribute to the draft of the final project report and make recommendations for the implementation plan.

While my proposal did not get accepted (full disclosure), I do look forward to hearing more about the repository, although I notice there’s no mention made of archiving the materials.

The importance of repositories and archives was brought home to me when I came across a June 4, 2013 article by Glyn Moody for Techdirt about the Tiananmen Square incident and subtle and unsubtle ways of censoring access to information,

Today is June 4th, a day pretty much like any other day in most parts of the world. But in China, June 4th has a unique significance because of the events that took place in Tiananmen Square on that day in 1989.

Moody recounts some of the ways in which people have attempted to commemorate the day online while evading the authorities’ censorship efforts. Do check out the article for the inside scoop on why ‘Big Yellow Duck’ is a censored term. One of the more subtle censorship efforts provides some chills (from the Moody article),

… according to this article in the Wall Street Journal, it looks like the Chinese authorities are trying out a new tactic for handling this dangerous topic:

On Friday, a China Real Time search for “Tiananmen Incident” did not return the customary message from Sina informing the user that search results could not be displayed due to “relevant laws, regulations and policies.” Instead the search returned results about a separate Tiananmen incident that occurred on Tomb Sweeping Day in 1976, when Beijing residents flooded the area to protest after they were prevented from mourning the recently deceased Premiere [sic] Zhou Enlai.

This business of eliminating a traumatic and disturbing historical event and substituting something less contentious reminded me both of the saying ‘history is written by the victors’ and of Luciana Duranti and her talk titled, Trust and Authenticity in the Digital Environment: An Increasingly Cloudy Issue, which took place in Vancouver (Canada) last year (mentioned in my May 18, 2012 posting).

Duranti raised many, many issues that most of us don’t consider when we blithely store information in the ‘cloud’ or create blogs that turn out to be repositories of a sort (and then don’t know what to do with them; ça c’est moi). She also previewed a Sept. 26 – 28, 2013 conference to be hosted in Vancouver by UNESCO (United Nations Educational, Scientific, and Cultural Organization), “Memory of the World in the Digital Age: Digitization and Preservation.” (UNESCO’s Memory of the World programme hosts a number of these themed conferences and workshops.)

The Sept. 2013 UNESCO ‘memory of the world’ conference in Vancouver seems rather timely in retrospect. The Council of Canadian Academies (CCA) announced that Dr. Doug Owram would be chairing their Memory Institutions and the Digital Revolution assessment (mentioned in my Feb. 22, 2013 posting; scroll down 80% of the way) and, after checking recently, I noticed that the Expert Panel has been assembled and it includes Duranti. Here’s the assessment description from the CCA’s ‘memory institutions’ webpage,

Library and Archives Canada has asked the Council of Canadian Academies to assess how memory institutions, which include archives, libraries, museums, and other cultural institutions, can embrace the opportunities and challenges of the changing ways in which Canadians are communicating and working in the digital age.
Background

Over the past three decades, Canadians have seen a dramatic transformation in both personal and professional forms of communication due to new technologies. Where the early personal computer and word-processing systems were largely used and understood as extensions of the typewriter, advances in technology since the 1980s have enabled people to adopt different approaches to communicating and documenting their lives, culture, and work. Increased computing power, inexpensive electronic storage, and the widespread adoption of broadband computer networks have thrust methods of communication far ahead of our ability to grasp the implications of these advances.

These trends present both significant challenges and opportunities for traditional memory institutions as they work towards ensuring that valuable information is safeguarded and maintained for the long term and for the benefit of future generations. It requires that they keep track of new types of records that may be of future cultural significance, and of any changes in how decisions are being documented. As part of this assessment, the Council’s expert panel will examine the evidence as it relates to emerging trends, international best practices in archiving, and strengths and weaknesses in how Canada’s memory institutions are responding to these opportunities and challenges. Once complete, this assessment will provide an in-depth and balanced report that will support Library and Archives Canada and other memory institutions as they consider how best to manage and preserve the mass quantity of communications records generated as a result of new and emerging technologies.

The Council’s assessment is running concurrently with the Royal Society of Canada’s expert panel assessment on Libraries and Archives in 21st century Canada. Though similar in subject matter, these assessments have a different focus and follow a different process. The Council’s assessment is concerned foremost with opportunities and challenges for memory institutions as they adapt to a rapidly changing digital environment. In navigating these issues, the Council will draw on a highly qualified and multidisciplinary expert panel to undertake a rigorous assessment of the evidence and of significant international trends in policy and technology now underway. The final report will provide Canadians, policy-makers, and decision-makers with the evidence and information needed to consider policy directions. In contrast, the RSC panel focuses on the status and future of libraries and archives, and will draw upon a public engagement process.

Question

How might memory institutions embrace the opportunities and challenges posed by the changing ways in which Canadians are communicating and working in the digital age?

Sub-questions

With the use of new communication technologies, what types of records are being created and how are decisions being documented?
How is information being safeguarded for usefulness in the immediate to mid-term across technologies considering the major changes that are occurring?
How are memory institutions addressing issues posed by new technologies regarding their traditional roles in assigning value, respecting rights, and assuring authenticity and reliability?
How can memory institutions remain relevant as a trusted source of continuing information by taking advantage of the collaborative opportunities presented by new social media?

From the Expert Panel webpage (go there for all the links), here’s a complete listing of the experts,

Expert Panel on Memory Institutions and the Digital Revolution

Dr. Doug Owram, FRSC, Chair
Professor and Former Deputy Vice-Chancellor and Principal, University of British Columbia Okanagan Campus (Kelowna, BC)

Sebastian Chan     Director of Digital and Emerging Media, Smithsonian Cooper-Hewitt National Design Museum (New York, NY)

C. Colleen Cook     Trenholme Dean of Libraries, McGill University (Montréal, QC)

Luciana Duranti   Chair and Professor of Archival Studies, the School of Library, Archival and Information Studies at the University of British Columbia (Vancouver, BC)

Lesley Ellen Harris     Copyright Lawyer; Consultant, Author, and Educator; Owner, Copyrightlaws.com (Washington, D.C.)

Kate Hennessy     Assistant Professor, Simon Fraser University, School of Interactive Arts and Technology (Surrey, BC)

Kevin Kee     Associate Vice-President Research (Social Sciences and Humanities) and Canada Research Chair in Digital Humanities, Brock University (St. Catharines, ON)

Slavko Manojlovich     Associate University Librarian (Information Technology), Memorial University of Newfoundland (St. John’s, NL)

David Nostbakken     President/CEO of Nostbakken and Nostbakken, Inc. (N + N); Instructor of Strategic Communication and Social Entrepreneurship at the School of Journalism and Communication, Carleton University (Ottawa, ON)

George Oates     Art Director, Stamen Design (San Francisco, CA)

Seamus Ross     Dean and Professor, iSchool, University of Toronto (Toronto, ON)

Bill Waiser, SOM, FRSC     Professor of History and A.S. Morton Distinguished Research Chair, University of Saskatchewan (Saskatoon, SK)

Barry Wellman, FRSC     S.D. Clark Professor, Department of Sociology, University of Toronto (Toronto, ON)

I notice they have a lawyer whose specialty is copyright, Lesley Ellen Harris. I did check out her website, copyrightlaws.com, and could not find anything that hinted at any strong opinions on the topic. She seems to feel that copyright is a good thing, but how far she’d like to take this is a mystery to me based on the blog postings I viewed.

I’ve also noticed that this panel has 13 people, four of whom are women, which equals a little more (June 5, 2013, 1:35 pm PDT, I substituted the word ‘less’ for the word ‘more’; my apologies for the arithmetic error) than 25% representation; four of 13 works out to roughly 31%. That’s a surprising percentage given how heavily the fields of library and archival studies are weighted towards women.

I have meandered somewhat but my key points are these:

  • How are we going to keep information available? It’s all very well to have a repository, but how long will the data be kept in the repository and where does it go afterwards?
  • There’s certainly a bias, with the NETS workshop and, likely, the CCA Expert Panel on Memory Institutions and the Digital Revolution, toward institutions as the source for information that’s worth keeping for however long or short a time that should be. What about individual efforts, e.g. Don’t Leave Canada Behind; FrogHeart; Techdirt; The Last Word on Nothing, and many other blogs?
  • The online redirection of Tiananmen Square incident queries is chilling, but I’ve often wondered what would happen if someone wanted to remove ‘objectionable material’ from an e-book, e.g. To Kill a Mockingbird. A new reader wouldn’t notice the loss if the material had been excised in a subtle or professional fashion.

As for how this has an impact on science, it’s been claimed that Isaac Newton attempted to excise Robert Hooke from history (my Jan. 19, 2012 posting). Whether it’s true or not, there is remarkably little about Robert Hooke despite his accomplishments, and his languishment is a reminder that we must always take care that we retain our memories.

ETA June 6, 2013: David Bruggeman added some more information and links about CHORUS in his June 5, 2013 post (On The Novelty Of Corporate-Government Partnership In STEM Education),

Before I dive into today’s post, a brief word about CHORUS. Thanks to commenter Joe Kraus for pointing me to this Inside Higher Ed post, which includes a link to the fact sheet CHORUS organizers distributed to reporters. While there are additional details, there are still not many details to sink one’s teeth in. And I remain surprised at the relative lack of attention the announcement has received. On a related note, nobody who’s been following open access should be surprised by Michael Eisen’s reaction to CHORUS.

I encourage you to check out David’s post as he provides some information about a new STEM (science, technology, engineering, mathematics) collaboration between the US National Science Foundation and companies such as GE and Intel.

Free the nano—stop patenting publicly funded research

Joshua Pearce, a professor at Michigan Technological University, has written a commentary on patents and nanotechnology for Nature magazine, which claims that current patent regimes strangle rather than encourage innovation. From the free article, Physics: Make nanotechnology research open-source by Joshua Pearce in Nature 491, 519–521 (22 November 2012) doi:10.1038/491519a (Note: I have removed footnotes),

Any innovator wishing to work on or sell products based on single-walled carbon nanotubes in the United States must wade through more than 1,600 US patents that mention them. He or she must obtain a fistful of licences just to use this tubular form of naturally occurring graphite rolled from a one-atom-thick sheet. This is because many patents lay broad claims: one nanotube example covers “a composition of matter comprising at least about 99% by weight of single-wall carbon molecules”. Tens of others make overlapping claims.

Patent thickets occur in other high-tech fields, but the consequences for nanotechnology are dire because of the potential power and immaturity of the field. Advances are being stifled at birth because downstream innovation almost always infringes some early broad patents. By contrast, computing, lasers and software grew up without overzealous patenting at the outset.

Nanotechnology is big business. According to a 2011 report by technology consultants Cientifica, governments around the world have invested more than US$65 billion in nanotechnology in the past 11 years [my July 15, 2011 posting features an interview with Tim Harper, Cientifica CEO and founder, about the then newly released report]. The sector contributed more than $250 billion to the global economy in 2009 and is expected to reach $2.4 trillion a year by 2015, according to business analysts Lux Research. Since 2001, the United States has invested $18 billion in the National Nanotechnology Initiative; the 2013 US federal budget will add $1.8 billion more.

This investment is spurring intense patent filing by industry and academia. The number of nanotechnology patent applications to the US Patent and Trademark Office (USPTO) is rising each year and is projected to exceed 4,000 in 2012. Anyone who discovers a new and useful process, machine, manufacture or composition of matter, or any new and useful improvement thereof, may obtain a patent that prevents others from using that development unless they have the patent owner’s permission.

Pearce makes some convincing points (Note: I have removed a footnote),

Examples of patents that cover basic components include one owned by the multinational chip manufacturer Intel, which covers a method for making almost any nanostructure with a diameter less than 50 nm; another, held by nanotechnology company NanoSys of Palo Alto, California, covers composites consisting of a matrix and any form of nanostructure. And Rice University in Houston, Texas, has a patent covering “composition of matter comprising at least about 99% by weight of fullerene nanotubes”.

The vast majority of publicly announced IP licence agreements are now exclusive, meaning that only a single person or entity may use the technology or any other technology dependent on it. This cripples competition and technological development, because all other would-be innovators are shut out of the market. Exclusive licence agreements for building-block patents can restrict entire swathes of future innovation.

Pearce’s argument for open source,

This IP rush assumes that a financial incentive is necessary to innovate, and that without the market exclusivity (monopoly) offered by a patent, development of commercially viable products will be hampered. But there is another way, as decades of innovation for free and open-source software show. Large Internet-based companies such as Google and Facebook use this type of software. Others, such as Red Hat, make more than $1 billion a year from selling services for products that they give away for free, like Red Hat’s version of the computer operating system Linux.

An open-source model would leave nanotechnology companies free to use the best tools, materials and devices available. Costs would be cut because most licence fees would no longer be necessary. Without the shelter of an IP monopoly, innovation would be a necessity for a company to survive. Openness reduces the barrier for small, nimble entities entering the market.

John Timmer in his Nov. 23, 2012 article for Wired.co.uk expresses both support and criticism,

Some of Pearce’s solutions are perfectly reasonable. He argues that the National Science Foundation adopt the NIH model of making all research it funds open access after a one-year time limit. But he also calls for an end of patents derived from any publicly funded research: “Congress should alter the Bayh-Dole Act to exclude private IP lockdown of publicly funded innovations.” There are certainly some indications that Bayh-Dole hasn’t fostered as much innovation as it might (Pearce notes that his own institution brings in 100 times more money as grants than it does from licensing patents derived from past grants), but what he’s calling for is not so much a reform of Bayh-Dole as its elimination.

Pearce wants changes in patenting to extend well beyond the academic world, too. He argues that the USPTO should put a moratorium on patents for “nanotechnology-related fundamental science, materials, and concepts.” As we described above, the difference between a process innovation and the fundamental properties resulting in nanomaterial is a very difficult thing to define. The USPTO has struggled to manage far simpler distinctions; it’s unrealistic to expect it to manage a moratorium effectively.

While Pearce points to the 3-D printing sector admiringly, there are some issues even there, as per Mike Masnick’s Nov. 21, 2012 posting on Techdirt.com (Note: I have removed links),

We’ve been pointing out for a while that one of the reasons why advancements in 3D printing have been relatively slow is because of patents holding back the market. However, a bunch of key patents have started expiring, leading to new opportunities. One, in particular, that has received a fair bit of attention was the Formlabs 3D printer, which raised nearly $3 million on Kickstarter earlier this year. It got a ton of well-deserved attention for being one of the first “low end” (sub ~$3,000) 3D printers with very impressive quality levels.

Part of the reason the company said it could offer such a high quality printer at a such a low price, relative to competitors, was because some of the key patents had expired, allowing it to build key components without having to pay astronomical licensing fees. A company called 3D Systems, however, claims that Formlabs missed one patent. It holds US Patent 5,597,520 on a “Simultaneous multiple layer curing in stereolithography.” While I find it ridiculous that 3D Systems is going legal, rather than competing in the marketplace, it’s entirely possible that the patent is valid. It just highlights how the system holds back competition that drives important innovation, though.

3D Systems claims that Formlabs “took deliberate acts to avoid learning” about 3D Systems’ live patents. The lawsuit claims that Formlabs looked only for expired patents — which seems like a very odd claim. Why would they only seek expired patents? …

I strongly suggest reading both Pearce’s and Timmer’s articles as they both provide some very interesting perspectives about nanotechnology IP (intellectual property) open access issues. I also recommend Mike Masnick’s piece for exposure to a rather odd but unfortunately not uncommon legal suit designed to limit competition in a relatively new technology (3-D printers).

Australians weigh in on Open Access publication proposal in UK

Misguided is the word used in Jason Norrie’s June 20, 2012 editorial for The Conversation to describe the UK proposal to adopt ‘open access’ publishing. From the editorial as reproduced on physorg.com,

The British government has enlisted the services of Wikipedia founder Jimmy Wales in a bid to support open access publishing for all scholarly work by UK researchers, regardless of whether it is also published in a subscription-only journal.

The cost of doing so would range from £50 to £60 million a year, according to an independent study commissioned by the government. Professor Dame Janet Finch, who led the study, said that “in the longer term, the future lies with open access publishing.” Her report says that “the principle that the results of research that has been publicly funded should be freely accessible in the public domain is a compelling one, and fundamentally unanswerable.”

Norrie’s June 20, 2012 editorial can also be found on The Conversation website, where he includes responses from academics to the proposal,

Emeritus Professor Colin Steele, former librarian of the Australian National University, said that although the report was supportive of the principles of open access, it proposed a strategy that was unnecessarily costly and could not be duplicated in Australia.

“The way they’ve gone about it almost totally focuses, presumably due to publisher pressure, on the gold model of open access,” he said. “As a result of that, the amount of money needed to carry out the transition – the money needed for article processing charges – is very large. It’s not surprising that the publishers have come out in favour of the report, because it will guarantee they retain their profits.

“It certainly wouldn’t work in Australia because there simply isn’t that amount of research council funding available.

Stevan Harnad, a Professor in the Department of Psychology at Université du Québec à Montréal, said the report had scrubbed the green model from the UK policy agenda and replaced it with a “vague, slow evolution toward gold open access publishing, at the publishers’ pace and price. The result would be very little open access, very slowly, and at a high price … taken out of already scarce UK research funds, instead of the rapid and cost-free open access growth vouchsafed by green open access mandates from funders and universities.”

For anyone not familiar with the differences between the ‘green’ and ‘gold’ models, the Wikipedia essay on Open Access offers a definition (Note: I have removed links and footnotes),

OA can be provided in two ways

  • Green OA Self Archiving – authors publish in any journal and then self-archive a version of the article for free public use in their institutional repository, in a central repository (such as PubMed Central), or on some other OA website. What is deposited is the peer-reviewed postprint – either the author’s refereed, revised final draft or the publisher’s version of record. Green OA journal publishers endorse immediate OA self-archiving by their authors. OA self-archiving was first formally proposed in 1994 by Stevan Harnad [emphasis mine]. However, self-archiving was already being done by computer scientists in their local FTP archives in the ’80s, later harvested into Citeseer. High-energy physicists have been self-archiving centrally in arXiv since 1991.
  • Gold OA Publishing – authors publish in an open access journal that provides immediate OA to all of its articles on the publisher’s website. (Hybrid open access journals provide Gold OA only for those individual articles for which their authors (or their author’s institution or funder) pay an OA publishing fee.) Examples of OA publishers are BioMed Central and the Public Library of Science.

I guess that Wikipedia entry explains why Harnad is quoted in Norrie’s editorial.

While money is one of the most discussed issues in the ‘open access publication’ debate, I am beginning to wonder why there isn’t more mention of the individual career-building and the institutional and national science reputation-building that the current publication model helps make possible.

I have posted on this topic previously; the May 28, 2012 posting is my most comprehensive (huge) take on the subject.

As for The Conversation, this is my first encounter with this very interesting Australian experiment in communicating research to the public. From the Who We Are page,

The Conversation is an independent source of analysis, commentary and news from the university and research sector — written by acknowledged experts and delivered directly to the public. Our team of professional editors work with more than 3,100 registered academics and researchers to make this wealth of knowledge and expertise accessible to all.

We aim to be a site you can trust. All published work will carry attribution of the authors’ expertise and, where appropriate, will disclose any potential conflicts of interest, and sources of funding. Where errors or misrepresentations occur, we will correct these promptly.

Sincere thanks go to our Founding Partners who gave initial funding support: CSIRO, Monash University, University of Melbourne, University of Technology Sydney and University of Western Australia.

Our initial content partners include those institutions, Strategic Partner RMIT University and a growing list of member institutions. More than 180 institutions contribute content, including Australia’s research-intensive, Group of Eight universities.

We are based in Melbourne, Australia, and wholly owned by The Conversation Media Trust, a not-for-profit company.

The copyright notice at the bottom of The Conversation’s web pages suggests it was founded in 2010. It certainly seems to have been embraced by Australian academics and other interested parties, as per the Home page,

The Conversation is an independent source of analysis, commentary and news from the university and research sector viewed by 350,000 readers each month. Our team of professional editors work with more than 2,900 registered academics and researchers from 200 institutions.

I wonder if there’s any chance we’ll see something like this here in Canada?

Opening it all up (open software, Nature, and Naked Science)

I’m coming back to the ‘open access’ well this week as there’ve been a few new developments since my massive May 28, 2012 posting on the topic.

A June 5, 2012 posting by Glyn Moody at the Techdirt website brought yet another aspect of ‘open access’ to my attention,

Computers need software, and some of that software will be specially written or adapted from existing code to meet the particular needs of the scientists’ work. This makes computer software a vital component of the scientific process. It also means that being able to check that code for errors is as important as being able to check the rest of the experiment’s methodology. And yet very rarely can other scientists do that, because the code employed is not made available.

That’s right, open access applies to scientific software too.

Meanwhile over at the Guardian newspaper website, Philip Campbell, Nature journal’s editor-in-chief, notes that open access to research is inevitable in a June 8, 2012 article by Alok Jha,

Open access to scientific research articles will “happen in the long run”, according to the editor-in-chief of Nature, one of the world’s premier scientific journals.

Philip Campbell said that the experience for readers and researchers of having research freely available is “very compelling”. But other academic publishers said that any large-scale transition to making research freely available had to take into account the value and investments they added to the scientific process.

“My personal belief is that that’s what’s going to happen in the long run,” said Campbell. However, he added that the case for open access was stronger for some disciplines, such as climate research, than others.

Campbell was speaking at a briefing hosted by the Science Media Centre. Interestingly, ScienceOnline Vancouver’s upcoming (June 12, 2012, 6:30 pm mingling starts, 7-9 pm PDT for the panel discussion) meeting about open access (titled Naked Science; Excuse me: your science is showing) features a speaker from Canada’s Science Media Centre (from the event page),

  1. Heather Piwowar is a postdoc with Duke University and the Dept of Zoology at UBC.  She’s a researcher on the NSF-funded DataONE and Dryad projects, studying data.  Specifically, how, when, and why do scientists publicly archive the datasets they collect?  When do they reuse the data of others?  What related policies and tools would help facilitate more efficient and effective use of data resources?  Heather is also a co-founder of total-impact, a web application that reveals traditional and non-traditional impact metrics of scholarly articles, datasets, software, slides, and blog posts.
  2. Heather Morrison is a Vancouver-based, well-known international open access advocate and practitioner of open scholarship, through her blogs The Imaginary Journal of Poetic Economics http://poeticeconomics.blogspot.com and her dissertation-blog http://pages.cmns.sfu.ca/heather-morrison/
  3. Lesley Evans Ogden is a freelance science journalist and the Vancouver media officer for the Science Media Centre of Canada. In the capacity of freelance journalist, she is a contributing science writer at Natural History magazine, and has written for a variety of publications including YES Mag, Scientific American (online), The Guardian, Canadian Running, and Bioscience. She has a PhD in wildlife ecology, and spent more than a decade slogging through mud and climbing mountains to study the breeding and winter ecology of migratory birds. She is also an alumni of the Science Communications program at the Banff Centre. (She will be speaking in the capacity of freelance journalist).
  4. Joy Kirchner is the Scholarly Communications Coordinator at University of British Columbia where she heads the University’s developing Copyright office in addition to the Scholarly Communications office based in the Library. Her role involves coordinating the University’s copyright education services, identifying recommended and sustainable service models to support scholarly communication activities on the campus and coordinating formalized discussion and education of these issues with faculty, students, research and publishing constituencies on the UBC campus. Joy has also been instrumental in working with faculty to host their open access journals through the Library’s open access journal hosting program; she was involved in the implementation and content recruitment of the Library’s open access institutional repository, and she was instrumental in establishing the Provost’s Scholarly Communications Steering Committee and associated working groups where she sits as a key member of the Committee looking into an open access position at UBC amongst other things. Joy is also chair of UBC’s Copyright Advisory Committee and working groups. She is also a faculty member with the Association of Research Libraries (ARL) / Association of College and Research Libraries (ACRL) Institute for Scholarly Communication, she assists with the coordination and program development of ACRL’s much lauded Scholarly Communications Road Show program, she is a Visiting Program Officer with ACRL in support of their scholarly communications programs, and she is a Fellow with ARL’s Research Library Leadership Fellows executive program (RLLF). Previous positions include Librarian for Collections, Licensing & Digital Scholarship (UBC), Electronic Resources Coordinator (Columbia Univ.), Medical & Allied Health Librarian and Science & Engineering Librarian. She holds a BA and an MLIS from the University of British Columbia.

I’m starting to get the impression that there is a concerted communications effort taking place. Between this listing and the one in my May 28, 2012 posting, there are just too many articles and events occurring for this to be purely chance.

Special issue on nanotechnology and regulations from EJLT

The European Journal of Law and Technology (EJLT) is featuring 15 articles on the theme of nanotechnology and regulations in a special issue. From the Dec. 12, 2011 news item on Nanowerk,

The issue contains 15 contributions that canvass some of the most pressing philosophical, ethical and regulatory questions currently being debated around the world in relation to nanotechnologies and more specifically nanomaterials.

The EJLT is an open access journal, so you can view these articles or any others that may interest you. Here’s the Table of Contents for the special issue,

Table of Contents

Editorial

Editorial
Philip Leith, Abdul Paliwala

Introduction to the Special Issue

Why the elephant in the room appears to be more than a nano-sized challenge
Joel D’Silva, Diana Meagan Bowman

Nano Technology Special Edition

Decision Ethics and Emergent Technologies: The Case of Nanotechnology
David Berube
Justice or Beneficence: What Regulatory Virtue for Nano-Governance?
Hailemichael Teshome Demissie
Regulating Nanoparticles: the Problem of Uncertainty
Roger Strand, Kamilla Lein Kjølberg
Complexities of labelling of nanoproducts on the consumer markets
Harald Throne-Holst, Arie Rip
Soft regulation and responsible nanotechnological development in the European Union: Regulating occupational health and safety in the Netherlands
Bärbel Dorbeck-Jung
Nanomaterials and the European Water Framework Directive
Steffen Foss Hansen, Anders Baun, Catherine Ganzleben
The Proposed Ban on Certain Nanomaterials for Electrical and Electronic Equipment in Europe and Its Global Security Implications: A Search for an Alternative Regulatory Approach
Hitoshi Nasu, Thomas Faunce
The Regulation of Nano-particles under the European Biocidal Products Directive: Challenges for Effective Civil Society Participation
Michael T Reinsborough, Gavin Sullivan
Value chains as a linking-pin framework for exploring governance and innovation in nano-involved sectors: illustrated for nanotechnologies and the food packaging sector
Douglas Robinson
Food and nano-food within the Chinese regulatory system: no need to have overregulation. Less physicality can produce more power.
Margherita Poto
Regulation and Governance of Nanotechnology in China: Regulatory Challenges and Effectiveness
Darryl Stuart Jarvis, Noah Richmond
How Resilient is India to Nanotechnology Risks? Examining Current Developments, Capacities and an Approach for Effective Risk Governance and Regulation
Shilpanjali Deshpande Sarma
Toward Safe and Sustainable Nanomaterials: Chemical Information Call-in to Manufacturers of Nanomaterials by California as a Case Study
William Ryan, Sho Takatori, Thomas Booze, Hai-Yong Kang
De minimis curat lex: New Zealand law and the challenge of the very small
Colin Gavaghan, Jennifer Moore

I notice that the last article was authored by the same people who produced a review of New Zealand’s nanotechnology regulatory framework in Sept. 2011. The Science Media Centre of New Zealand noted this in a Sept. 6, 2011 article about the review,

The “Review of the Adequacy of New Zealand’s Regulatory Systems to Manage the Possible Impacts of Manufactured Nanomaterials” by Colin Gavaghan (in Dunedin) and Jennifer Moore (in Wellington) lists three possible levels of regulatory gaps, but points to a lack of consensus on just what constitutes a “gap”.

The authors note where such nanomaterials are not covered by existing regulation, and where these regulations are triggered by the presence of the nanomaterials. They focus on first and second generation products and say that as nanomaterials evolve, more work will need to be done on regulation.

“Some reviews of this topic have suggested that subsequent generations of nanotechnologies are likely to present a much more significant challenge to existing regulatory structures,” the authors say.

The EJLT special issue looks like it has a pretty interesting range of articles representing nanotechnology and regulations in various jurisdictions. I’m thrilled to see a couple of articles on China, one on India, and, of course, the piece on New Zealand as I don’t often find material on those countries. Thank you EJLT!

Beethoven inspires Open Research

“Professionally our methods of transmitting and reviewing the results of research are generations old and by now are totally inadequate for their purpose.” That was written in 1945, proving “plus ça change; plus c’est la même chose.” It’s taken from an essay, As We May Think, by Vannevar Bush for the July 1945 issue of The Atlantic magazine. Here’s the editor’s introduction,

As Director of the Office of Scientific Research and Development, Dr. Vannevar Bush has coordinated the activities of some six thousand leading American scientists in the application of science to warfare. In this significant article he holds up an incentive for scientists when the fighting has ceased. He urges that men of science should then turn to the massive task of making more accessible our bewildering store of knowledge. For years inventions have extended man’s physical powers rather than the powers of his mind. Trip hammers that multiply the fists, microscopes that sharpen the eye, and engines of destruction and detection are new results, but not the end results, of modern science. Now, says Dr. Bush, instruments are at hand which, if properly developed, will give man access to and command over the inherited knowledge of the ages. The perfection of these pacific instruments should be the first objective of our scientists as they emerge from their war work. Like Emerson’s famous address of 1837 on “The American Scholar,” this paper by Dr. Bush calls for a new relationship between thinking man and the sum of our knowledge. —THE EDITOR

These days, with the open data and open access initiatives, there seems to be new interest in making science more accessible, and this time it’s coming from the grassroots. Over at Techdirt, Glyn Moody, in his Nov. 18, 2011 posting, highlights a new project for making science research accessible. It’s called ‘Beethoven’s open repository’ and here’s more about the project from the organizers (from the Transforming the way we publish research webpage),

We want to change the way research is communicated, both amongst researchers, as well as with health practitioners, patients and the wider public. Inspired by Beethoven, we want to build a research version of his repository and try to tackle the question What if the public scientific record would be updated directly as research proceeds?

Every year, over 1 million scholarly articles are being published in around 25,000 journals. No researcher – let alone the public – can keep track of all the relevant information any more, not even in small fields. To make things worse, only about 20% of these articles are freely accessible in one way or another, but the majority is not. Our project aims at providing a technically feasible solution: open-access articles that evolve along with the topic they cover.

This would allow researchers, research funders and the public to stay up to date with research in their fields of interest. It would save researchers time because when they write their results up, they could make use of the context provided by the existing articles, and outreach would be built in from the beginning, rather than being perceived as an extra burden that comes after a traditional publication. It would also save funders time because monitoring research progress would amount to checking the change logs of the respective articles. It would also save patients time, especially when a disease makes their clocks tick faster. Last but not least, it would open the doors for science as a spectator sport, and allow for enhanced interaction between citizen science and more traditional approaches to research.

Daniel Mietchen is one of the moving forces (organizers) for this effort. From the About Me page,

A biophysicist by training, I have used a number of techniques from the physical sciences to investigate biological systems and their evolution. My focus so far was on the application of Magnetic Resonance Imaging techniques to fossils, embryonic development and cold tolerance but I did some excursions into music perception, measuring brain structure, or vocal production in elephants as well.

For the prototyping of Beethoven’s open repository of research, I have teamed up with brain scientist M. Fabiana Kubke (@kubke) of the University of Auckland, and we invite everyone to join us in shaping the project.

The organizers are raising funds for ‘Beethoven’s open repository’ at RocketHub. They have also posted this video (which explains the reference to Beethoven as well as other details about their project),

I have featured the issue of access to research previously in my Nov. 3, 2011 posting, Disrupting scientific research. There is also a US federal government public consultation mentioned in my Nov. 7, 2011 posting. The consultation is open to comments until January 2012.

I wish Mietchen and Kubke the best of luck as they raise funds for ‘Beethoven’s open repository’.

Trip down memory lane courtesy of the Royal Society

It’s a long trip down memory lane, courtesy of the Royal Society, all the way back to 1665 when they first started publishing their Philosophical Transactions. In her Oct. 26, 2011 posting in Punctuated Equilibrium on the Guardian science blogs site, GrrlScientist writes,

Beginning today, the historical archives of the peer-reviewed journal, Philosophical Transactions of the Royal Society, are permanently free to online access from anywhere in the world, according to an announcement by The Royal Society.

The Royal Society, established in 1660, began publishing the Philosophical Transactions of the Royal Society — the world’s first scientific journal — in March 1665. In 1886, it was divided into two journals, Philosophical Transactions A (mathematics, physics and engineering) and Philosophical Transactions B (biological sciences), both of which are published to this day. Its historical archives are defined as all scientific papers published 70 years or longer ago. These historical archives include more than 60,000 scientific papers.

I took a peek at the 1665-1666 issue and it is quite the experience to see what was being published. Here’s an excerpt from the Table of Contents for the 1st issue (Note: I have removed links to the documents),

* Epistle Dedicatory (Phil. Trans. 1665 1: doi:10.1098/rstl.1665.0001)

* The Introduction (Phil. Trans. 1665 1:1-2; doi:10.1098/rstl.1665.0002)

* An Accompt of the Improvement of Optick Glasses (Phil. Trans. 1665 1:2-3; doi:10.1098/rstl.1665.0003)

* A Spot in One of the Belts of Jupiter (Phil. Trans. 1665 1:3; doi:10.1098/rstl.1665.0005)

* The Motion of the Late Comet Praedicted (Phil. Trans. 1665 1:3-8; doi:10.1098/rstl.1665.0004)

* An Experimental History of Cold (Phil. Trans. 1665 1:8-9; doi:10.1098/rstl.1665.0006)

* An Account of a Very Odd Monstrous Calf (Phil. Trans. 1665 1:10; doi:10.1098/rstl.1665.0007)

* Of a Peculiar Lead-Ore of Germany, and the Use Thereof (Phil. Trans. 1665 1:10-11; doi:10.1098/rstl.1665.0008)
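
As an aside, every one of those entries carries a DOI, and the DOI system resolves these 17th-century papers just as it does current ones. Here’s a minimal sketch of my own (nothing official from the Royal Society) that uses standard content negotiation at doi.org to pull citation metadata for a few of the entries; it assumes the DOIs are registered with an agency, such as Crossref, that can return CSL JSON, and it needs network access to run.

# Sketch: resolve Philosophical Transactions DOIs via doi.org content negotiation.
# Assumes the DOIs are registered with an agency (e.g., Crossref) that can return
# CSL JSON metadata; network access is required.
import json
import urllib.request

DOIS = [
    "10.1098/rstl.1665.0001",  # Epistle Dedicatory
    "10.1098/rstl.1665.0002",  # The Introduction
    "10.1098/rstl.1665.0003",  # An Accompt of the Improvement of Optick Glasses
]

def fetch_csl(doi):
    """Return CSL JSON metadata for a DOI using content negotiation at doi.org."""
    request = urllib.request.Request(
        "https://doi.org/" + doi,
        headers={"Accept": "application/vnd.citationstyles.csl+json"},
    )
    with urllib.request.urlopen(request, timeout=30) as response:
        return json.loads(response.read().decode("utf-8"))

if __name__ == "__main__":
    for doi in DOIS:
        metadata = fetch_csl(doi)
        # 'title' and 'container-title' are standard CSL JSON fields.
        print(doi, "->", metadata.get("title"), "/", metadata.get("container-title"))

Running it should print each title along with the journal name, which is a handy way of double-checking citations like the ones above.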

I did take a look at one of the articles and found it easy to read, other than the spelling. Here’s a little more about the Philosophical Transactions from the Royal Society publishing website,

In 1662, the newly formed ‘Royal Society of London for Improving Natural Knowledge’ was granted a charter to publish by King Charles II and on 6 March 1665, the first issue of Philosophical Transactions was published under the visionary editorship of Henry Oldenburg, who was also the Secretary of the Society. … In 1886, the breadth and scope of scientific discovery had increased to such an extent that it became necessary to divide the journal into two, Philosophical Transactions A and B, covering the physical sciences and the life sciences respectively.

This initiative is part of a larger commitment to open access publishing (more from GrrlScientist’s Oct. 26, 2011 posting),

Opening its historical archive is part of the Royal Society’s ongoing commitment to open access in scientific publishing. It coincides with The Royal Society’s 5th annual Open Access Week, and also comes soon after the launch of its first ever fully open access journal, Open Biology. All of the Royal Society’s journals provide free access to selected papers, hot-off-the-presses.

There are more details about when and which journals give full open access in GrrlScientist’s post.

Princeton goes Open Access; arXiv is 10 years old

Open access to science research papers seems only right given that most Canadian research is publicly funded. (As I understand it most research worldwide is publicly funded.)

This week, Princeton University declared that their researchers’ work would be mostly open access (from the Sept. 28, 2011 news item on physorg.com),

Prestigious US academic institution Princeton University has banned researchers from giving the copyright of scholarly articles to journal publishers, except in certain cases where a waiver may be granted.

Here’s a little more from Australia-based Sunanda Creagh’s Sept. 28, 2011 posting on The Conversation blog,

The new rule is part of an Open Access policy aimed at broadening the reach of their scholarly work and encouraging publishers to adjust standard contracts that commonly require exclusive copyright as a condition of publication.

Universities pay millions of dollars a year for academic journal subscriptions. People without subscriptions, which can cost up to $25,000 a year for some journals or hundreds of dollars for a single issue, are often prevented from reading taxpayer funded research. Individual articles are also commonly locked behind pay walls.

Researchers and peer reviewers are not paid for their work but academic publishers have said such a business model is required to maintain quality.

This Sept. 29, 2011 article by James Chang for the Princetonian adds a few more details,

“In the interest of better disseminating the fruits of our scholarship to the world, we did not want to put it artificially behind a pay wall where much of the world won’t have access to it,” committee chair and computer science professor Andrew Appel ’81 said.

The policy passed the Faculty Advisory Committee on Policy with a unanimous vote, and the proposal was approved on Sept. 19 by the general faculty without any changes.

A major challenge for the committee, which included faculty members in both the sciences and humanities, was designing a policy that could comprehensively address the different cultures of publication found across different disciplines.

While science journals have generally adopted open-access into their business models, humanities publishers have not. In the committee, there was an initial worry that bypassing the scholarly peer-review process that journals facilitate, particularly in the humanities, could hurt the scholarly industry.

At the end, however, the committee said they felt that granting the University non-exclusive rights would not harm the publishing system and would, in fact, give the University leverage in contract negotiations.

That last comment about contract negotiations is quite interesting as it brings to mind the University of California’s boycott of the Nature journals last year, when Nature made a bold attempt to raise subscription fees substantially (by 400%) after having given the university special deals for years (my June 15, 2010 posting).

Creagh’s posting features some responses from Australian academics such as Simon Marginson,

Having prestigious universities such as Princeton and Harvard fly the open access flag represented a step forward, said open access advocate Professor Simon Marginson from the University of Melbourne’s Centre for the Study of Higher Education.

“The achievement of free knowledge flows, and installation of open access publishing on the web as the primary form of publishing rather than oligopolistic journal publishing subject to price barriers, now depends on whether this movement spreads further among the peak research and scholarly institutions,” he said.

“Essentially, this approach – if it becomes general – normalises an open access regime and offers authors the option of opting out of that regime. This is a large improvement on the present position whereby copyright restrictions and price barriers are normal and authors have to attempt to opt in to open access publishing, or risk prosecution by posting their work in breach of copyright.”

“The only interests that lose out under the Princeton proposal are the big journal publishers. Everyone else gains.”

Whether you view Princeton’s action as a negotiating ploy or a high-minded attempt to give freer access to publicly funded research, it certainly puts pressure on the business models that scholarly publishers follow.

arXiv, celebrating its 10th anniversary this year, is another open access initiative although it didn’t start that way. From the Sept. 28, 2011 news item on physorg.com,

“I’ve heard a lot about how democratic the arXiv is,” Ginsparg [Paul Ginsparg, professor of physics and information science] said Sept. 23 in a talk commemorating the anniversary. People have, for example, praised the fact that the arXiv makes scientific papers easily available to scientists in developing countries where subscriptions to journals are not always affordable. “But what I was trying to do was set up a system that eliminated the hierarchy in my field,” he said. As a physicist at Los Alamos National Laboratory, “I was receiving preprints long before graduate students further down the food chain,” Ginsparg said. “When we have success we like to think it was because we worked harder, not just because we happened to have access.”

Bill Steele’s Sept. 27, 2011 article for Cornell University’s Chronicle Online notes,

One of the surprises, Ginsparg said, is that electronic publishing has not transformed the seemingly irrational scholarly publishing system in which researchers give their work to publishing houses from which their academic institutions buy it back by subscribing to journals. Scholarly publishing is still in transition, Ginsparg said, due to questions about how to fund electronic publication and how to maintain quality control. The arXiv has no peer-review process, although it does restrict submissions to those with scientific credentials.

But the lines of communication are definitely blurring. Ginsparg reported that a recent paper posted on the arXiv by Alexander Gaeta, Cornell professor of applied and engineering physics, was picked up by bloggers and spread out from there. The paper is to be published in the journal Nature and is still under a press embargo, but an article about it has appeared in the journal Science.
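
Since the arXiv has come up, it is worth noting how easy the repository makes it to pull preprint listings programmatically. Here’s a small sketch of my own (not anything from Ginsparg’s talk or Steele’s article) that queries arXiv’s public Atom API; the search term and the number of results are arbitrary choices for illustration.

# Sketch: query arXiv's public API (an Atom feed) for preprints matching a search term.
# The query string and max_results value below are arbitrary choices for illustration.
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

ATOM = {"atom": "http://www.w3.org/2005/Atom"}

def search_arxiv(query, max_results=5):
    """Yield (title, link) pairs for arXiv entries matching the query."""
    params = urllib.parse.urlencode({
        "search_query": query,
        "start": 0,
        "max_results": max_results,
    })
    url = "http://export.arxiv.org/api/query?" + params
    with urllib.request.urlopen(url, timeout=30) as response:
        feed = ET.fromstring(response.read())
    for entry in feed.findall("atom:entry", ATOM):
        title = entry.find("atom:title", ATOM).text.strip()
        link = entry.find("atom:id", ATOM).text.strip()  # the abstract page URL
        yield title, link

if __name__ == "__main__":
    for title, link in search_arxiv("all:nanotechnology"):
        print(title)
        print("  " + link)

The same endpoint also accepts author (au:) and category (cat:) searches, which speaks to the kind of free, no-subscription-required access Ginsparg was describing.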

Interesting, eh? It seems that scholarly publishing need not disappear but there’s no question its business models are changing.

Launching new open access (!) journal: Nanomaterials and Nanotechnology

I just got an email from someone at InTech about a new journal they are launching. There’s a call for papers for the first issue of Nanomaterials and Nanotechnology. The deadline is May 10, 2011 and the first issue will go live in June. From the email notice I received March 25, 2011,

Since all the journal’s content will be available online for free full-text download, will be fully indexed and promoted using social networks and other media, we hope that it will provide an outlet for researchers to publish their findings rapidly and at no cost to a wide global audience.

Here’s more about the journal, Nanomaterials and Nanotechnology (drat! my linking capability disappeared again: http://www.intechweb.org/about-nanotechnology-journal.html),

Nanomaterials and Nanotechnology publishes articles that focus on, but are not limited to, the following areas:

* Synthesis of nanosized materials

* Bottom-up, top-down, and directed-assembly methods for the organization of nanostructures

* Modeling and simulation of synthesis processes

* Nanofabrication and processing of nanoscale materials and devices

* Novel growth and fabrication techniques for nanostructures

* Characterization of size-dependent properties

* Nano-characterization techniques

* Properties of nanoscale materials

* Structure analysis at atomic, molecular, and nanometric range

* Realization and application of novel nanostructures and nanodevices

* Devices and technologies based on the size-dependent electronic, optical, and magnetic properties of nanomaterials

* Nanostructured materials and nanocomposites for energy conversion applications

* Nanophotonics and nanoplasmonics materials and devices

* Nanosystems for biological, medical, chemical, catalytic, energy and environmental applications

* Nanodevices for electronic, photonic, magnetic, imaging, diagnostic and sensor applications

* Nanobiotechnology and nanomedicine

Readership

The journal is addressed to a cross-disciplinary readership including scientists, researchers and professionals in both academia and industry with an interest in nanoscience and nanotechnology. The scope comprises (but is not limited to) the fundamental aspects and applications of nanoscience and nanotechnology in the areas of physics, chemistry, materials science and engineering, biology, energy/environment, and electronics.

Type of contributions

The journal publishes a complete selection of original articles, selected as regular papers, review articles, feature articles and short communications.

Here are some important points for both readers and contributors (from the email notice),

Points of uniqueness:

1) FREE FOR ALL – Open Access and no publishing fees

2) Fast review process and online publication – One at a time model

3) International Editorial Board:

Editor-in-Chief: Paola Prete

Editorial Board: C. N. R. Rao*, Toshiaki Enoki, Stephen O’Brien, Wolfgang Richter, Federico Rosei, Jonathan E. Spanier, Leander Tapfer

*C. N. R. Rao is Linus Pauling Research Professor at the Jawaharlal Nehru Centre for Advanced Scientific Research and Honorary Professor at the Indian Institute of Science (both at Bangalore). His research interests are in the chemistry of materials. He has authored nearly 1000 research papers and edited or written 30 books in materials chemistry. A member of several academies including the Royal Society and the US National Academy of Sciences, he is a recipient of the Einstein Gold Medal of UNESCO, the Hughes Medal of the Royal Society, and the Somiya Award of the International Union of Materials Research Societies (IUMRS). In 2005, he received the Dan David Prize for materials research from Israel and the first India Science Prize.

I went to find out more about the editorial board and found this list of names and affiliations (from http://www.intechweb.org/nn-editorial-board.html),

Editorial Board

C. N. R. Rao Fellow Royal Society, National Research Professor, Linus Pauling Research Professor and President of Jawaharlal Nehru Centre for Advanced Scientific Research Bangalore, India

Toshiaki Enoki Tokyo Institute of Technology, Japan

Stephen O’Brien The City College of New York, USA

Wolfgang Richter University of Rome Tor Vergata, Italy and Technischen Universität Berlin, Germany

Federico Rosei Université du Québec, Varennes, Canada [emphasis mine]

Jonathan E. Spanier Drexel University, Philadelphia, USA

Leander Tapfer Technical Unit of Materials Technologies Brindisi, ENEA, Italy

I’ve emphasized Federico Rosei’s name as he and his work have been featured here in a few postings: Aug. 11, 2008 (http://www.frogheart.ca/?p=50); June 15, 2010 (http://www.frogheart.ca/?p=1356); and November 17, 2010 (http://www.frogheart.ca/?p=2433).

Finally, the journal’s corporate offices are in Croatia. That’s one of the things I find so interesting about nanotechnology; it’s a very international affair.