Monthly Archives: May 2012

Opening up Open Access: European Union, UK, Argentina, US, and Vancouver (Canada)

There is a furor growing internationally and it’s all about open access. It ranges from a petition in the US, to a comprehensive ‘open access’ project from the European Union, to a decision in the Argentinian Legislature, to a speech from David Willetts, UK Minister of State for Universities and Science, and on to an upcoming meeting being held in Vancouver (Canada) in June 2012.

As this goes forward, I’ll try to be clear about which kind of open access I’m discussing: open access publication (access to published research papers), open access data (access to research data), or both.

The European Commission has adopted a comprehensive approach to giving easy, open access to research funded through the European Union under the auspices of the current 7th Framework Programme and the upcoming Horizon 2020 (or what would have been called the 8th Framework Programme under the old system), according to the May 9, 2012 news item on Nanowerk,

To make it easier for EU-funded projects to make their findings public and more readily accessible, the Commission is funding, through FP7, the project ‘Open access infrastructure for research in Europe’ (OpenAIRE). This ambitious project will provide a single access point to all the open access publications produced by FP7 projects during the course of the Seventh Framework Programme.

OpenAIRE is a repository network and is based on a technology developed in an earlier project called Driver. The Driver engine trawled through existing open access repositories of universities, research institutions and a growing number of open access publishers. It would index all these publications and provide a single point of entry for individuals, businesses or other scientists to search a comprehensive collection of open access resources. Today Driver boasts an impressive catalogue of almost six million [publications] taken from 327 open access repositories from across Europe and beyond.

OpenAIRE uses the same underlying technology to index FP7 publications and results. FP7 project participants are encouraged to publish their papers, reports and conference presentations to their institutional open access repositories. The OpenAIRE engine constantly trawls these repositories to identify and index any publications related to FP7-funded projects. Working closely with the European Commission’s own databases, OpenAIRE matches publications to their respective FP7 grants and projects providing a seamless link between these previously separate data sets.

OpenAIRE is also linked to CERN’s open access repository for ‘orphan’ publications. Any FP7 participants that do not have access to [their] own institutional repository can still submit open access publications by placing them in the CERN repository.
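For readers who like to see the plumbing, the matching step described above (harvest publication metadata from institutional repositories, then link each paper to its FP7 grant) can be sketched in a few lines of Python. This is purely my illustration, not OpenAIRE’s actual code or API; the record fields and the grant-number format are assumptions.

```python
import re

# Assumed format for FP7 grant agreement numbers appearing in repository metadata.
FP7_GRANT_RE = re.compile(r"FP7[-/ ](\d{6})")

def match_to_grants(records):
    """Group harvested repository records by the FP7 grant number found in their metadata."""
    by_grant = {}
    for rec in records:
        text = " ".join([rec.get("title", ""), rec.get("funding_note", "")])
        match = FP7_GRANT_RE.search(text)
        if match:
            by_grant.setdefault(match.group(1), []).append(rec["title"])
    return by_grant

# Made-up records standing in for harvested metadata.
records = [
    {"title": "Nanoscale sensors for water quality", "funding_note": "Funded under FP7-123456"},
    {"title": "An unrelated local study", "funding_note": ""},
]
print(match_to_grants(records))  # {'123456': ['Nanoscale sensors for water quality']}
```

The real system obviously does far more (de-duplication, author matching, links to the Commission’s grant database), but the grouping idea is the same.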

Here’s why I described this project as comprehensive, from the May 9, 2012 news item,

‘OpenAIRE is not just about developing new technologies,’ notes Ms Manola [Natalia Manola, the project’s manager], ‘because a significant part of the project focuses on promoting open access in the FP7 community. We are committed to promotional and policy-related activities, advocating open access publishing so projects can fully contribute to Europe’s knowledge infrastructure.’

The project is collecting usage statistics of the portal and the volume of open access publications. It will provide this information to the Commission and use this data to inform European policy in this domain.

OpenAIRE is working closely to integrate its information with the CORDA database, the master database of all EU-funded research projects. Soon it should be possible to click on a project in CORDIS (the EU’s portal for research funding), for example, and access all the open access papers published by that project. Project websites will also be able to provide links to the project’s peer reviewed publications and make dissemination of papers virtually effortless.

The project participants are also working with EU Members to develop a European-wide ‘open access helpdesk’ which will answer researchers’ questions about open access publishing and coordinate the open access initiatives currently taking place in different countries. The helpdesk will build up relationships and identify additional open access repositories to add to the OpenAIRE network.

Meanwhile, there’s been a discussion on the UK’s Guardian newspaper website about an ‘open access’ issue, money, in a May 9, 2012 posting by John Bynner,

The present academic publishing system obstructs the free communication of research findings. By erecting paywalls, commercial publishers prevent scientists from downloading research papers unless they pay substantial fees. Libraries similarly pay huge amounts (up to £1m or more per annum) to give their readers access to online journals.

There is general agreement that free and open access to scientific knowledge is desirable. The way this might be achieved has come to the fore in recent debates about the future of scientific and scholarly journals.

Our concern lies with the major proposed alternative to the current system. Under this arrangement, authors are expected to pay when they submit papers for publication in online journals: the so called “article processing cost” (APC). The fee can amount to anything between £1,000 and £2,000 per article, depending on the reputation of the journal. Although the fees may sometimes be waived, eligibility for exemption is decided by the publisher and such concessions have no permanent status and can always be withdrawn or modified.

A major problem with the APC model is that it effectively shifts the costs of academic publishing from the reader to the author and therefore discriminates against those without access to the funds needed to meet these costs. [emphasis mine] Among those excluded are academics in, for example, the humanities and the social sciences whose research funding typically does not include publication charges, and independent researchers whose only means of paying the APC is from their own pockets. Academics in developing countries in particular face discrimination under APC because of their often very limited access to research funds.

There is another approach that could be implemented for a fraction of the cost of commercial publishers’ current journal subscriptions. “Access for all” (AFA) journals, which charge neither author nor reader, are committed to meeting publishing costs in other ways.

Bynner offers a practical solution: have libraries pay their subscription fees to AFA journals instead, thereby funding ‘access for all’.

The open access discussion in the UK hasn’t stopped with a few posts in the Guardian; there’s also support from the government. David Willetts, in a May 2, 2012 speech to the UK Publishers Association Annual General Meeting, had this to say, from the UK’s Dept. for Business Innovation and Skills website,

I realise this move to open access presents a challenge and opportunity for your industry, as you have historically received funding by charging for access to a publication. Nevertheless that funding model is surely going to have to change even beyond the positive transition to open access and hybrid journals that’s already underway. To try to preserve the old model is the wrong battle to fight. Look at how the music industry lost out by trying to criminalise a generation of young people for file sharing. [emphasis mine] It was companies outside the music business such as Spotify and Apple, with iTunes, that worked out a viable business model for access to music over the web. None of us want to see that fate overtake the publishing industry.

Wider access is the way forward. I understand the publishing industry is currently considering offering free public access to scholarly journals at all UK public libraries. This is a very useful way of extending access: it would be good for our libraries too, and I welcome it.

It would be deeply irresponsible to get rid of one business model and not put anything in its place. That is why I hosted a roundtable at BIS in March last year when all the key players discussed these issues. There was a genuine willingness to work together. As a result I commissioned Dame Janet Finch to chair an independent group of experts to investigate the issues and report back. We are grateful to the Publishers Association for playing a constructive role in her exercise, and we look forward to receiving her report in the next few weeks. No decisions will be taken until we have had the opportunity to consider it. But perhaps today I can share with you some provisional thoughts about where we are heading.

The crucial options are, as you know, called green and gold. Green means publishers are required to make research openly accessible within an agreed embargo period. This prompts a simple question: if an author’s manuscript is publicly available immediately, why should any library pay for a subscription to the version of record of any publisher’s journal? If you do not believe there is any added value in academic publishing you may view this with equanimity. But I believe that academic publishing does add value. So, in determining the embargo period, it’s necessary to strike a suitable balance between enabling revenue generation for publishers via subscriptions and providing public access to publicly funded information. In contrast, gold means that research funding includes the costs of immediate open publication, thereby allowing for full and immediate open access while still providing revenue to publishers.

In a May 22, 2012 posting at the Guardian website, Mike Taylor offers some astonishing figures (I had no idea academic publishing was quite so lucrative) and notes that the funders have been a driving force in this ‘open access’ movement (Note: I have removed links from the excerpt),

The situation again, in short: governments and charities fund research; academics do the work, write and illustrate the papers, peer-review and edit each others’ manuscripts; then they sign copyright over to profiteering corporations who put it behind paywalls and sell research back to the public who funded it and the researchers who created it. In doing so, these corporations make grotesque profits of 32%-42% of revenue – far more than, say, Apple’s 24% or Penguin Books’ 10%. [emphasis mine]

… But what makes this story different from hundreds of other cases of commercial exploitation is that it seems to be headed for a happy ending. That’s taken some of us by surprise, because we thought the publishers held all the cards. Academics tend to be conservative, and often favour publishing their work in established paywalled journals rather than newer open access venues.

The missing factor in this equation is the funders. Governments and charitable trusts that pay academics to carry out research naturally want the results to have the greatest possible effect. That means publishing those results openly, free for anyone to use.

Taylor goes on to mention the ongoing ‘open access’ petition in the US,

There is a feeling that the [US] administration fully understands the value of open access, and that a strong demonstration of public concern could be all it takes now to goad it into action before the November election. To that end a Whitehouse.gov petition has been set up urging Obama to “act now to implement open access policies for all federal agencies that fund scientific research”. Such policies would bring the US in line with the UK and Europe.

The people behind the US campaign have produced a video,

Anyone wondering about the reference to Elsevier may want to check out Thomas Lin’s Feb. 13, 2012 article for the New York Times,

More than 5,700 researchers have joined a boycott of Elsevier, a leading publisher of science journals, in a growing furor over open access to the fruits of scientific research.

You can find out more about the boycott and the White House petition at the Cost of Knowledge website.

Meanwhile, Canadians are being encouraged to sign the petition (by June 19, 2012), according to the folks over at ScienceOnline Vancouver in a description of their June 12, 2012 event, Naked Science; Excuse me, your science is showing (a cheap, cheesy, and attention-getting title; why didn’t I think of it first?),

Exposed. Transparent. Nude. All adjectives that should describe access to scientific journal articles, but currently, that’s not the case. The research paid [for] by our Canadian taxpayer dollars is locked behind doors. The only way to access these articles is money, and lots of it!

Right now research articles cost more than a book! About $30. Only people with university affiliations have access, and only to journals their libraries subscribe to. Moms, dads, sisters, brothers, journalists, students, scientists, all pay for research, yet they can’t read the articles about their research without paying for it again. Now that doesn’t make sense.

….

There is also [a] petition going around that states that research paid for by US taxpayer dollars should be available for free to US taxpayers (and others!) on the internet. Don’t worry if you are [a] Canadian citizen, by signing this petition, Canadians would get access to the US research too and it would help convince the Canadian government to adopt similar rules. [emphasis mine]

Here’s where you can go to sign the petition. Whether this will encourage the Canadian government to adopt an open access philosophy, I do not know. On the one hand, the government has opened up access to data, notably Statistics Canada data, as mentioned by Frances Woolley in her March 22, 2012 posting on the Globe and Mail blog about that and other open access data initiatives by the Canadian government,

The federal government is taking steps to build the country’s data infrastructure. Last year saw the launch of the open data pilot project, data.gc.ca. Earlier this year the paywall in front of Statistics Canada’s enormous CANSIM database was taken down. The National Research Council, together with University of Guelph and Carleton University, has a new data registration service, DataCite, which allows Canadian researchers to give their data permanent names in the form of digital object identifiers. In the long run, these projects should, as the press releases claim, “support innovation”, “add value-for-money for Canadians,” and promote “the reuse of existing data in commercial applications.”

That seems promising, but there is a countervailing force: the Canadian government has also begun to charge subscription fees for journals that were formerly free. From the March 8, 2011 posting by Emily Chung on the CBC’s (Canadian Broadcasting Corporation) Quirks and Quarks blog,

The public has lost free online access to more than a dozen Canadian science journals as a result of the privatization of the National Research Council’s government-owned publishing arm.

Scientists, businesses, consultants, political aides and other people who want to read about new scientific discoveries in the 17 journals published by National Research Council Research Press now either have to pay $10 per article or get access through an institution that has an annual subscription.

It caused no great concern at the time,

Victoria Arbour, a University of Alberta graduate student, published her research in the Canadian Journal of Earth Sciences, one of the Canadian Science Publishing journals, both before and after it was privatized. She said it “definitely is too bad” that her new articles won’t be available to Canadians free online.

“It would have been really nice,” she said. But she said most journals aren’t open access, and the quality of the journal is a bigger concern than open access when choosing where to publish.

Then, there’s this from the new publisher, Canadian Science Publishing,

Cameron Macdonald, executive director of Canadian Science Publishing, said the impact of the change in access is “very little” on the average scientist across Canada because subscriptions have been purchased by many universities, federal science departments and scientific societies.

“I think the vast majority of researchers weren’t all that concerned,” he said. “So long as the journals continued with the same mission and mandate, they were fine with that.”

Macdonald said the journals were never strictly open access, as online access was free only inside Canadian borders and only since 2002.

So, journals that offered open access to research funded by Canadian taxpayers (to Canadians only) are now behind paywalls. Chung’s posting notes the problem already mentioned in the UK Guardian postings, money,

“It’s pretty prohibitively expensive to make things open access, I find,” she [Victoria Arbour] said.

Weir [Leslie Weir, chief librarian at the University of Ottawa] said more and more open-access journals need to impose author fees to stay afloat nowadays.

Meanwhile, the cost of electronic subscriptions to research journals has been ballooning as library budgets remain frozen, she said.

So far, no one has come up with a solution to the problem. [emphasis mine]

It seems they have designed a solution in the UK, as noted in John Bynner’s posting; perhaps we could try it out here.

Before I finish up, I should get to the situation in Argentina, from the May 27, 2012 posting on the Pasco Phronesis (David Bruggeman) blog (Note: I have removed a link in the following),

The lower house of the Argentinian legislature has approved a bill (en Español) that would require research results funded by the government be placed in institutional repositories once published. There would be exceptions for studies involving confidential information and the law is not intended to undercut intellectual property or patent rights connected to research. Additionally, primary research data must be published within 5 years of their collection. This last point would, as far as I can tell, be new ground for national open access policies, depending on how quickly the U.S. and U.K. may act on this issue.

Argentina steals a march on everyone by offering open access publication and open access data, within certain, reasonable constraints.

Getting back to David’s May 27, 2012 posting, he also offers some information on the European Union situation and some thoughts on science policy in Egypt.

I have long been interested in open access publication as I feel it’s infuriating to be denied access to research that one has paid for in tax dollars. I have written on the topic before in my Beethoven inspires Open Research (Nov. 18, 2011 posting) and Princeton goes Open Access; arXiv is 10 years old (Sept. 30, 2011 posting) and elsewhere.

ETA May 28, 2012: I found this NRC Research Press website for the NRC journals and it states,

We are pleased to announce that Canadians can enjoy free access to over 100 000 back files of NRC Research Press journals, dating back to 1951. Access to material in these journals published after December 31, 2010, is available to Canadians through subscribing universities across Canada as well as the major federal science departments.

Concerned readers and authors whose institutes have not subscribed for the 2012 volume year can speak to their university librarians or can contact us to subscribe directly.

It’s good to see Canadians still have some access, although personally, I do prefer to read recent research.

ETA May 29, 2012: Yikes, I think this is one of the longest posts ever, and I’m going to add this info about libre redistribution and data mining as they relate to open access, in an attempt to cover the topic as fully as possible in one posting.

First, here’s an excerpt from Ross Mounce’s May 28, 2012 posting on the Palaeophylophenomics blog about ‘libre redistribution’ (Note: I have removed a link),

I predict that the rights to electronically redistribute, and machine-read research will be vital for 21st century research – yet currently we academics often wittingly or otherwise relinquish these rights to publishers. This has got to stop. The world is networked, thus scholarly literature should move with the times and be openly networked too.

To better understand the notion of ‘libre redistribution’ you’ll want to read more of Mounce’s comments, but you might also want to check out Cameron Neylon’s comments in his March 6, 2012 posting on the Science in the Open blog,

Centralised control, failure to appreciate scale, and failure to understand the necessity of distribution and distributed systems. I have with me a device capable of holding the text of perhaps 100,000 papers. It also has the processor power to mine that text. It is my phone. In 2-3 years our phones, hell our watches, will have the capacity to not only hold the world’s literature but also to mine it, in context for what I want right now. Is Bob Campbell ready for every researcher, indeed every interested person in the world, to come into his office and discuss an agreement for text mining? Because the mining I want to do and the mining that Peter Murray-Rust wants to do will be different, and what I will want to do tomorrow is different to what I want to do today. This kind of personalised mining is going to be the accepted norm of handling information online very soon and will be at the very centre of how we discover the information we need.

This moves the discussion past access (taxpayers not seeing the research they’ve funded, researchers who don’t have subscriptions, libraries that can’t afford subscriptions, etc.) to what happens when you can get access freely. It opens up new ways of doing research by means of text mining and data mining, and the redistribution of both.

Double honours for NCC (ArboraNano and CelluForce recognized)

Congratulations to both ArboraNano and CelluForce (and FPInnovations, too) on receiving a Celebrate Partnerships! Award from the Association for the Development of Research and Innovation of Québec (Canada). The May 25, 2012 news item on Azonano by Will Soutter focuses on ArboraNano,

The Association for the Development of Research and Innovation of Quebec has presented a ‘Celebrate Partnerships!’ award to ArboraNano, the Canadian Forest NanoProducts Network, for its collaborative work with CelluForce, NanoQuébec and FPInnovations in the commercialization of nanocrystalline cellulose.

ArboraNano received the award on May 17, 2012 in a ceremony conducted at Marché Bonsecours in Montréal.

The May 17, 2012 news release from CelluForce offers additional details,

In its third edition, the Celebrate Partnerships! Award recognizes partnerships between entrepreneurs and researchers from Quebec and encourages them to develop these partnerships further. Award recipients are distinguished based on the economic return resulting from their collaborations, helping to build a stronger, more innovative and competitive Quebec.

“Nanocrystalline cellulose is perhaps the most promising discovery of this Century. I salute our industrial and government partners, respectively Domtar, NRCan [Natural Resources Canada], and Quebec’s MRNF and MDBIE, for having the foresight and the courage to embark on the world’s first NCC adventure. I offer my congratulations to the devoted researchers and employees of all of our organizations for this well deserved recognition,” states Pierre Lapointe, President and Chief Executive Officer at FPInnovations.

That quote from Lapointe reflects the fact that it was composed in French, where the formal style can seem fulsome to English speakers, although even by French standards the bit about the “most promising discovery of this Century” seems a little grandiose. Sadly, I’ve just remembered my own comments about the Canadian tendency to be downbeat on occasion, from my May 8, 2012 posting,

We tout innovation but at the same time are deeply disconcerted by and hesitant about the risktaking required to be truly innovative. (I have to note that I too write pieces that can be quite restrained and critical of these types of endeavours.) Really, it’s as much a question of culture as anything else. How do we support innovation and risktaking while maintaining some of our quintessential character?

rather than celebrating the moment. Such a quandary! In the meantime, I trust the recipients had a good time at the party.

ETA May 29, 2012: I have been brooding about my headline since technically it is one award, not two. (sigh) I’ll take the easy way out: since each partner got an award, it’s a double honour.

Science communication at the US National Academy of Sciences

I guess it’s going to be a science communication kind of day on this blog. Dr. Andrew Maynard on his 2020 Science blog posted a May 22, 2012 piece about a recent two-day science communication event at the US National Academy of Sciences in Washington, DC.

The colloquium, titled The Science of Science Communication, was held May 21 – 22, 2012. I was a little concerned about the content since the program suggests a dedication to metrics (which are useful but, I find, often misused) and the possibility of a predetermined result for science communication. After watching a webcast of the first session (Introduction and Overviews, offered by Baruch Fischhoff [Carnegie Mellon University] and Dietram Scheufele [University of Wisconsin at Madison], 55:35 mins.), I’m relieved to say that the first two presenters mostly avoided those pitfalls.

You can go here to watch any of the sessions held during those two days, although I will warn you that these are not TED talks. The shortest run roughly 27 minutes, most run over an hour, and a couple of them run over two hours.

Getting back to Andrew and his take on the proceedings, excerpted from his May 22, 2012 posting,

It’s important that the National Academies of Science are taking the study of science communication (and its practice) seriously.  Inviting a bunch of social scientists into the National Academies – and into a high profile colloquium like this – was a big deal.  And irrespective of the meeting’s content, it flags a commitment to work closely with researchers studying science communication and decision analysis to better ensure informed and effective communication strategies and practice.  Given the substantial interest in the colloquium – on the web as well as at the meeting itself – I hope that the National Academies build on this and continue to engage fully in this area.

Moving forward, there needs to be more engagement between science communication researchers and practitioners.  Practitioners of science communication – and the practical insight they bring – were notable by their absence (in the main) from the colloquium program.  If the conversation around empirical research is to connect with effective practice, there must be better integration of these two communities.

It’s interesting to read about the colloquia (the science communication event was one of a series of events known as the Arthur M. Sackler Colloquia) from the perspective of someone who was present in real time.

World Science Festival 2012

I’ve been writing about the World Science Festival in New York City for a few years now (here’s a May 5, 2011 posting about Baba Brinkman and Fotini Markopoulou-Kalamara at the 2011 festival), and the 2012 edition is about to launch.

On Tuesday, May 29, 2012, a 5th anniversary gala celebration will be hosted by Alan Alda (mentioned in my Nov. 11, 2011 posting) and Brian Greene (who co-founded the festival) at The Allen Room at Jazz at Lincoln Center at 7:30 pm. From the May 22, 2012 article by Dan Bacalzo for TheaterMania,

Performers will include Joshua Bell, Paige Faure, Drew Gehling, Rose Hemingway, David Hibbard, James Naughton, Momix, Debra Monk, Eryn Murman, and Abby O’Brien.

In addition, Festival co-founder Brian Greene will conduct two rarely-seen-in-public Physics experiments: the “Quantum Levitation” and “Double-Slit” experiments.

The 2012 World Science Festival runs from May 30, 2012 to June 3, 2012. I took a brief glance at the event listings and estimate that 25 to 30% are sold out. Tickets are still available for Cool Jobs, Cool Kids, Hot Contest, which features Baba Brinkman, the Canadian rap artist who performs the only peer-reviewed science rap in the world and was featured in last year’s Cool Jobs presentation. From the Cool Jobs event page,

This spectacular double feature shows science in a whole new light: pure, imaginative, mind-bending fun! The big event heats up as Alan Alda hosts The Flame Challenge, a contest conceived by Alda and Stony Brook University’s Center for Communicating Science, that calls on scientists worldwide to give their best explanation of how a flame works—but in a way that makes sense to a kid. Cheer for your own favorite as Alda announces the winner chosen by hundreds of 11-year olds around the country. The excitement continues with the Festival’s ever-popular Cool Jobs, a jaw-dropping show that brings you face-to-face with amazing scientists with amazing jobs. Imagine having an office that’s a zoo and co-workers that are lemurs and porcupines. How about getting paid to build machines that can read people’s thoughts. Or imagine your desk was a basketball court and your clients were superstars trying to improve their game through biomechanics? Well, you don’t have to just imagine. Hear from scientists who have these jobs—find out what they do, how they do it, and how they got the coolest and weirdest gigs on the planet.

There will also be a street fair on Sunday, June 3, 2012 from 9:59 am to 5:59 pm (what is the significance of those hours?). From the Ultimate Science Street Fair webpage,

The Ultimate Science Street Fair returns to Washington Square Park with another action-packed day of interactive exhibits, experiments, games and shows, all designed to entertain and inspire. Visit a telepathy lab and control a computer just by thinking about it, learn the science tricks to shooting perfect free-throws with NBA stars, create your own fragrance at the Smell Lab, ride a square-wheeled tricycle, and much more!

Admission to the street fair and some of the other events is free. Of course, you do need to be in New York City.

French scientists focus optical microscopes down to 30 nm

In fact, the French scientists are using two different imaging techniques to arrive at a resolution of 30 nm for their optical microscopes, according to the May 18, 2012 news item on Nanowerk.

Researchers from the Institut Pasteur and CNRS [Centre national de la recherche scientifique] have set up a new optical microscopy approach that combines two recent imaging techniques in order to visualize molecular assemblies without affecting their biological functions, at a resolution 10 times better than that of traditional microscopes. Using this approach, they were able to observe the AIDS virus and its capsids (containing the HIV genome) within cells at a scale of 30 nanometres, for the first time with light.

More specifically,

A study coordinated by Dr Christophe Zimmer (Institut Pasteur/CNRS), in collaboration with Dr Nathalie Arhel within the lab headed by Pr Pierre Charneau (Institut Pasteur/CNRS), shows that the association of two recent imaging techniques helps obtain unique images of molecular assemblies of HIV-1 capsids, with a resolution around 10 times better than that of traditional microscopes. This new approach, which uses super-resolution imaging and FlAsH labeling, does not affect the virus’ ability to self-replicate. It represents a major step forward in molecular biology studies, enabling the visualisation of microbial complexes at a scale of 30 nm without affecting their function.

The newly developed approach combines super-resolution PALM imaging and fluorescent FlAsH labeling. PALM imaging relies on the acquisition of thousands of low-resolution images, each showing only a few fluorescent molecules. The molecular positions are then calculated with high accuracy by computer programs and compiled into a single high-resolution image. FlAsH labeling involves the insertion of a 6-amino-acid peptide into the protein of interest. The binding of the FlAsH fluorophore to the peptide generates a fluorescent signal, thereby enabling the visualization of the protein. For the first time, researchers have combined these two methods in order to obtain high-resolution images of molecular structures in either fixed or living cells.
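To make the PALM half of that description a little more concrete, here is a minimal sketch of the reconstruction idea: find the few emitters visible in each of thousands of sparse frames, estimate their positions, and compile all of those positions into one finely binned image. This is my illustration only, not the Institut Pasteur/CNRS pipeline (which fits each emitter far more carefully than a simple centre of mass), and the pixel size, bin size and threshold are placeholder values.

```python
import numpy as np
from scipy import ndimage

PIXEL_NM = 100.0   # assumed camera pixel size
SR_BIN_NM = 10.0   # assumed super-resolution bin size

def localize_frame(frame, threshold):
    """Return centre-of-mass positions (in nm) of the isolated bright spots in one frame."""
    labels, n = ndimage.label(frame > threshold)
    centers = ndimage.center_of_mass(frame, labels, range(1, n + 1))
    return [(y * PIXEL_NM, x * PIXEL_NM) for y, x in centers]

def reconstruct(frames, field_nm, threshold):
    """Compile per-frame localizations into a single high-resolution histogram image."""
    bins = int(field_nm / SR_BIN_NM)
    image = np.zeros((bins, bins))
    for frame in frames:
        for y, x in localize_frame(frame, threshold):
            iy, ix = int(y / SR_BIN_NM), int(x / SR_BIN_NM)
            if 0 <= iy < bins and 0 <= ix < bins:
                image[iy, ix] += 1   # each localization adds one count to the fine grid
    return image
```

Because each spot’s position can be estimated far more precisely than the width of the spot itself, the compiled image resolves structure well below the optical diffraction limit, which is where the ~30 nm figure comes from.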

The researchers have supplied an image illustrating the difference between what the conventional and new techniques allow them to view (from the May 16, 2012 press release [communiqué de presse] on the CNRS website),

© Institut Pasteur. Super-resolution optical reconstruction of HIV morphology. The lower image shows the average distribution of the integrase enzyme observed by FlAsH-PALM. The resolution of this technique (~30 nm) makes it possible to recover the size and conical shape of the capsid. For comparison, the resolution of conventional microscopy (~200-300 nm), illustrated by the upper image, does not allow a detailed description of this structure.

The conventional 200 – 300 nm resolution is shown at the top, while the new 30 nm resolution achieved by combining the two techniques is shown below. This new technique has already allowed scientists to disprove a popular theory about the AIDS virus, from the May 18, 2012 news item on Nanowerk,

This new method has helped researchers visualise the AIDS Virus and localise its capsids in human cells, at a scale of 30 nm. Capsids are conical structures which contain the HIV genome. These structures must dismantle in order for the viral genome to integrate itself into the host cell’s genome. However, the timing of this disassembly has long been debated. According to a prevailing view, capsids disassemble right after infection of the host cell and, therefore, do not play an important role in the intracellular transport of the virus to the host cell’s nucleus. However, the results obtained by the researchers of the Institut Pasteur and CNRS indicate that numerous capsids remain unaltered until entry of the virus into the nucleus, confirming and strengthening earlier studies based on electron microscopy. Hence, capsids could play a more important role than commonly assumed in the replication cycle of HIV.

I gather excitement about this development is high as the scientists are suggesting that ‘microscopy’ could be known as ‘nanoscopy’ in the future.

The quantum mechanics of photosynthesis

Thankfully, Jared Sagoff included a description of photosynthesis (I’ve long since forgotten the mechanics of the process) in his May 21, 2012 article, Scientists uncover a photosynthetic puzzle, on the US Dept. of Energy’s Argonne National Laboratory website. From Sagoff’s article, here’s the photosynthesis description along with a description of the quantum effect the scientists observed,

While different species of plants, algae and bacteria have evolved a variety of different mechanisms to harvest light energy, they all share a feature known as a photosynthetic reaction center. Pigments and proteins found in the reaction center help organisms perform the initial stage of energy conversion.

These pigment molecules, or chromophores, are responsible for absorbing the energy carried by incoming light. After a photon hits the cell, it excites one of the electrons inside the chromophore. As they observed the initial step of the process, Argonne scientists saw something no one had observed before: a single photon appeared to excite different chromophores simultaneously.
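For anyone who wants the standard quantum-mechanical shorthand for ‘exciting different chromophores simultaneously’ (my gloss, not the Argonne team’s notation): the absorbed photon is thought to create a coherent superposition of excitations shared across several pigments rather than a single excited molecule,

\[
|\psi(t)\rangle \;=\; \sum_i c_i(t)\,|i\rangle, \qquad \sum_i |c_i(t)|^2 = 1,
\]

where \(|i\rangle\) is the state with the excitation sitting on chromophore \(i\) and the coefficients \(c_i(t)\) evolve together until the energy reaches the reaction center.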

Here’s a gorgeous image of a leaf provided with the article,

I was aware that scientists are working hard at duplicating photosynthesis but, until reading the following excerpt from Sagoff’s article, I had not appreciated the dimensions of the problem,

The result of the study could significantly influence efforts by chemists and nanoscientists to create artificial materials and devices that can imitate natural photosynthetic systems. Researchers still have a long way to go before they will be able to create devices that match the light harvesting efficiency of a plant.

One reason for this shortcoming, Tiede [Argonne biochemist David Tiede] explained, is that artificial photosynthesis experiments have not been able to replicate the molecular matrix that contains the chromophores. “The level that we are at with artificial photosynthesis is that we can make the pigments and stick them together, but we cannot duplicate any of the external environment,” he said.  “The next step is to build in this framework, and then these kinds of quantum effects may become more apparent.”

Because the moment when the quantum effect occurs is so short-lived – less than a trillionth of a second – scientists will have a hard time ascertaining biological and physical rationales for their existence in the first place. [emphasis mine] “It makes us wonder if they are really just there by accident, or if they are telling us something subtle and unique about these materials,” Tiede said. “Whatever the case, we’re getting at the fundamentals of the first step of energy conversion in photosynthesis.”

Thanks to Nanowerk for the May 24, 2012 news item which drew this article to my attention.

Where do all those particles go or what does degradable mean at the nanoscale?

Scientists at Switzerland’s ETH Zurich (Swiss Federal Institute of Technology Zurich) note that cerium oxide nanoparticles do not degrade. From the May 21, 2012 article by Simone Ulmer on the ETH Zurich website,

Tiny particles of cerium oxide do not burn or change in the heat of a waste incineration plant. They remain intact on combustion residues or in the incineration system, as a new study by researchers from ETH Zurich reveals.

Over 100 million tons of waste are incinerated worldwide every year. Due to the increasing use of nanoparticles in construction materials, paints, textiles and cosmetics, for instance, nanoparticles also find their way into incineration plants. What happens to them there, however, had not been investigated until now. Three ETH-Zurich teams from fields of chemistry and environmental engineering thus set about finding out what happens to synthetic nano-cerium oxide during the incineration of refuse in a waste incineration plant. Cerium oxide itself is a non-toxic ceramic material, not biologically degradable and a common basic component in automobile catalytic converters and diesel soot filters.

Here’s their reasoning (from Ulmer’s article),

Experts fear that non-degradable nanomaterials might be just as harmful for humans and the environment as asbestos. As yet, however, not enough is known about the properties of nanomaterials (see ETH Life, 25 March 2010). One thing is for sure: they differ greatly from larger particles of the same material. Nanoparticles are more mobile and have a different surface structure. Knowledge of these properties is important with the increasing use of nanomaterials, as they are transferred through incineration plants or sewage, and as they are absorbed by people in food (see ETH Life, 15 July 2008) and perhaps even through the skin and respiration, and can thus enter the body. [emphases mine]

Recent research suggests that there are many, many naturally occurring nanoparticles which we and other living beings have been innocently ingesting for millennia, as noted in my Feb. 9, 2012 posting and my Nov. 24, 2011 posting. More recently, Dr. Andrew Maynard at his 2020 Science blog posted about carbon nanoparticles, which are ubiquitous. From Andrew’s May 19, 2012 posting,

This latest paper was published in the journal Science Progress a few weeks ago, and analyzes the carbon nanoparticle content of such everyday foods as bread, caramelized sugar, corn flakes and biscuits.  The authors found that products containing caramelized sugar – including baked goods such as bread – contained spherical carbon nanoparticles in the range 4 – 30 nm (with size being associated with the temperature of caramelization).

Getting back to the cerium oxide project, here’s what the Swiss scientists found (from Ulmer’s article),

The researchers’ tests revealed that cerium oxide does not change significantly during incineration. The fly-ash separation devices proved extremely efficient: the scientists did not find any leaked cerium oxide nanoparticles in the waste incineration plant’s clean gas. That said, the nanoparticles remained loosely bound to the combustion residues in the plant and partially in the incineration system, too. The fly ash separated from the flue gas also contained cerium oxide nanoparticles.

Nowadays, combustion residues – and thus the nanoparticles bound to them – end up on landfills or are reprocessed to extract copper or aluminium, for instance. The researchers see a need for action here. “We have to make sure that new nanoparticles don’t get into the water and food cycle via landfills or released into the atmosphere through further processing measures,” says Wendelin Stark, head of the study and a professor of chemical engineering at ETH Zurich. Moreover, the fact that nanoparticles that could be inhaled if inadequate protection is worn might be present in the incineration system needs to be taken into consideration during maintenance work.

I have a couple of questions for the researchers. First, is nanoscale cerium oxide dangerous, and do you have any studies? Second, does anything ever degrade? As I recall (dimly), matter cannot be destroyed. Are they trying to break down the nanoscale cerium oxide to a smaller scale? And what would the impact be then?

All in all, this is very interesting research to me as it has raised some questions in a way I had not previously considered. Thanks to Nanowerk where I found the May 24, 2012 news item that alerted me to the article.

Pull me in—a tractor beam in Singapore

Who hasn’t wanted a tractor beam at one time or another? The notion that beaming a ray of light at something would allow you to bring it closer is very appealing. And, if you’re willing to settle for a particle, you could have a tractor beam in the near future, according to scientists in Singapore. From the May 23, 2012 news item on Nanowerk,

Tractor beams are a well-known concept in science fiction. These rays of light are often shown pulling objects towards an observer, seemingly violating the laws of physics, and of course, such beams have yet to be realised in the real world. Haifeng Wang at the A*STAR Data Storage Institute and co-workers have now demonstrated how a tractor beam can in fact be realized on a small scale (see paper in Physical Review Letters: “Single Gradientless Light Beam Drags Particles as Tractor Beams” [behind a paywall]). “Our work demonstrates a tractor beam based only on a single laser to pull or push an object of interest toward the light source,” says Wang.

Coming up in the description of just how Wang’s tractor beam works is my second reference to Albert Einstein today (in the earlier May 23, 2012 posting: Teaching physics visually), from the news item on Nanowerk,

Based on pioneering work by Albert Einstein and Max Planck more than a hundred years ago, it is known that light carries momentum that pushes objects away. In addition, the intensity that varies across a laser beam can be used to push objects sideways, and for example can be used to move cells in biotechnology applications. Pulling an object towards an observer, however, has so far proven to be elusive. In 2011, researchers theoretically demonstrated a mechanism where light movement can be controlled using two opposing light beams — though technically, this differs from the idea behind a tractor beam.

Wang and co-workers have now studied the properties of lasers with a particular type of distribution of light intensity across the beam, or so-called Bessel beams. Usually, if a laser beam hits a small particle in its path, the light is scattered backwards, which in turn pushes the particle forward. What Wang and co-workers have now shown theoretically for Bessel beams is that for particles that are sufficiently small, the light scatters off the particle in a forward direction, meaning that the particle itself is pulled backwards towards the observer. In other words, the behaviour of the particle is the direct opposite of the usual scenario. The size of the tractor beam force depends on parameters such as the electrical and magnetic properties of the particles.
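As I understand the momentum bookkeeping behind that explanation (my summary, not the authors’ derivation): in a Bessel beam the photons travel at a cone angle \(\theta_0\) to the beam axis, so each carries axial momentum \(\hbar k \cos\theta_0\). If a sufficiently small particle scatters them predominantly straight forward, each scattered photon leaves with axial momentum close to \(\hbar k\), and conservation of momentum pushes the particle the other way,

\[
F_z \;\propto\; \cos\theta_0 \;-\; \langle\cos\theta_s\rangle,
\]

which is negative, i.e. a pull toward the light source, whenever the average scattering direction satisfies \(\langle\cos\theta_s\rangle > \cos\theta_0\).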

There aren’t too many real-life applications for a tractor beam of limited power, but the lead scientist, Wang, does suggest it could be helpful in diagnosing malaria at the cellular level.

Teaching physics visually

Art/science news is usually about a scientist using their own art or collaborating with an artist to produce pieces that engage the public. This particular May 23, 2012 news item by Andrea Estrada on the physorg.com website offers a contrast when it highlights a teaching technique integrating visual arts with physics for physics students,

Based on research she conducted for her doctoral dissertation several years ago, Jatila van der Veen, a lecturer in the College of Creative Studies at UC [University of  California] Santa Barbara and a research associate in UC Santa Barbara’s physics department, created a new approach to introductory physics, which she calls “Noether before Newton.” Noether refers to the early 20th-century German mathematician Emmy Noether, who was known for her groundbreaking contributions to abstract algebra and theoretical physics.

Using arts-based teaching strategies, van der Veen has fashioned her course into a portal through which students not otherwise inclined might take the leap into the sciences — particularly physics and mathematics. Her research appears in the current issue of the American Educational Research Journal, in a paper titled “Draw Your Physics Homework? Art as a Path to Understanding in Physics Teaching.”

The May 22, 2012 press release on the UC Santa Barbara website provides this detail about van der Veen’s course,

While traditional introductory physics courses focus on 17th-century Newtonian mechanics, van der Veen takes a contemporary approach. “I start with symmetry and contemporary physics,” she said. “Symmetry is the underlying mathematical principle of all physics, so this allows for several different branches of inclusion, of accessibility.”

Much of van der Veen’s course is based on the principles of “aesthetic education,” an approach to teaching formulated by the educational philosopher Maxine Greene. Greene founded the Lincoln Center Institute, a joint effort of Teachers College, Columbia University, and Lincoln Center. Van der Veen is quick to point out, however, that concepts of physics are at the core of her course. “It’s not simply looking at art that’s involved in physics, or looking at beautiful pictures of galaxies, or making fractal art,” she said. “It’s using the learning modes that are available in the arts and applying them to math and physics.”

Taking a visual approach to the study of physics is not all that far-fetched. “If you read some of Albert Einstein’s writings, you’ll see they’re very visual,” van der Veen said. “And in some of his writings, he talks about how visualization played an important part in the development of his theories.”

Van der Veen has taught her introductory physics course for five years, and over that time has collected data from one particular homework assignment she gives her students: She asks them to read an article by Einstein on the nature of science, and then draw their understanding of it. “I found over the years that no one ever produced the same drawing from the same article,” she said. “I also found that some students think very concretely in words, some think concretely in symbols, some think allegorically, and some think metaphorically.”

Adopting arts-based teaching strategies does not make van der Veen’s course any less rigorous than traditional introductory courses in terms of the abstract concepts students are required to master. It creates a different, more inclusive way of achieving the same end.

I went to look at van der Veen’s webpage on the UC Santa Barbara website to find a link to this latest article (open access) of hers and some of her other projects. I have taken a brief look at the Draw Your Physics Homework? article (it is 53 pp.) and found these images on p. 29 (PDF) illustrating her approach,

Figure 5. Abstract-representational drawings. 5a (left): female math major, first year; 5b (right): male math major, third year. Used with permission. (downloaded from the American Educational Research Journal, vol. 49, April 2012)

Van der Veen offers some context on the page preceding the image, p. 28,

Two other examples of abstract-representational drawings are shown in Figure 5. I do not have written descriptions, but in each case I determined that each student understood the article by means of verbal explanation. Figure 5a was drawn by a first-year math major, female, in 2010. She explained the meaning of her drawing as representing Einstein’s layers from sensory input (shaded ball at the bottom), to secondary layer of concepts, represented by the two open circles, and finally up to the third level, which explains everything below with a unified theory. The dashes surrounding the perimeter, she told me, represent the limit of our present knowledge. Figure 5b was drawn by a third-year male math major. He explained that the brick-like objects in the foreground are sensory perceptions, and the shaded portion in the center of the drawing, which appears behind the bricks, is the theoretical explanation which unifies all the experiences.

I find the reference to Einstein and visualization compelling in light of the increased interest (as I perceive it) in visualization currently occurring in the sciences.

Brains in the US Congress

Tomorrow, May 24, 2012, Jean Paul Allain, associate professor of nuclear engineering at Purdue University (Indiana), will be speaking to members of the US Congress about repairing brain injuries using nanotechnology-enabled bioactive coatings for stents. From the May 21, 2012 news item on Nanowerk,

“Stents coated with a bioactive coating might be inserted at the site of an aneurism to help heal the inside lining of the blood vessel,” said Jean Paul Allain, an associate professor of nuclear engineering. “Aneurisms are saclike bulges in blood vessels caused by weakening of artery walls. We’re talking about using a regenerative approach, attracting cells to reconstruct the arterial wall.”

He will speak before Congress on Thursday (May 24) during the first Brain Mapping Day to discuss the promise of nanotechnology in treating brain injury and disease.

The May 21, 2012 news release (by Emil Venere) for Purdue University offers insight into some of the difficulties of dealing with aneurysms using today’s technologies,

Currently, aneurisms are treated either by performing brain surgery, opening the skull and clipping the sac, or by inserting a catheter through an artery into the brain and implanting a metallic coil into the balloon-like sac.

Both procedures risk major complications, including massive bleeding or the formation of potentially fatal blood clots.

“The survival rate is about 50/50 or worse, and those who do survive could be impaired,” said Allain, who holds a courtesy appointment with materials engineering and is affiliated with the Birck Nanotechnology Center in Purdue’s Discovery Park.

Allain goes on to explain how his team’s research addresses these issues (from the May 21, 2012 Purdue University news release),

Cells needed to repair blood vessels are influenced by both the surface texture – features such as bumps and irregular shapes as tiny as 10 nanometers wide – as well as the surface chemistry of the stent materials.

“We are learning how to regulate cell proliferation and growth by tailoring both the function of surface chemistry and topology,” Allain said. “There is correlation between surface chemistry and how cells send signals back and forth for proliferation. So the surface needs to be tailored to promote regenerative healing.”

The facility being used to irradiate the stents – the Radiation Surface Science and Engineering Laboratory in Purdue’s School of Nuclear Engineering – also is used for work aimed at developing linings for experimental nuclear fusion reactors for power generation.

Irradiating materials with the ion beams causes surface features to “self-organize” and also influences the surface chemistry, Allain said.

The stents are made of nonmagnetic materials, such as stainless steel and an alloy of nickel and titanium. Only a certain part of the stents is rendered magnetic to precisely direct the proliferation of cells to repair a blood vessel where it begins bulging to form the aneurism.

Researchers will study the stents using blood from pigs during the first phase in collaboration with the Walter Reed National Military Medical Center.

The stent coating’s surface is “functionalized” so that it interacts properly with the blood-vessel tissue. Some of the cells are magnetic naturally, and “magnetic nanoparticles” would be injected into the bloodstream to speed tissue regeneration. Researchers also are aiming to engineer the stents so that they show up in medical imaging to reveal how the coatings hold up in the bloodstream.

The research is led by Allain and co-principal investigator Lisa Reece of the Birck Nanotechnology Center. This effort has spawned new collaborations with researchers around the world, including those at the Universidad de Antioquía and the University of Queensland. The research also involves doctoral students Ravi Kempaiah and Emily Walker.

The work is funded with a three-year, $1.5 million grant from the U.S. Army.

As I find the international flavour to the pursuit of science quite engaging, I want to highlight this bit in the May 21, 2012 news item on Nanowerk which mentions a few other collaborators on this project,

Purdue researchers are working with Col. Rocco Armonda, Dr. Teodoro Tigno and other neurosurgeons at Walter Reed National Military Medical Center in Bethesda, Md. Collaborations also are planned with research scientists from the University of Queensland in Australia, Universidad de Antioquía and Universidad de Los Andes, both in Colombia.

The US Congress is not the only place to hear about this work; Allain will also be speaking in Toronto at the 9th Annual World Congress of the Society for Brain Mapping & Therapeutics (SBMT), being held June 2 – 4, 2012.