Tag Archives: Cornell University

Using open-source software for a 3D look at nanomaterials

A 3-D view of a hyperbranched nanoparticle with complex structure, made possible by Tomviz 1.0, a new open-source software platform developed by researchers at the University of Michigan, Cornell University and Kitware Inc. Image credit: Robert Hovden, Michigan Engineering

An April 3, 2017 news item on ScienceDaily describes this new and freely available software,

Now it’s possible for anyone to see and share 3-D nanoscale imagery with a new open-source software platform developed by researchers at the University of Michigan, Cornell University and open-source software company Kitware Inc.

Tomviz 1.0 is the first open-source tool that enables researchers to easily create 3-D images from electron tomography data, then share and manipulate those images in a single platform.

A March 31, 2017 University of Michigan news release, which originated the news item, expands on the theme,

The world of nanoscale materials—things 100 nanometers and smaller—is an important place for scientists and engineers who are designing the stuff of the future: semiconductors, metal alloys and other advanced materials.

Seeing in 3-D how nanoscale flecks of platinum arrange themselves in a car’s catalytic converter, for example, or how spiky dendrites can cause short circuits inside lithium-ion batteries, could spur advances like safer, longer-lasting batteries; lighter, more fuel efficient cars; and more powerful computers.

“3-D nanoscale imagery is useful in a variety of fields, including the auto industry, semiconductors and even geology,” said Robert Hovden, U-M assistant professor of materials science and engineering and one of the creators of the program. “Now you don’t have to be a tomography expert to work with these images in a meaningful way.”

Tomviz solves a key challenge: the difficulty of interpreting data from the electron microscopes that examine nanoscale objects in 3-D. The machines shoot electron beams through nanoparticles from different angles. The beams form projections as they travel through the object, a bit like nanoscale shadow puppets.

Once the machine does its work, it’s up to researchers to piece hundreds of shadows into a single three-dimensional image. It’s as difficult as it sounds—an art as well as a science. Like staining a traditional microscope slide, researchers often add shading or color to 3-D images to highlight certain attributes.
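To give you a sense of what ‘piecing hundreds of shadows into a single three-dimensional image’ means computationally, here’s a minimal sketch of tomographic reconstruction using the open-source scikit-image library. It illustrates the general principle only; it is not Tomviz’s actual pipeline, and the standard test image stands in for real electron tomography data,

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon

# Stand-in for a 2-D slice of a nanoparticle (a standard test image).
image = shepp_logan_phantom()

# Simulate the microscope: record a projection ("shadow") at each tilt angle.
angles = np.linspace(0.0, 180.0, 60, endpoint=False)
sinogram = radon(image, theta=angles)

# Piece the shadows back together with filtered back-projection.
reconstruction = iradon(sinogram, theta=angles)

rms_error = np.sqrt(np.mean((reconstruction - image) ** 2))
print(f"RMS reconstruction error: {rms_error:.3f}")
```

Real electron tomography is messier (limited tilt ranges, noise, alignment problems), which is part of why a dedicated, shareable tool like Tomviz matters.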

A 3-D view of a particle used in a hydrogen fuel cell powered vehicle. The gray structure is carbon; the red and blue particles are nanoscale flecks of platinum. The image is made possible by Tomviz 1.0. Image credit: Elliot Padget, Cornell University

Traditionally, researchers have had to rely on a hodgepodge of proprietary software to do the heavy lifting. The work is expensive and time-consuming; so much so that even big companies like automakers struggle with it. And once a 3-D image is created, it’s often impossible for other researchers to reproduce it or to share it with others.

Tomviz dramatically simplifies the process and reduces the amount of time and computing power needed to make it happen, its designers say. It also enables researchers to readily collaborate by sharing all the steps that went into creating a given image and enabling them to make tweaks of their own.

“These images are far different from the 3-D graphics you’d see at a movie theater, which are essentially cleverly lit surfaces,” Hovden said. “Tomviz explores both the surface and the interior of a nanoscale object, with detailed information about its density and structure. In some cases, we can see individual atoms.”

Key to making Tomviz happen was getting tomography experts and software developers together to collaborate, Hovden said. Their first challenge was gaining access to a large volume of high-quality tomography data. The team rallied experts at Cornell, Berkeley Lab and UCLA to contribute their data, and also created their own using U-M’s microscopy center. To turn raw data into code, Hovden’s team worked with open-source software maker Kitware.

With the release of Tomviz 1.0, Hovden is looking toward the next stages of the project, where he hopes to integrate the software directly with microscopes. He believes that U-M’s atom probe tomography facilities and expertise could help him design a version that could ultimately uncover the chemistry of all atoms in 3-D.

“We are unlocking access to see new 3D nanomaterials that will power the next generation of technology,” Hovden said. “I’m very interested in pushing the boundaries of understanding materials in 3-D.”

There is also a video about Tomviz.

You can download Tomviz from here and you can find Kitware here. Happy 3D nanomaterial viewing!

Tree-on-a-chip

It’s usually organ-on-a-chip or lab-on-a-chip or human-on-a-chip; this is my first tree-on-a-chip.

Engineers have designed a microfluidic device they call a “tree-on-a-chip,” which mimics the pumping mechanism of trees and other plants. Courtesy: MIT

From a March 20, 2017 news item on phys.org,

Trees and other plants, from towering redwoods to diminutive daisies, are nature’s hydraulic pumps. They are constantly pulling water up from their roots to the topmost leaves, and pumping sugars produced by their leaves back down to the roots. This constant stream of nutrients is shuttled through a system of tissues called xylem and phloem, which are packed together in woody, parallel conduits.

Now engineers at MIT [Massachusetts Institute of Technology] and their collaborators have designed a microfluidic device they call a “tree-on-a-chip,” which mimics the pumping mechanism of trees and plants. Like its natural counterparts, the chip operates passively, requiring no moving parts or external pumps. It is able to pump water and sugars through the chip at a steady flow rate for several days. The results are published this week in Nature Plants.

A March 20, 2017 MIT news release by Jennifer Chu, which originated the news item, describes the work in more detail,

Anette “Peko” Hosoi, professor and associate department head for operations in MIT’s Department of Mechanical Engineering, says the chip’s passive pumping may be leveraged as a simple hydraulic actuator for small robots. Engineers have found it difficult and expensive to make tiny, movable parts and pumps to power complex movements in small robots. The team’s new pumping mechanism may enable robots whose motions are propelled by inexpensive, sugar-powered pumps.

“The goal of this work is cheap complexity, like one sees in nature,” Hosoi says. “It’s easy to add another leaf or xylem channel in a tree. In small robotics, everything is hard, from manufacturing, to integration, to actuation. If we could make the building blocks that enable cheap complexity, that would be super exciting. I think these [microfluidic pumps] are a step in that direction.”

Hosoi’s co-authors on the paper are lead author Jean Comtet, a former graduate student in MIT’s Department of Mechanical Engineering; Kaare Jensen of the Technical University of Denmark; and Robert Turgeon and Abraham Stroock, both of Cornell University.

A hydraulic lift

The group’s tree-inspired work grew out of a project on hydraulic robots powered by pumping fluids. Hosoi was interested in designing hydraulic robots at the small scale that could perform actions similar to much bigger robots like Boston Dynamics’ BigDog, a four-legged, Saint Bernard-sized robot that runs and jumps over rough terrain, powered by hydraulic actuators.

“For small systems, it’s often expensive to manufacture tiny moving pieces,” Hosoi says. “So we thought, ‘What if we could make a small-scale hydraulic system that could generate large pressures, with no moving parts?’ And then we asked, ‘Does anything do this in nature?’ It turns out that trees do.”

The general understanding among biologists has been that water, propelled by surface tension, travels up a tree’s channels of xylem, then diffuses through a semipermeable membrane and down into channels of phloem that contain sugar and other nutrients.

The more sugar there is in the phloem, the more water flows from xylem to phloem to balance out the sugar-to-water gradient, in a passive process known as osmosis. The resulting water flow flushes nutrients down to the roots. Trees and plants are thought to maintain this pumping process as more water is drawn up from their roots.
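For a rough sense of the pressures osmosis can generate, here’s a back-of-the-envelope calculation using the textbook van ’t Hoff relation (Π = cRT); the sugar concentration is my own illustrative assumption, not a number from the paper,

```python
# Back-of-the-envelope osmotic pressure from the van 't Hoff relation: pi = c * R * T
R = 8.314      # gas constant, J/(mol*K)
T = 298.0      # room temperature, K
c = 1000.0     # assumed sugar concentration, mol/m^3 (i.e., 1 mol/L)

pi = c * R * T  # osmotic pressure, Pa
print(f"{pi / 1e6:.2f} MPa, or about {pi / 101325:.0f} atm")
# ~2.48 MPa (~24 atm): substantial pressure from a completely passive process
```

That, presumably, is the appeal for small robots: large pressures with no moving parts.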

“This simple model of xylem and phloem has been well-known for decades,” Hosoi says. “From a qualitative point of view, this makes sense. But when you actually run the numbers, you realize this simple model does not allow for steady flow.”

In fact, engineers have previously attempted to design tree-inspired microfluidic pumps, fabricating parts that mimic xylem and phloem. But they found that these designs quickly stopped pumping within minutes.

It was Hosoi’s student Comtet who identified a third essential part to a tree’s pumping system: its leaves, which produce sugars through photosynthesis. Comtet’s model includes this additional source of sugars that diffuse from the leaves into a plant’s phloem, increasing the sugar-to-water gradient, which in turn maintains a constant osmotic pressure, circulating water and nutrients continuously throughout a tree.

Running on sugar

With Comtet’s hypothesis in mind, Hosoi and her team designed their tree-on-a-chip, a microfluidic pump that mimics a tree’s xylem, phloem, and most importantly, its sugar-producing leaves.

To make the chip, the researchers sandwiched together two plastic slides, through which they drilled small channels to represent xylem and phloem. They filled the xylem channel with water, and the phloem channel with water and sugar, then separated the two slides with a semipermeable material to mimic the membrane between xylem and phloem. They placed another membrane over the slide containing the phloem channel, and set a sugar cube on top to represent the additional source of sugar diffusing from a tree’s leaves into the phloem. They hooked the chip up to a tube, which fed water from a tank into the chip.

With this simple setup, the chip was able to passively pump water from the tank through the chip and out into a beaker, at a constant flow rate for several days, as opposed to previous designs that only pumped for several minutes.

“As soon as we put this sugar source in, we had it running for days at a steady state,” Hosoi says. “That’s exactly what we need. We want a device we can actually put in a robot.”

Hosoi envisions that the tree-on-a-chip pump may be built into a small robot to produce hydraulically powered motions, without requiring active pumps or parts.

“If you design your robot in a smart way, you could absolutely stick a sugar cube on it and let it go,” Hosoi says.

This research was supported, in part, by the Defense Advanced Research Projects Agency [DARPA].

This research’s funding connection to DARPA reminded me that MIT has an Institute of Soldier Nanotechnologies.

Getting back to the tree-on-a-chip, here’s a link to and a citation for the paper,

Passive phloem loading and long-distance transport in a synthetic tree-on-a-chip by Jean Comtet, Kaare H. Jensen, Robert Turgeon, Abraham D. Stroock & A. E. Hosoi. Nature Plants 3, Article number: 17032 (2017) doi:10.1038/nplants.2017.32 Published online: 20 March 2017

This paper is behind a paywall.

Textiles that clean pollution from air and water

I once read that you could tell what colour would be in style by looking at the river in Milan (Italy). It may or may not still be true in Milan, but the practice of using rivers to dump the fashion industry’s wastewater seems to be current in at least some parts of the world, according to a Nov. 10, 2016 news item on Nanowerk featuring Juan Hinestroza’s work on textiles that clean pollution,

A stark and troubling reality helped spur Juan Hinestroza to what he hopes is an important discovery and a step toward cleaner manufacturing.

Hinestroza, associate professor of fiber science and director of undergraduate studies in the College of Human Ecology [Cornell University], has been to several manufacturing facilities around the globe, and he says that there are some areas of the planet in which he could identify what color is in fashion in New York or Paris by simply looking at the color of a nearby river.

“I saw it with my own eyes; it’s very sad,” he said.

Some of these overseas facilities are dumping waste products from textile dying and other processes directly into the air and waterways, making no attempt to mitigate their product’s effect on the environment.

“There are companies that make a great effort to make things in a clean and responsible manner,” he said, “but there are others that don’t.”

Hinestroza is hopeful that a technique developed at Cornell in conjunction with former Cornell chemistry professor Will Dichtel will help industry clean up its act. The group has shown the ability to infuse cotton with a beta-cyclodextrin (BCD) polymer, which acts as a filtration device that works in both water and air.

A Nov. 10, 2016 Cornell University news release by Tom Fleischman provides more detail about the research,

Cotton fabric was functionalized by making it a participant in the polymerization process. The addition of the fiber to the reaction resulted in a unique polymer grafted to the cotton surface.

“One of the limitations of some super-absorbents is that you need to be able to put them into a substrate that can be easily manufactured,” Hinestroza said. “Fibers are perfect for that – fibers are everywhere.”

Scanning electron microscopy showed that the cotton fibers appeared unchanged after the polymerization reaction. And when tested for uptake of pollutants in water (bisphenol A) and air (styrene), the polymerized fibers showed orders-of-magnitude greater uptake than untreated cotton fabric or commercial absorbents.

Hinestroza pointed to several positives that should make this functionalized fabric technology attractive to industry.

“We’re compatible with existing textile machinery – you wouldn’t have to do a lot of retooling,” he said. “It works on both air and water, and we proved that we can remove the compounds and reuse the fiber over and over again.”

Hinestroza said the adsorption potential of this patent-pending technique could extend to other materials, and be used for respirator masks and filtration media, explosive detection and even food packaging that would detect when the product has gone bad.

And, of course, he hopes it can play a role in cleaner, more environmentally responsible industrial practices.

“There’s a lot of pollution generation in the manufacture of textiles,” he said. “It’s just fair that we should maybe use the same textiles to clean the mess that we make.”

Here’s a link to and a citation for the paper,

Cotton Fabric Functionalized with a β-Cyclodextrin Polymer Captures Organic Pollutants from Contaminated Air and Water by Diego M. Alzate-Sánchez, Brian J. Smith, Alaaeddin Alsbaiee, Juan P. Hinestroza, and William R. Dichtel. Chem. Mater., Article ASAP DOI: 10.1021/acs.chemmater.6b03624 Publication Date (Web): October 24, 2016

Copyright © 2016 American Chemical Society

This paper is open access.

One comment: I’m not sure how this solution will benefit the rivers unless the idea is that textile manufacturers will filter their wastewater through this new fabric.

There is another researcher working on creating textiles that remove air pollution, Tony Ryan at the University of Sheffield (UK). My latest piece about his (and Helen Storey’s) work is a July 28, 2014 posting featuring a detergent that deposits pollution-scrubbing nanoparticles onto fabric. At the time, China was showing serious interest in the product.

The dangers of metaphors when applied to science

Metaphors can be powerful in both good ways and bad. I once read that a ‘lighthouse’ metaphor used to explain a scientific concept to high school students later caused problems for them when they were studying the biological sciences as university students. It seems there’s research now to back up the assertion about metaphors and their powers. From an Oct. 7, 2016 news item on phys.org,

Whether ideas are “like a light bulb” or come forth as “nurtured seeds,” how we describe discovery shapes people’s perceptions of both inventions and inventors. Notably, Kristen Elmore (Bronfenbrenner Center for Translational Research at Cornell University) and Myra Luna-Lucero (Teachers College, Columbia University) have shown that discovery metaphors influence our perceptions of the quality of an idea and of the ability of the idea’s creator. The research appears in the journal Social Psychological and Personality Science.

While the metaphor that ideas appear “like light bulbs” is popular and appealing, new research shows that discovery metaphors influence our understanding of the scientific process and perceptions of the ability of inventors based on their gender. [downloaded from http://www.spsp.org/news-center/press-release/metaphors-bias-perception]

An Oct. 7, 2016 Society for Personality and Social Psychology news release (also on EurekAlert), which originated the news item, provides more insight into the work,

While those involved in research know there are many trials and errors and years of work before something is understood, discovered or invented, our use of words for inspiration may have an unintended and underappreciated effect of portraying good ideas as a sudden and exceptional occurrence.

In a series of experiments, Elmore and Luna-Lucero tested how people responded to ideas that were described as being “like a light bulb,” “nurtured like a seed,” or in neutral terms.

According to the authors, the “light bulb metaphor implies that ‘brilliant’ ideas result from sudden and spontaneous inspiration, bestowed upon a chosen few (geniuses) while the seed metaphor implies that ideas are nurtured over time, ‘cultivated’ by anyone willing to invest effort.”

The first study looked at how people reacted to a description of Alan Turing’s invention of a precursor to the modern computer. It turns out light bulbs are more remarkable than seeds.

“We found that an idea was seen as more exceptional when described as appearing like a light bulb rather than nurtured like a seed,” said Elmore.

But this pattern changed when they used these metaphors to describe a female inventor’s ideas. When using the “like a light bulb” and “nurtured seed” metaphors, the researchers found “women were judged as better idea creators than men when ideas were described as nurtured over time like seeds.”

The results suggest gender stereotypes play a role in how people perceived the inventors.

In the third study, the researchers presented participants with descriptions of the work of either a female (Hedy Lamarr) or a male (George Antheil) inventor, who together created the idea for spread-spectrum technology (a precursor to modern wireless communications). Indeed, the seed metaphor “increased perceptions that a female inventor was a genius, while the light bulb metaphor was more consistent with stereotypical views of male genius,” stated Elmore.

Elmore plans to expand upon their research on metaphors by examining the interactions of teachers and students in real world classroom settings.

“The ways that teachers and students talk about ideas may impact students’ beliefs about how good ideas are created and who is likely to have them,” said Elmore. “Having good ideas is relevant across subjects—whether students are creating a hypothesis in science or generating a thesis for their English paper—and language that stresses the role of effort rather than inspiration in creating ideas may have real benefits for students’ motivation.”

Here’s a link to and a citation for the paper,

Light Bulbs or Seeds? How Metaphors for Ideas Influence Judgments About Genius by Kristen C. Elmore and Myra Luna-Lucero. Social Psychological and Personality Science doi: 10.1177/1948550616667611 Published online before print October 7, 2016

This paper is behind a paywall.

While Elmore and Luna-Lucero are focused on a nuanced analysis of specific metaphors, Richard Holmes’s book, ‘The Age of Wonder: How the Romantic Generation Discovered the Beauty and Terror of Science’, notes that the ‘Eureka’ (light bulb) moment for scientific discovery and the notion of a ‘single great man’ (a singular genius) as the discoverer have their roots in Romantic (Shelley, Keats, etc.) poetry.

arXiv, which helped kick off the open access movement, contemplates its future

arXiv is hosted by Cornell University and houses over a million scientific papers that anyone can access. Here’s more from a July 22, 2016 news item on phys.org,

As the arXiv repository of scientific papers celebrates its 25th year as one of the scientific community’s most important means of communication, the site’s leadership is looking ahead to ensure it remains indispensable, robust and financially sustainable.

A July 21, 2016 Cornell University news release by Bill Steele, which originated the news item, provides more information about future plans and a brief history of the repository (Note: Links have been removed),

Changes and improvements are in store, many in response to suggestions received in a survey of nearly 37,000 users whose primary requests were for a more robust search engine and better facilities to share supplementary material, such as slides or code, that often accompanies scientific papers.

But even more important is to upgrade the underlying architecture of the system, much of it based on “old code,” said Oya Rieger, associate university librarian for digital scholarship and preservation services, who serves as arXiv’s program director. “We have to create a work plan to ensure that arXiv will serve for another 25 years,” she said. That will require recruiting additional programmers and finding additional sources of funding, she added.

The improvements will not change the site’s essential format or its core mission of free and open dissemination of the latest scientific research, Rieger said.

arXiv was created in 1991 by Paul Ginsparg, professor of physics and information science, when he was working at Los Alamos National Laboratory. It was then common practice for researchers to circulate “pre-prints” of their papers so that colleagues could have the advantage of knowing about their research in advance of publication in scientific journals. Ginsparg launched a service (originally running from a computer under his desk) to make the papers instantly available online.

Ginsparg brought the arXiv with him from Los Alamos when he joined the Cornell faculty in 2001. Since then, it has been managed by Cornell University Library, with Ginsparg as a member of its scientific advisory board.

In 2015, arXiv celebrated its millionth submission and saw 139 million downloads in that year alone.

Nearly 95 percent of respondents to the survey said they were satisfied with arXiv, many saying that rapid access to research results had made a difference in their careers, and applauding it as an advance in open access.

“We were amazed and heartened by the outpouring of responses representing users from a variety of countries, age groups and career stages. Their insight will help us as we refine a compelling and coherent vision for arXiv’s future,” Rieger said. “We’re continuing to explore current and emerging user needs and priorities. We hope to secure funding to revamp the service’s infrastructure and ensure that it will continue to serve as an important scientific venue for facilitating rapid dissemination of papers, which is arXiv’s core goal.”

Though some users suggested new or additional features, a majority of respondents emphasized that the clean, unencumbered nature of the site makes its use easy and efficient. “I sincerely wish academic journals could try to emulate the cleanness, convenience and user-friendly nature of the arXiv, and I hope the future of academic publishing looks more like what we’ve been able to enjoy in the arXiv,” one user wrote.

arXiv is supported by a global collective of nearly 200 libraries in 24 countries, and an ongoing grant from the Simons Foundation. In 2012, the site adopted a new funding model, in which it is collaboratively governed and supported by the research communities and institutions that benefit from it most directly.

Having a bee in my bonnet about overproduced websites (MIT [Massachusetts Institute of Technology], I’m looking at you), I can’t help but applaud this user and, of course, arXiv, “I sincerely wish academic journals could try to emulate the cleanness, convenience and user-friendly nature of the arXiv, and I hope the future of academic publishing looks more like what we’ve been able to enjoy in the arXiv, …”

For anyone interested in arXiv plans, there’s the arXiv Review Strategy here on Cornell University’s Confluence website.
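One more note for the programmatically inclined: arXiv also offers a public query API that returns results as an Atom feed. Here’s a minimal sketch using the documented endpoint and parameter names; the search term is just an example,

```python
import urllib.parse
import urllib.request

# Query arXiv's public API; results come back as an Atom XML feed.
params = urllib.parse.urlencode({
    "search_query": "all:electron tomography",
    "start": 0,
    "max_results": 3,
})
url = f"http://export.arxiv.org/api/query?{params}"
with urllib.request.urlopen(url) as response:
    feed = response.read().decode("utf-8")

print(feed[:500])  # each <entry> carries a title, authors, abstract and PDF link
```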

Cornell University researchers breach blood-brain barrier

There are other teams working on ways to breach the blood-brain barrier (my March 26, 2015 post highlights work from a team at the University of Montréal) but this team from Cornell is working with a drug that has already been approved by the US Food and Drug Administration (FDA), according to an April 8, 2016 news item on ScienceDaily,

Cornell researchers have discovered a way to penetrate the blood-brain barrier (BBB) that may soon permit delivery of drugs directly into the brain to treat disorders such as Alzheimer’s disease and chemotherapy-resistant cancers.

The BBB is a layer of endothelial cells that selectively allow entry of molecules needed for brain function, such as amino acids, oxygen, glucose and water, while keeping others out.

Cornell researchers report that an FDA-approved drug called Lexiscan activates receptors — called adenosine receptors — that are expressed on these BBB cells.

An April 4, 2016 Cornell University news release by Krishna Ramanujan, which originated the news item, expands on the theme,

“We can open the BBB for a brief window of time, long enough to deliver therapies to the brain, but not too long so as to harm the brain. We hope in the future, this will be used to treat many types of neurological disorders,” said Margaret Bynoe, associate professor in the Department of Microbiology and Immunology in Cornell’s College of Veterinary Medicine. …

The researchers were able to deliver chemotherapy drugs into the brains of mice, as well as large molecules, like an antibody that binds to Alzheimer’s disease plaques, according to the paper.

To test whether this drug delivery system has application to the human BBB, the lab engineered a BBB model using human primary brain endothelial cells. They observed that Lexiscan opened the engineered BBB in a manner similar to its actions in mice.

Bynoe and Do-Geun Kim, the paper’s lead author, discovered that a protein called P-glycoprotein is highly expressed on brain endothelial cells and blocks the entry of most drugs delivered to the brain. Lexiscan acts on one of the adenosine receptors expressed on BBB endothelial cells, specifically activating them. They showed that Lexiscan down-regulates P-glycoprotein expression and function on the BBB endothelial cells. It acts like a switch that can be turned on and off in a time-dependent manner, which provides a measure of safety for the patient.

“We demonstrated that down-modulation of P-glycoprotein function coincides exquisitely with chemotherapeutic drug accumulation” in the brains of mice and across an engineered BBB using human endothelial cells, Bynoe said. “The amount of chemotherapeutic drugs that accumulated in the brain was significant.”

In addition to P-glycoprotein’s role in inhibiting foreign substances from penetrating the BBB, the protein is also expressed by many different types of cancers and makes these cancers resistant to chemotherapy.

“This finding has significant implications beyond modulation of the BBB,” Bynoe said. “It suggests that in the future, we may be able to modulate adenosine receptors to regulate P-glycoprotein in the treatment of cancer cells resistant to chemotherapy.”

Because Lexiscan is an FDA-approved drug, “the potential for a breakthrough in drug delivery systems for diseases such as Alzheimer’s disease, Parkinson’s disease, autism, brain tumors and chemotherapy-resistant cancers is not far off,” Bynoe said.

Another advantage is that these molecules (adenosine receptors and P-glycoprotein) are naturally expressed in mammals. “We don’t have to knock out a gene or insert one for a therapy to work,” Bynoe said.

The study was funded by the National Institutes of Health and the Kwanjung Educational Foundation.

Here’s a link to and a citation for the paper,

A2A adenosine receptor modulates drug efflux transporter P-glycoprotein at the blood-brain barrier by Do-Geun Kim and Margaret S. Bynoe. J Clin Invest. doi:10.1172/JCI76207 First published April 4, 2016

Copyright © 2016, The American Society for Clinical Investigation.

This paper appears to be open access.

Using copyright to shut down easy access to scientific research

This started out as a simple post on copyright and publishers vis-à-vis Sci-Hub but then John Dupuis wrote a think piece (with which I disagree somewhat) on the situation in a Feb. 22, 2016 posting on his blog, Confessions of a Science Librarian. More on Dupuis and my take on it after a description of the situation.

Sci-Hub

Before getting to the controversy and legal suit, here’s a preamble about the purpose for copyright as per the US constitution from Mike Masnick’s Feb. 17, 2016 posting on Techdirt,

Lots of people are aware of the Constitutional underpinnings of our copyright system. Article 1, Section 8, Clause 8 famously says that Congress has the following power:

To promote the progress of science and useful arts, by securing for limited times to authors and inventors the exclusive right to their respective writings and discoveries.

We’ve argued at great length over the importance of the preamble of that section, “to promote the progress,” but many people are confused about the terms “science” and “useful arts.” In fact, many people not well-versed in the issue often get the two backwards and think that “science” refers to inventions, and thus enables a patent system, while “useful arts” refers to “artistic works” and thus enables the copyright system. The opposite is actually the case. “Science” at the time the Constitution was written was actually synonymous with “learning” and “education” (while “useful arts” was a term meaning invention and new productivity tools).

While over the centuries, many who stood to benefit from an aggressive system of copyright control have tried to rewrite, whitewash or simply ignore this history, turning the copyright system falsely into a “property” regime, the fact is that it was always intended as a system to encourage the wider dissemination of ideas for the purpose of education and learning. The (potentially misguided) intent appeared to be that by granting exclusive rights to a certain limited class of works, it would encourage the creation of those works, which would then be useful in educating the public (and within a few decades enter the public domain).

Masnick’s preamble leads to a case where Elsevier (Publishers) has attempted to halt the very successful Sci-Hub, which bills itself as “the first pirate website in the world to provide mass and public access to tens of millions of research papers.” From Masnick’s Feb. 17, 2016 posting,

Rightfully, this is being celebrated as a massive boon to science and learning, making these otherwise hidden nuggets of knowledge and science that were previously locked up and hidden away available to just about anyone. And, to be clear, this absolutely fits with the original intent of copyright law — which was to encourage such learning. In a very large number of cases, it is not the creators of this content and knowledge who want the information to be locked up. Many researchers and academics know that their research has much more of an impact the wider it is seen, read, shared and built upon. But the gatekeepers — such as Elsevier and other large academic publishers — have stepped in and demanded copyright, basically for doing very little.

They do not pay the researchers for their work. Often, in fact, that work is funded by taxpayer funds. In some cases, in certain fields, the publishers actually demand that the authors of these papers pay to submit them. The journals do not pay to review the papers either. They outsource that work to other academics for “peer review” — which again, is unpaid. Finally, these publishers profit massively, having convinced many universities that they need to subscribe, often paying many tens or even hundreds of thousands of dollars for subscriptions to journals that very few actually read.

Simon Oxenham of the Neurobonkers blog on the big think website wrote a Feb. 9 (?), 2016 post about Sci-Hub, its originator, and its current legal fight (Note: Links have been removed),

On September 5th, 2011, Alexandra Elbakyan, a researcher from Kazakhstan, created Sci-Hub, a website that bypasses journal paywalls, illegally providing access to nearly every scientific paper ever published immediately to anyone who wants it. …

This was a game changer. Before September 2011, there was no way for people to freely access paywalled research en masse; researchers like Elbakyan were out in the cold. Sci-Hub is the first website to offer this service and now makes the process as simple as the click of a single button.

As the number of papers in the LibGen database expands, the frequency with which Sci-Hub has to dip into publishers’ repositories falls and consequently the risk of Sci-Hub triggering its alarm bells becomes ever smaller. Elbakyan explains, “We have already downloaded most paywalled articles to the library … we have almost everything!” This may well be no exaggeration. Elsevier, one of the most prolific and controversial scientific publishers in the world, recently alleged in court that Sci-Hub is currently harvesting Elsevier content at a rate of thousands of papers per day. Elbakyan puts the number of papers downloaded from various publishers through Sci-Hub in the range of hundreds of thousands per day, delivered to a running total of over 19 million visitors.

In one fell swoop, a network has been created that likely has a greater level of access to science than any individual university, or even government for that matter, anywhere in the world. Sci-Hub represents the sum of countless different universities’ institutional access — literally a world of knowledge. This is important now more than ever in a world where even Harvard University can no longer afford to pay skyrocketing academic journal subscription fees, while Cornell axed many of its Elsevier subscriptions over a decade ago. For researchers outside the US’ and Western Europe’s richest institutions, routine piracy has long been the only way to conduct science, but increasingly the problem of unaffordable journals is coming closer to home.

… This was the experience of Elbakyan herself, who studied in Kazakhstan University and just like other students in countries where journal subscriptions are unaffordable for institutions, was forced to pirate research in order to complete her studies. Elbakyan told me, “Prices are very high, and that made it impossible to obtain papers by purchasing. You need to read many papers for research, and when each paper costs about 30 dollars, that is impossible.”

Sci-Hub is not expected to win its case in the US, where one judge has already ordered a preliminary injunction making its former domain unavailable. (Sci-Hub moved.) Should you be sympathetic to Elsevier, you may want to take this into account (Note: Links have been removed),

Elsevier is the world’s largest academic publisher and by far the most controversial. Over 15,000 researchers have vowed to boycott the publisher for charging “exorbitantly high prices” and bundling expensive, unwanted journals with essential journals, a practice that allegedly is bankrupting university libraries. Elsevier also supports SOPA and PIPA, which the researchers claim threaten to restrict the free exchange of information. Elsevier is perhaps most notorious for delivering takedown notices to academics, demanding that they take their own research published with Elsevier off websites like Academia.edu.

The movement against Elsevier has only gathered speed over the course of the last year with the resignation of 31 editorial board members from the Elsevier journal Lingua, who left in protest to set up their own open-access journal, Glossa. Now the battleground has moved from the comparatively niche field of linguistics to the far larger field of cognitive sciences. Last month, a petition of over 1,500 cognitive science researchers called on the editors of the Elsevier journal Cognition to demand Elsevier offer “fair open access”. Elsevier currently charges researchers $2,150 per article if researchers wish their work published in Cognition to be accessible by the public, a sum far higher than the charges that led to the Lingua mutiny.

In her letter to Sweet [New York District Court Judge Robert W. Sweet], Elbakyan made a point that will likely come as a shock to many outside the academic community: Researchers and universities don’t earn a single penny from the fees charged by publishers [emphasis mine] such as Elsevier for accepting their work, while Elsevier has an annual income over a billion U.S. dollars.

As Masnick noted, much of this research is done on the public dime (i.e., funded by taxpayers). For her part, Elbakyan has written a letter defending her actions on ethical rather than legal grounds.

I recommend reading the Oxenham article as it provides details about how the site works and includes text from the letter Elbakyan wrote. For those who don’t have much time, Masnick’s post offers a good précis.

Sci-Hub suit as a distraction from the real issues?

Getting to Dupuis’ Feb. 22, 2016 posting and his perspective on the situation,

My take? Mostly that it’s a sideshow.

One aspect that I have ranted about on Twitter which I think is worth mentioning explicitly is that I think Elsevier and all the other big publishers are actually quite happy to feed the social media rage machine with these whack-a-mole controversies. The controversies act as a sideshow, distracting from the real issues and solutions that they would prefer all of us not to think about.

By whack-a-mole controversies I mean this recurring story of some person or company or group that wants to “free” scholarly articles and then gets sued or harassed by the big publishers or their proxies to force them to shut down. This provokes wide outrage and condemnation aimed at the publishers, especially Elsevier, for whom a special place in hell is reserved according to most advocates of openness (myself included).

In other words: Elsevier and its ilk are thrilled to be the target of all the outrage. Focusing on the whack-a-mole game distracts us from fixing the real problem: the entrenched systems of prestige, incentive and funding in academia. As long as researchers are channelled into “high impact” journals, as long as tenure committees reward publishing in closed rather than open venues, nothing will really change. Until funders get serious about mandating true open access publishing and are willing to put their money where their intentions are, nothing will change. Or at least, progress will be mostly limited to surface victories rather than systemic change.

I think Dupuis is referencing a conflict theory (I can’t remember what it’s called) which suggests that certain types of conflicts help to keep systems in place while apparently attacking those systems. His point is well made but I disagree somewhat in that I think these conflicts can also raise awareness and activate people who might otherwise ignore or mindlessly comply with those systems. So, if Elsevier and the other publishers are using these legal suits as diversionary tactics, they may find they’ve made a strategic error.

ETA April 29, 2016: Sci-Hub does seem to move around so I’ve updated the links so it can be accessed but Sci-Hub’s situation can change at any moment.

Humans, computers, and a note of optimism

As an antidote to my Jan. 4, 2016 post titled ‘Nanotechnology and cybersecurity risks’, and if you’re looking to usher in 2016 on a hopeful note, this Dec. 31, 2015 Human Computation Institute news release on EurekAlert is very timely,

The combination of human and computer intelligence might be just what we need to solve the “wicked” problems of the world, such as climate change and geopolitical conflict, say researchers from the Human Computation Institute (HCI) and Cornell University.

In an article published in the journal Science, the authors present a new vision of human computation (the science of crowd-powered systems), which pushes beyond traditional limits, and takes on hard problems that until recently have remained out of reach.

Humans surpass machines at many things, ranging from simple pattern recognition to creative abstraction. With the help of computers, these cognitive abilities can be effectively combined into multidimensional collaborative networks that achieve what traditional problem-solving cannot.

Most of today’s human computation systems rely on sending bite-sized ‘micro-tasks’ to many individuals and then stitching together the results. For example, 165,000 volunteers in EyeWire have analyzed thousands of images online to help build the world’s most complete map of human retinal neurons.

This microtasking approach alone cannot address the tough challenges we face today, say the authors. A radically new approach is needed to solve “wicked problems” – those that involve many interacting systems that are constantly changing, and whose solutions have unforeseen consequences (e.g., corruption resulting from financial aid given in response to a natural disaster).

New human computation technologies can help. Recent techniques provide real-time access to crowd-based inputs, where individual contributions can be processed by a computer and sent to the next person for improvement or analysis of a different kind. This enables the construction of more flexible collaborative environments that can better address the most challenging issues.

This idea is already taking shape in several human computation projects, including YardMap.org, which was launched by Cornell in 2012 to map global conservation efforts one parcel at a time.

“By sharing and observing practices in a map-based social network, people can begin to relate their individual efforts to the global conservation potential of living and working landscapes,” says Janis Dickinson, Professor and Director of Citizen Science at the Cornell Lab of Ornithology.

YardMap allows participants to interact and build on each other’s work – something that crowdsourcing alone cannot achieve. The project serves as an important model for how such bottom-up, socially networked systems can bring about scalable changes in how we manage residential landscapes.

HCI has recently set out to use crowd-power to accelerate Cornell-based Alzheimer’s disease research. WeCureAlz.com combines two successful microtasking systems into an interactive analytic pipeline that builds blood flow models of mouse brains. The stardust@home system, which was used to search for comet dust in one million images of aerogel, is being adapted to identify stalled blood vessels, which will then be pinpointed in the brain by a modified version of the EyeWire system.

“By enabling members of the general public to play some simple online game, we expect to reduce the time to treatment discovery from decades to just a few years,” says HCI director and lead author, Dr. Pietro Michelucci. “This gives an opportunity for anyone, including the tech-savvy generation of caregivers and early stage AD patients, to take the matter into their own hands.”
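To make the microtasking idea concrete, here’s a minimal sketch of the aggregation step such systems depend on: many volunteers answer the same small question and a computer combines their answers. The function and labels are hypothetical, not taken from EyeWire or WeCureAlz,

```python
from collections import Counter

def aggregate_microtask(responses):
    """Combine crowd answers for one microtask by majority vote."""
    counts = Counter(responses)
    answer, votes = counts.most_common(1)[0]
    return answer, votes / len(responses)  # answer plus level of agreement

# Hypothetical example: three volunteers label the same image region.
answer, agreement = aggregate_microtask(["stalled vessel", "stalled vessel", "flowing"])
print(answer, f"{agreement:.0%}")  # stalled vessel 67%
```

Real systems layer much more on top of this (weighting reliable contributors, routing hard cases to extra reviewers), but the principle is the same.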

Here’s a link to and a citation for the paper,

Human Computation: The power of crowds by Pietro Michelucci and Janis L. Dickinson. Science 1 January 2016: Vol. 351 no. 6268 pp. 32-33 DOI: 10.1126/science.aad6499

This paper is behind a paywall but the abstract is freely available,

Human computation, a term introduced by Luis von Ahn (1), refers to distributed systems that combine the strengths of humans and computers to accomplish tasks that neither can do alone (2). The seminal example is reCAPTCHA, a Web widget used by 100 million people a day when they transcribe distorted text into a box to prove they are human. This free cognitive labor provides users with access to Web content and keeps websites safe from spam attacks, while feeding into a massive, crowd-powered transcription engine that has digitized 13 million articles from The New York Times archives (3). But perhaps the best known example of human computation is Wikipedia. Despite initial concerns about accuracy (4), it has become the key resource for all kinds of basic information. Information science has begun to build on these early successes, demonstrating the potential to evolve human computation systems that can model and address wicked problems (those that defy traditional problem-solving methods) at the intersection of economic, environmental, and sociopolitical systems.


Clues as to how mother of pearl is made

Iridescence seems to fascinate scientists and a team at Cornell University is no exception (from a Dec. 4, 2015 news item on Nanowerk),

Mother nature has a lot to teach us about how to make things.

With that in mind, Cornell researchers have uncovered the process by which mollusks manufacture nacre – commonly known as “mother of pearl.” Along with its iridescent beauty, this material found on the insides of seashells is incredibly strong. Knowing how it’s made could lead to new methods to synthesize a variety of new materials with as yet unguessed properties.

“We have all these high-tech facilities to make new materials, but just take a walk along the beach and see what’s being made,” said postdoctoral research associate Robert Hovden, M.S. ’10, Ph.D. ’14. “Nature is doing incredible nanoscience, and we need to dig into it.”

A Dec. 4, 2015 Cornell University news release by Bill Steele, which originated the news item, expands on the theme,

Using a high-resolution scanning transmission electron microscope (STEM), the researchers examined a cross section of the shell of a large Mediterranean mollusk called the noble pen shell or fan mussel (Pinna nobilis). To make the observations possible they had to develop a special sample preparation process. Using a diamond saw, they cut a thin slice through the shell, then in effect sanded it down with a thin film in which micron-sized bits of diamond were embedded, until they had a sample less than 30 nanometers thick, suitable for STEM observation. As in sanding wood, they moved from heavier grits for fast cutting to a fine final polish to make a surface free of scratches that might distort the STEM image.

Images with nanometer-scale resolution revealed that the organism builds nacre by depositing a series of layers of a material containing nanoparticles of calcium carbonate. Moving from the inside out, these particles are seen coming together in rows and fusing into flat crystals laminated between layers of organic material. (The layers are thinner than the wavelengths of visible light, causing the scattering that gives the material its iridescence.)

Exactly what happens at each step is a topic for future research. For now, the researchers said in their paper, “We cannot go back in time” to observe the process. But knowing that nanoparticles are involved is a valuable insight for materials scientists, Hovden said.
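As an aside on the parenthetical point about iridescence: the colors arise from interference in the layered structure, and you can estimate which wavelengths reflect with the textbook Bragg-stack condition (mλ = 2nd cosθ). The layer thickness and refractive index below are illustrative assumptions, not measurements from this paper,

```python
import math

# Textbook Bragg-stack condition for a periodic multilayer: m * lam = 2 * n * d * cos(theta)
n = 1.53       # approximate refractive index of aragonite (assumption)
d = 450e-9     # assumed layer period in metres (nacre layers are roughly half a micron)
theta = 0.0    # normal incidence

for m in (2, 3):
    lam = 2 * n * d * math.cos(theta) / m
    print(f"order {m}: reflected wavelength ~{lam * 1e9:.0f} nm")
# order 2: ~689 nm (red); order 3: ~459 nm (blue). Tilting the shell changes theta
# and shifts the colors, which is the iridescence described above.
```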

Here’s an image from the researchers,

Electron microscope image of a cross-section of a mollusk shell. The organism builds its shell from the inside out by depositing layers of calcium carbonate nanoparticles. As the particle density increases over time they fuse into large flat crystals embedded in layers of organic material to form nacre. Courtesy: Cornell University

Here’s a link to and a citation for the paper,

Nanoscale assembly processes revealed in the nacroprismatic transition zone of Pinna nobilis mollusc shells by Robert Hovden, Stephan E. Wolf, Megan E. Holtz, Frédéric Marin, David A. Muller, & Lara A. Estroff. Nature Communications 6, Article number: 10097 doi:10.1038/ncomms10097 Published 03 December 2015

This is an open access paper.