Tag Archives: Cornell University

Textiles that clean pollution from air and water

I once read that you could tell what colour would be in style by looking at the river in Milan (Italy). It may or may not still be true in Milan, but it seems the practice of dumping the fashion industry’s wastewater into rivers is still current in at least some parts of the world, according to a Nov. 10, 2016 news item on Nanowerk featuring Juan Hinestroza’s work on textiles that clean pollution,

A stark and troubling reality helped spur Juan Hinestroza to what he hopes is an important discovery and a step toward cleaner manufacturing.

Hinestroza, associate professor of fiber science and director of undergraduate studies in the College of Human Ecology [Cornell University], has been to several manufacturing facilities around the globe, and he says that there are some areas of the planet in which he could identify what color is in fashion in New York or Paris by simply looking at the color of a nearby river.

“I saw it with my own eyes; it’s very sad,” he said.

Some of these overseas facilities are dumping waste products from textile dyeing and other processes directly into the air and waterways, making no attempt to mitigate their products’ effects on the environment.

“There are companies that make a great effort to make things in a clean and responsible manner,” he said, “but there are others that don’t.”

Hinestroza is hopeful that a technique developed at Cornell in conjunction with former Cornell chemistry professor Will Dichtel will help industry clean up its act. The group has shown the ability to infuse cotton with a beta-cyclodextrin (BCD) polymer, which acts as a filtration device that works in both water and air.

A Nov. 10, 2016 Cornell University news release by Tom Fleischman provides more detail about the research,

Cotton fabric was functionalized by making it a participant in the polymerization process. The addition of the fiber to the reaction resulted in a unique polymer grafted to the cotton surface.

“One of the limitations of some super-absorbents is that you need to be able to put them into a substrate that can be easily manufactured,” Hinestroza said. “Fibers are perfect for that – fibers are everywhere.”

Scanning electron microscopy showed that the cotton fibers appeared unchanged after the polymerization reaction. And when tested for uptake of pollutants in water (bisphenol A) and air (styrene), the polymerized fibers showed orders of magnitude greater uptakes than that of untreated cotton fabric or commercial absorbents.

Hinestroza pointed to several positives that should make this functionalized fabric technology attractive to industry.

“We’re compatible with existing textile machinery – you wouldn’t have to do a lot of retooling,” he said. “It works on both air and water, and we proved that we can remove the compounds and reuse the fiber over and over again.”

Hinestroza said the adsorption potential of this patent-pending technique could extend to other materials, and be used for respirator masks and filtration media, explosive detection and even food packaging that would detect when the product has gone bad.

And, of course, he hopes it can play a role in cleaner, more environmentally responsible industrial practices.

“There’s a lot of pollution generation in the manufacture of textiles,” he said. “It’s just fair that we should maybe use the same textiles to clean the mess that we make.”

Here’s a link to and a citation for the paper,

Cotton Fabric Functionalized with a β-Cyclodextrin Polymer Captures Organic Pollutants from Contaminated Air and Water by Diego M. Alzate-Sánchez, Brian J. Smith, Alaaeddin Alsbaiee, Juan P. Hinestroza, and William R. Dichtel. Chem. Mater., Article ASAP DOI: 10.1021/acs.chemmater.6b03624 Publication Date (Web): October 24, 2016

Copyright © 2016 American Chemical Society

This paper is open access.

One comment: I’m not sure how this solution will benefit the rivers unless the idea is that textile manufacturers will filter their wastewater through this new fabric.

There is another researcher working on creating textiles that remove air pollution, Tony Ryan at the University of Sheffield (UK). My latest piece about his (and Helen Storey’s) work is a July 28, 2014 posting featuring a detergent that deposits air-pollution-clearing nanoparticles onto fabric. At the time, China was showing serious interest in the product.

The dangers of metaphors when applied to science

Metaphors can be powerful in both good ways and bad. I once read that there was a ‘lighthouse’ metaphor used to explain a scientific concept to high school students, which later caused problems for them when they were studying the biological sciences as university students. It seems there’s research now to back up the assertion about metaphors and their powers. From an Oct. 7, 2016 news item on phys.org,

Whether ideas are “like a light bulb” or come forth as “nurtured seeds,” how we describe discovery shapes people’s perceptions of both inventions and inventors. Notably, Kristen Elmore (Bronfenbrenner Center for Translational Research at Cornell University) and Myra Luna-Lucero (Teachers College, Columbia University) have shown that discovery metaphors influence our perceptions of the quality of an idea and of the ability of the idea’s creator. The research appears in the journal Social Psychological and Personality Science.

While the metaphor that ideas appear “like light bulbs” is popular and appealing, new research shows that discovery metaphors influence our understanding of the scientific process and perceptions of the ability of inventors based on their gender. [downloaded from http://www.spsp.org/news-center/press-release/metaphors-bias-perception]

An Oct. 7, 2016  Society for Personality and Social Psychology news release (also on EurekAlert), which originated the news item, provides more insight into the work,

While those involved in research know there are many trials and errors and years of work before something is understood, discovered or invented, our use of words for inspiration may have an unintended and underappreciated effect of portraying good ideas as a sudden and exceptional occurrence.

In a series of experiments, Elmore and Luna-Lucero tested how people responded to ideas that were described as appearing “like a light bulb,” as “nurtured like a seed,” or in neutral terms.

According to the authors, the “light bulb metaphor implies that ‘brilliant’ ideas result from sudden and spontaneous inspiration, bestowed upon a chosen few (geniuses) while the seed metaphor implies that ideas are nurtured over time, ‘cultivated’ by anyone willing to invest effort.”

The first study looked at how people reacted to a description of Alan Turing’s invention of a precursor to the modern computer. It turns out light bulbs are more remarkable than seeds.

“We found that an idea was seen as more exceptional when described as appearing like a light bulb rather than nurtured like a seed,” said Elmore.

But this pattern changed when the same metaphors were used to describe a female inventor’s ideas: the researchers found that “women were judged as better idea creators than men when ideas were described as nurtured over time like seeds.”

The results suggest gender stereotypes play a role in how people perceived the inventors.

In the third study, the researchers presented participants with descriptions of the work of either a female (Hedy Lamarr) or a male (George Antheil) inventor, who together created the idea for spread-spectrum technology (a precursor to modern wireless communications). Indeed, the seed metaphor “increased perceptions that a female inventor was a genius, while the light bulb metaphor was more consistent with stereotypical views of male genius,” stated Elmore.

Elmore plans to expand upon their research on metaphors by examining the interactions of teachers and students in real world classroom settings.

“The ways that teachers and students talk about ideas may impact students’ beliefs about how good ideas are created and who is likely to have them,” said Elmore. “Having good ideas is relevant across subjects—whether students are creating a hypothesis in science or generating a thesis for their English paper—and language that stresses the role of effort rather than inspiration in creating ideas may have real benefits for students’ motivation.”

Here’s a link to and a citation for the paper,

Light Bulbs or Seeds? How Metaphors for Ideas Influence Judgments About Genius by Kristen C. Elmore and Myra Luna-Lucero. Social Psychological and Personality Science doi: 10.1177/1948550616667611 Published online before print October 7, 2016

This paper is behind a paywall.

While Elmore and Luna-Lucero are focused on a nuanced analysis of specific metaphors, Richard Holmes’s book, ‘The Age of Wonder: How the Romantic Generation Discovered the Beauty and Terror of Science’, notes that the ‘Eureka’ (light bulb) moment for scientific discovery and the notion of a ‘single great man’ (a singular genius) as the discoverer have their roots in Romantic (Shelley, Keats, etc.) poetry.

arXiv, which helped kick off the open access movement, contemplates its future

arXiv is hosted by Cornell University and holds over a million scientific papers that anyone can access. Here’s more from a July 22, 2016 news item on phys.org,

As the arXiv repository of scientific papers celebrates its 25th year as one of the scientific community’s most important means of communication, the site’s leadership is looking ahead to ensure it remains indispensable, robust and financially sustainable.

A July 21, 2016 Cornell University news release by Bill Steele, which originated the news item, provides more information about future plans and a brief history of the repository (Note: Links have been removed),

Changes and improvements are in store, many in response to suggestions received in a survey of nearly 37,000 users whose primary requests were for a more robust search engine and better facilities to share supplementary material, such as slides or code, that often accompanies scientific papers.

But even more important is to upgrade the underlying architecture of the system, much of it based on “old code,” said Oya Rieger, associate university librarian for digital scholarship and preservation services, who serves as arXiv’s program director. “We have to create a work plan to ensure that arXiv will serve for another 25 years,” she said. That will require recruiting additional programmers and finding additional sources of funding, she added.

The improvements will not change the site’s essential format or its core mission of free and open dissemination of the latest scientific research, Rieger said.

arXiv was created in 1991 by Paul Ginsparg, professor of physics and information science, when he was working at Los Alamos National Laboratory. It was then common practice for researchers to circulate “pre-prints” of their papers so that colleagues could have the advantage of knowing about their research in advance of publication in scientific journals. Ginsparg launched a service (originally running from a computer under his desk) to make the papers instantly available online.

Ginsparg brought the arXiv with him from Los Alamos when he joined the Cornell faculty in 2001. Since then, it has been managed by Cornell University Library, with Ginsparg as a member of its scientific advisory board.

In 2015, arXiv celebrated its millionth submission and saw 139 million downloads in that year alone.

Nearly 95 percent of respondents to the survey said they were satisfied with arXiv, many saying that rapid access to research results had made a difference in their careers, and applauding it as an advance in open access.

“We were amazed and heartened by the outpouring of responses representing users from a variety of countries, age groups and career stages. Their insight will help us as we refine a compelling and coherent vision for arXiv’s future,” Rieger said. “We’re continuing to explore current and emerging user needs and priorities. We hope to secure funding to revamp the service’s infrastructure and ensure that it will continue to serve as an important scientific venue for facilitating rapid dissemination of papers, which is arXiv’s core goal.”

Though some users suggested new or additional features, a majority of respondents emphasized that the clean, unencumbered nature of the site makes its use easy and efficient. “I sincerely wish academic journals could try to emulate the cleanness, convenience and user-friendly nature of the arXiv, and I hope the future of academic publishing looks more like what we’ve been able to enjoy in the arXiv,” one user wrote.

arXiv is supported by a global collective of nearly 200 libraries in 24 countries, and an ongoing grant from the Simons Foundation. In 2012, the site adopted a new funding model, in which it is collaboratively governed and supported by the research communities and institutions that benefit from it most directly.

Having a bee in my bonnet about overproduced websites (MIT [Massachusetts Institute of Technology], I’m looking at you), I can’t help but applaud this user and, of course, arXiv, “I sincerely wish academic journals could try to emulate the cleanness, convenience and user-friendly nature of the arXiv, and I hope the future of academic publishing looks more like what we’ve been able to enjoy in the arXiv, …”

For anyone interested in arXiv plans, there’s the arXiv Review Strategy here on Cornell University’s Confluence website.

Cornell University researchers breach blood-brain barrier

There are other teams working on ways to breach the blood-brain barrier (my March 26, 2015 post highlights work from a team at the University of Montréal) but this team from  Cornell is working with a drug that has already been approved by the US Food and Drug Administration (FDA) according to an April 8, 2016 news item on ScienceDaily,

Cornell researchers have discovered a way to penetrate the blood brain barrier (BBB) that may soon permit delivery of drugs directly into the brain to treat disorders such as Alzheimer’s disease and chemotherapy-resistant cancers.

The BBB is a layer of endothelial cells that selectively allow entry of molecules needed for brain function, such as amino acids, oxygen, glucose and water, while keeping others out.

Cornell researchers report that an FDA-approved drug called Lexiscan activates receptors — called adenosine receptors — that are expressed on these BBB cells.

An April 4, 2016 Cornell University news release by Krishna Ramanujan, which originated the news item, expands on the theme,

“We can open the BBB for a brief window of time, long enough to deliver therapies to the brain, but not too long so as to harm the brain. We hope in the future, this will be used to treat many types of neurological disorders,” said Margaret Bynoe, associate professor in the Department of Microbiology and Immunology in Cornell’s College of Veterinary Medicine. …

The researchers were able to deliver chemotherapy drugs into the brains of mice, as well as large molecules, like an antibody that binds to Alzheimer’s disease plaques, according to the paper.

To test whether this drug delivery system has application to the human BBB, the lab engineered a BBB model using human primary brain endothelial cells. They observed that Lexiscan opened the engineered BBB in a manner similar to its actions in mice.

Bynoe and Kim discovered that a protein called P-glycoprotein is highly expressed on brain endothelial cells and blocks the entry of most drugs delivered to the brain. Lexiscan acts on one of the adenosine receptors expressed on BBB endothelial cells specifically activating them. They showed that Lexiscan down-regulates P-glycoprotein expression and function on the BBB endothelial cells. It acts like a switch that can be turned on and off in a time dependent manner, which provides a measure of safety for the patient.

“We demonstrated that down-modulation of P-glycoprotein function coincides exquisitely with chemotherapeutic drug accumulation” in the brains of mice and across an engineered BBB using human endothelial cells, Bynoe said. “The amount of chemotherapeutic drugs that accumulated in the brain was significant.”

In addition to P-glycoprotein’s role in inhibiting foreign substances from penetrating the BBB, the protein is also expressed by many different types of cancers and makes these cancers resistant to chemotherapy.

“This finding has significant implications beyond modulation of the BBB,” Bynoe said. “It suggests that in the future, we may be able to modulate adenosine receptors to regulate P-glycoprotein in the treatment of cancer cells resistant to chemotherapy.”

Because Lexiscan is an FDA-approved drug, “the potential for a breakthrough in drug delivery systems for diseases such as Alzheimer’s disease, Parkinson’s disease, autism, brain tumors and chemotherapy-resistant cancers is not far off,” Bynoe said.

Another advantage is that these molecules (adenosine receptors and P-glycoprotein) are naturally expressed in mammals. “We don’t have to knock out a gene or insert one for a therapy to work,” Bynoe said.

The study was funded by the National Institutes of Health and the Kwanjung Educational Foundation.

Here’s a link to and a citation for the paper,

A2A adenosine receptor modulates drug efflux transporter P-glycoprotein at the blood-brain barrier by Do-Geun Kim and Margaret S. Bynoe. J Clin Invest. doi:10.1172/JCI76207 First published April 4, 2016

Copyright © 2016, The American Society for Clinical Investigation.

This paper appears to be open access.

Using copyright to shut down easy access to scientific research

This started out as a simple post on copyright and publishers vis-à-vis Sci-Hub but then John Dupuis wrote a think piece (with which I disagree somewhat) on the situation in a Feb. 22, 2016 posting on his blog, Confessions of a Science Librarian. More on Dupuis and my take on it after a description of the situation.

Sci-Hub

Before getting to the controversy and legal suit, here’s a preamble about the purpose for copyright as per the US constitution from Mike Masnick’s Feb. 17, 2016 posting on Techdirt,

Lots of people are aware of the Constitutional underpinnings of our copyright system. Article 1, Section 8, Clause 8 famously says that Congress has the following power:

To promote the progress of science and useful arts, by securing for limited times to authors and inventors the exclusive right to their respective writings and discoveries.

We’ve argued at great length over the importance of the preamble of that section, “to promote the progress,” but many people are confused about the terms “science” and “useful arts.” In fact, many people not well-versed in the issue often get the two backwards and think that “science” refers to inventions, and thus enables a patent system, while “useful arts” refers to “artistic works” and thus enables the copyright system. The opposite is actually the case. “Science” at the time the Constitution was written was actually synonymous with “learning” and “education” (while “useful arts” was a term meaning invention and new productivity tools).

While over the centuries, many who stood to benefit from an aggressive system of copyright control have tried to rewrite, whitewash or simply ignore this history, turning the copyright system falsely into a “property” regime, the fact is that it was always intended as a system to encourage the wider dissemination of ideas for the purpose of education and learning. The (potentially misguided) intent appeared to be that by granting exclusive rights to a certain limited class of works, it would encourage the creation of those works, which would then be useful in educating the public (and within a few decades enter the public domain).

Masnick’s preamble leads to a case where Elsevier (Publishers) has attempted to halt the very successful Sci-Hub, which bills itself as “the first pirate website in the world to provide mass and public access to tens of millions of research papers.” From Masnick’s Feb. 17, 2016 posting,

Rightfully, this is being celebrated as a massive boon to science and learning, making these otherwise hidden nuggets of knowledge and science that were previously locked up and hidden away available to just about anyone. And, to be clear, this absolutely fits with the original intent of copyright law — which was to encourage such learning. In a very large number of cases, it is not the creators of this content and knowledge who want the information to be locked up. Many researchers and academics know that their research has much more of an impact the wider it is seen, read, shared and built upon. But the gatekeepers — such as Elsevier and other large academic publishers — have stepped in and demanded copyright, basically for doing very little.

They do not pay the researchers for their work. Often, in fact, that work is funded by taxpayer funds. In some cases, in certain fields, the publishers actually demand that the authors of these papers pay to submit them. The journals do not pay to review the papers either. They outsource that work to other academics for “peer review” — which again, is unpaid. Finally, these publishers profit massively, having convinced many universities that they need to subscribe, often paying many tens or even hundreds of thousands of dollars for subscriptions to journals that very few actually read.

Simon Oxenham of the Neurobonkers blog on the big think website wrote a Feb. 9 (?), 2016 post about Sci-Hub, its originator, and its current legal fight (Note: Links have been removed),

On September 5th, 2011, Alexandra Elbakyan, a researcher from Kazakhstan, created Sci-Hub, a website that bypasses journal paywalls, illegally providing access to nearly every scientific paper ever published immediately to anyone who wants it. …

This was a game changer. Before September 2011, there was no way for people to freely access paywalled research en masse; researchers like Elbakyan were out in the cold. Sci-Hub is the first website to offer this service and now makes the process as simple as the click of a single button.

As the number of papers in the LibGen database expands, the frequency with which Sci-Hub has to dip into publishers’ repositories falls and consequently the risk of Sci-Hub triggering its alarm bells becomes ever smaller. Elbakyan explains, “We have already downloaded most paywalled articles to the library … we have almost everything!” This may well be no exaggeration. Elsevier, one of the most prolific and controversial scientific publishers in the world, recently alleged in court that Sci-Hub is currently harvesting Elsevier content at a rate of thousands of papers per day. Elbakyan puts the number of papers downloaded from various publishers through Sci-Hub in the range of hundreds of thousands per day, delivered to a running total of over 19 million visitors.

In one fell swoop, a network has been created that likely has a greater level of access to science than any individual university, or even government for that matter, anywhere in the world. Sci-Hub represents the sum of countless different universities’ institutional access — literally a world of knowledge. This is important now more than ever in a world where even Harvard University can no longer afford to pay skyrocketing academic journal subscription fees, while Cornell axed many of its Elsevier subscriptions over a decade ago. For researchers outside the US’ and Western Europe’s richest institutions, routine piracy has long been the only way to conduct science, but increasingly the problem of unaffordable journals is coming closer to home.

… This was the experience of Elbakyan herself, who studied in Kazakhstan University and just like other students in countries where journal subscriptions are unaffordable for institutions, was forced to pirate research in order to complete her studies. Elbakyan told me, “Prices are very high, and that made it impossible to obtain papers by purchasing. You need to read many papers for research, and when each paper costs about 30 dollars, that is impossible.”
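
Stripped of specifics, the workflow Oxenham describes is a cache-first retrieval architecture: check the LibGen archive first, and only reach through a publisher’s paywall on a miss, storing the result so later requests never touch the publisher at all. Here is a minimal sketch of that pattern in Python; the object interfaces and function names are hypothetical illustrations of the idea, not Sci-Hub’s actual code.

```python
# Hypothetical sketch of the cache-first retrieval pattern described above.
# None of these objects or methods correspond to Sci-Hub's real implementation.

def fetch_paper(doi, library, publisher_gateway):
    """Return a paper, preferring the local archive over the publisher."""
    pdf = library.get(doi)                 # 1. check the LibGen-style archive first
    if pdf is not None:
        return pdf                         # cache hit: no publisher request needed
    pdf = publisher_gateway.download(doi)  # 2. cache miss: fetch via institutional access
    library.store(doi, pdf)                # 3. archive it so future requests stay local
    return pdf
```

As the archive grows, more and more requests end at step 1, which is exactly the dynamic Elbakyan describes.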

Sci-Hub is not expected to win its case in the US, where one judge has already ordered a preliminary injunction making its former domain unavailable. (Sci-Hub has since moved.) Should you be sympathetic to Elsevier, you may want to take this into account (Note: Links have been removed),

Elsevier is the world’s largest academic publisher and by far the most controversial. Over 15,000 researchers have vowed to boycott the publisher for charging “exorbitantly high prices” and bundling expensive, unwanted journals with essential journals, a practice that allegedly is bankrupting university libraries. Elsevier also supports SOPA and PIPA, which the researchers claim threatens to restrict the free exchange of information. Elsevier is perhaps most notorious for delivering takedown notices to academics, demanding them to take their own research published with Elsevier off websites like Academia.edu.

The movement against Elsevier has only gathered speed over the course of the last year with the resignation of 31 editorial board members from the Elsevier journal Lingua, who left in protest to set up their own open-access journal, Glossa. Now the battleground has moved from the comparatively niche field of linguistics to the far larger field of cognitive sciences. Last month, a petition of over 1,500 cognitive science researchers called on the editors of the Elsevier journal Cognition to demand Elsevier offer “fair open access”. Elsevier currently charges researchers $2,150 per article if researchers wish their work published in Cognition to be accessible by the public, a sum far higher than the charges that led to the Lingua mutiny.

In her letter to Sweet [New York District Court Judge Robert W. Sweet], Elbakyan made a point that will likely come as a shock to many outside the academic community: Researchers and universities don’t earn a single penny from the fees charged by publishers [emphasis mine] such as Elsevier for accepting their work, while Elsevier has an annual income over a billion U.S. dollars.

As Masnick noted, much of this research is done on the public dime (i.e., funded by taxpayers). For her part, Elbakyan has written a letter defending her actions on ethical rather than legal grounds.

I recommend reading the Oxenham article as it provides details about how the site works and includes text from the letter Elbakyan wrote.  For those who don’t have much time, Masnick’s post offers a good précis.

Sci-Hub suit as a distraction from the real issues?

Getting to Dupuis’ Feb. 22, 2016 posting and his perspective on the situation,

My take? Mostly that it’s a sideshow.

One aspect that I have ranted about on Twitter which I think is worth mentioning explicitly is that I think Elsevier and all the other big publishers are actually quite happy to feed the social media rage machine with these whack-a-mole controversies. The controversies act as a sideshow, distracting from the real issues and solutions that they would prefer all of us not to think about.

By whack-a-mole controversies I mean this recurring story of some person or company or group that wants to “free” scholarly articles and then gets sued or harassed by the big publishers or their proxies to force them to shut down. This provokes wide outrage and condemnation aimed at the publishers, especially Elsevier who is reserved a special place in hell according to most advocates of openness (myself included).

In other words: Elsevier and its ilk are thrilled to be the target of all the outrage. Focusing on the whack-a-mole game distracts us from fixing the real problem: the entrenched systems of prestige, incentive and funding in academia. As long as researchers are channelled into “high impact” journals, as long as tenure committees reward publishing in closed rather than open venues, nothing will really change. Until funders get serious about mandating true open access publishing and are willing to put their money where their intentions are, nothing will change. Or at least, progress will be mostly limited to surface victories rather than systemic change.

I think Dupuis is referencing a conflict theory (I can’t remember what it’s called) which suggests that certain types of conflicts help to keep systems in place while apparently attacking those systems. His point is well made but I disagree somewhat in that I think these conflicts can also raise awareness and activate people who might otherwise ignore or mindlessly comply with those systems. So, if Elsevier and the other publishers are using these legal suits as diversionary tactics, they may find they’ve made a strategic error.

ETA April 29, 2016: Sci-Hub does seem to move around so I’ve updated the links so it can be accessed but Sci-Hub’s situation can change at any moment.

Humans, computers, and a note of optimism

As an* antidote to my Jan. 4*, 2016 post titled: Nanotechnology and cybersecurity risks and if you’re looking to usher in 2016 on a hopeful note, this Dec. 31, 2015 Human Computation Institute news release on EurekAlert is very timely,

The combination of human and computer intelligence might be just what we need to solve the “wicked” problems of the world, such as climate change and geopolitical conflict, say researchers from the Human Computation Institute (HCI) and Cornell University.

In an article published in the journal Science, the authors present a new vision of human computation (the science of crowd-powered systems), which pushes beyond traditional limits, and takes on hard problems that until recently have remained out of reach.

Humans surpass machines at many things, ranging from simple pattern recognition to creative abstraction. With the help of computers, these cognitive abilities can be effectively combined into multidimensional collaborative networks that achieve what traditional problem-solving cannot.

Most of today’s human computation systems rely on sending bite-sized ‘micro-tasks’ to many individuals and then stitching together the results. For example, 165,000 volunteers in EyeWire have analyzed thousands of images online to help build the world’s most complete map of human retinal neurons.

This microtasking approach alone cannot address the tough challenges we face today, say the authors. A radically new approach is needed to solve “wicked problems” – those that involve many interacting systems that are constantly changing, and whose solutions have unforeseen consequences (e.g., corruption resulting from financial aid given in response to a natural disaster).

New human computation technologies can help. Recent techniques provide real-time access to crowd-based inputs, where individual contributions can be processed by a computer and sent to the next person for improvement or analysis of a different kind. This enables the construction of more flexible collaborative environments that can better address the most challenging issues.

This idea is already taking shape in several human computation projects, including YardMap.org, which was launched by Cornell in 2012 to map global conservation efforts one parcel at a time.

“By sharing and observing practices in a map-based social network, people can begin to relate their individual efforts to the global conservation potential of living and working landscapes,” says Janis Dickinson, Professor and Director of Citizen Science at the Cornell Lab of Ornithology.

YardMap allows participants to interact and build on each other’s work – something that crowdsourcing alone cannot achieve. The project serves as an important model for how such bottom-up, socially networked systems can bring about scalable changes in how we manage residential landscapes.

HCI has recently set out to use crowd-power to accelerate Cornell-based Alzheimer’s disease research. WeCureAlz.com combines two successful microtasking systems into an interactive analytic pipeline that builds blood flow models of mouse brains. The stardust@home system, which was used to search for comet dust in one million images of aerogel, is being adapted to identify stalled blood vessels, which will then be pinpointed in the brain by a modified version of the EyeWire system.

“By enabling members of the general public to play some simple online game, we expect to reduce the time to treatment discovery from decades to just a few years”, says HCI director and lead author, Dr. Pietro Michelucci. “This gives an opportunity for anyone, including the tech-savvy generation of caregivers and early stage AD patients, to take the matter into their own hands.”
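
To make the “real-time crowd-based inputs” idea from the release a little more concrete, here is a minimal sketch of an iterative human-in-the-loop pipeline in Python. The routing and processing functions are hypothetical placeholders, not the API of EyeWire, YardMap or WeCureAlz.

```python
# Hypothetical sketch of an iterative human-computation pipeline: each
# contribution is machine-processed in real time, then handed to another
# participant for improvement or a different kind of analysis.
# Not the API of any real platform.

def run_pipeline(task, contributors, machine_step, rounds=3):
    """Alternate human contributions with automated processing."""
    result = task
    for i in range(rounds):
        person = contributors[i % len(contributors)]  # route to the next participant
        result = person.improve(result)               # human refines or reinterprets
        result = machine_step(result)                 # computer aggregates and validates
    return result
```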

Here’s a link to and a citation for the paper,

Human Computation: The power of crowds by Pietro Michelucci and Janis L. Dickinson. Science 1 January 2016: Vol. 351 no. 6268 pp. 32-33 DOI: 10.1126/science.aad6499

This paper is behind a paywall but the abstract is freely available,

Human computation, a term introduced by Luis von Ahn (1), refers to distributed systems that combine the strengths of humans and computers to accomplish tasks that neither can do alone (2). The seminal example is reCAPTCHA, a Web widget used by 100 million people a day when they transcribe distorted text into a box to prove they are human. This free cognitive labor provides users with access to Web content and keeps websites safe from spam attacks, while feeding into a massive, crowd-powered transcription engine that has digitized 13 million articles from The New York Times archives (3). But perhaps the best known example of human computation is Wikipedia. Despite initial concerns about accuracy (4), it has become the key resource for all kinds of basic information. Information science has begun to build on these early successes, demonstrating the potential to evolve human computation systems that can model and address wicked problems (those that defy traditional problem-solving methods) at the intersection of economic, environmental, and sociopolitical systems.

*’and’ changed to ‘an’ and ‘Jan. 3, 2016’ changed to ‘Jan. 4, 2016’ on Jan. 4, 2016 at 1543 PDT.

Clues as to how mother of pearl is made

Iridescence seems to fascinate scientists and a team at Cornell University is no exception (from a Dec. 4, 2015 news item on Nanowerk),

Mother nature has a lot to teach us about how to make things.

With that in mind, Cornell researchers have uncovered the process by which mollusks manufacture nacre – commonly known as “mother of pearl.” Along with its iridescent beauty, this material found on the insides of seashells is incredibly strong. Knowing how it’s made could lead to new methods to synthesize a variety of new materials with as yet unguessed properties.

“We have all these high-tech facilities to make new materials, but just take a walk along the beach and see what’s being made,” said postdoctoral research associate Robert Hovden, M.S. ’10, Ph.D. ’14. “Nature is doing incredible nanoscience, and we need to dig into it.”

A Dec. 4, 2015 Cornell University news release by Bill Steele, which originated the news item, expands on the theme,

Using a high-resolution scanning transmission electron microscope (STEM), the researchers examined a cross section of the shell of a large Mediterranean mollusk called the noble pen shell or fan mussel (Pinna nobilis). To make the observations possible they had to develop a special sample preparation process. Using a diamond saw, they cut a thin slice through the shell, then in effect sanded it down with a thin film in which micron-sized bits of diamond were embedded, until they had a sample less than 30 nanometers thick, suitable for STEM observation. As in sanding wood, they moved from heavier grits for fast cutting to a fine final polish to make a surface free of scratches that might distort the STEM image.

Images with nanometer-scale resolution revealed that the organism builds nacre by depositing a series of layers of a material containing nanoparticles of calcium carbonate. Moving from the inside out, these particles are seen coming together in rows and fusing into flat crystals laminated between layers of organic material. (The layers are thinner than the wavelengths of visible light, causing the scattering that gives the material its iridescence.)

Exactly what happens at each step is a topic for future research. For now, the researchers said in their paper, “We cannot go back in time” to observe the process. But knowing that nanoparticles are involved is a valuable insight for materials scientists, Hovden said.
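
For a rough sense of why layers thinner than the wavelength of visible light produce iridescence, a back-of-the-envelope multilayer (Bragg-stack) estimate helps. The thicknesses and refractive indices below are assumed round numbers chosen for illustration, not measurements from the paper.

```python
# Toy Bragg-stack estimate of which wavelengths a layered structure reflects
# strongly. All values are illustrative assumptions, not data from the paper.

n_aragonite, d_aragonite = 1.53, 450e-9  # assumed refractive index and tablet thickness (m)
n_organic, d_organic = 1.40, 30e-9       # assumed organic interlayer

optical_period = n_aragonite * d_aragonite + n_organic * d_organic

for m in (1, 2, 3):  # interference orders
    wavelength = 2 * optical_period / m
    print(f"order {m}: strong reflection near {wavelength * 1e9:.0f} nm")

# With these numbers the second- and third-order peaks land in the visible
# range (red and blue-green) -- the kind of wavelength-selective reflection
# that gives nacre its iridescent colours.
```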

Here’s an image from the researchers,

Electron microscope image of a cross-section of a mollusk shell. The organism builds its shell from the inside out by depositing layers of calcium carbonate nanoparticles. As the particle density increases over time they fuse into large flat crystals embedded in layers of organic material to form nacre. Courtesy: Cornell University

Here’s a link to and a citation for the paper,

Nanoscale assembly processes revealed in the nacroprismatic transition zone of Pinna nobilis mollusc shells by Robert Hovden, Stephan E. Wolf, Megan E. Holtz, Frédéric Marin, David A. Muller, & Lara A. Estroff. Nature Communications 6, Article number: 10097 doi:10.1038/ncomms10097 Published 03 December 2015

This is an open access paper.

Café Scientifique (Vancouver, Canada) and noise on Oct. 27, 2015

On Tuesday, October 27, 2015, Café Scientifique, in the back room of The Railway Club (2nd floor of 579 Dunsmuir St. [at Seymour St.]), will be hosting a talk on the history of noise (from the Oct. 13, 2015 announcement),

Our speaker for the evening will be Dr. Shawn Bullock.  The title of his talk is:

The History of Noise: Perspectives from Physics and Engineering

The word “noise” is often synonymous with “nuisance,” which implies something to be avoided as much as possible. We label blaring sirens, the space between stations on the radio dial and the din of a busy street as “noise.” Is noise simply a sound we don’t like? We will consider the evolution of how scientists and engineers have thought about noise, beginning in the 19th-century and continuing to the present day. We will explore the idea of noise both as a social construction and as a technological necessity. We’ll also touch on critical developments in the study of sound, the history of physics and engineering, and the development of communications technology.

This description is almost identical to the one Bullock gave for a November 2014 talk titled ‘Snap, Crackle, Pop!: A Short History of Noise’, which he summarized this way after delivering it,

I used ideas from the history of physics, the history of music, the discipline of sound studies, and the history of electrical engineering to make the point that understanding “noise” is essential to understanding advancements in physics and engineering in the last century. We began with a discussion of 19th-century attitudes toward noise (and its association with “progress” and industry) before moving on to examine the early history of recorded sound and music, early attempts to measure noise, and the noise abatement movement. I concluded with a brief overview of my recent work on the role of noise in the development of the modem during the early Cold War.

You can find out more about Dr. Bullock, who is an assistant professor of science education at Simon Fraser University, here at his website.

On the subject of noise, although not directly related to Bullock’s work, there’s some research suggesting that noise may be having a serious impact on marine life. From an Oct. 8, 2015 Elsevier press release on EurekAlert,

Quiet areas should be sectioned off in the oceans to give us a better picture of the impact human generated noise is having on marine animals, according to a new study published in Marine Pollution Bulletin. By assigning zones through which ships cannot travel, researchers will be able to compare the behavior of animals in these quiet zones to those living in noisier areas, helping decide the best way to protect marine life from harmful noise.

The authors of the study, from the University of St Andrews, UK, the Oceans Initiative, Cornell University, USA, and Curtin University, Australia, say focusing on protecting areas that are still quiet will give researchers a better insight into the true impact we are having on the oceans.

Almost all marine organisms, including mammals like whales and dolphins, fish and even invertebrates, use sound to find food, avoid predators, choose mates and navigate. Chronic noise from human activities such as shipping can have a big impact on these animals, since it interferes with their acoustic signaling – increased background noise can mean animals are unable to hear important signals, and they tend to swim away from sources of noise, disrupting their normal behavior.

The number of ships in the oceans has increased fourfold since 1992, increasing marine noise dramatically. Ships are also getting bigger, and therefore noisier: in 2000 the biggest cargo ships could carry 8,000 containers; today’s biggest carry 18,000.

“Marine animals, especially whales, depend on a naturally quiet ocean for survival, but humans are polluting major portions of the ocean with noise,” said Dr. Christopher Clark from the Bioacoustics Research Program, Cornell University. “We must make every effort to protect quiet ocean regions now, before they grow too noisy from the din of our activities.”

For the new study, lead author Dr. Rob Williams and the team mapped out areas of high and low noise pollution in the oceans around Canada. Using shipping route and speed data from Environment Canada, the researchers put together a model of noise based on ships’ location, size and speed, calculating the cumulative sound they produce over the course of a year. They used the maps to predict how noisy they thought a particular area ought to be.

To test their predictions, in partnership with Cornell University, they deployed 12 autonomous hydrophones – devices that can measure noise in water – and found a correlation in terms of how the areas ranked from quietest to noisiest. The quiet areas are potential noise conservation zones.

“We tend to focus on problems in conservation biology. This was a fun study to work on, because we looked for opportunities to protect species by working with existing patterns in noise and animal distribution, and found that British Columbia offers many important habitats for whales that are still quiet,” said Dr. Rob Williams, lead author of the study. “If we think of quiet, wild oceans as a natural resource, we are lucky that Canada is blessed with globally rare pockets of acoustic wilderness. It makes sense to talk about protecting acoustic sanctuaries before we lose them.”

Although it is clear that noise has an impact on marine organisms, the exact effect is still not well understood. By changing their acoustic environment, we could be inadvertently choosing winners and losers in terms of survival; researchers are still at an early stage of predicting who will win or lose under different circumstances. The quiet areas the team identified could serve as experimental control sites for research like the International Quiet Ocean Experiment to see what effects ocean noise is having on marine life.

“Sound is perceived differently by different species, and some are more affected by noise than others,” said Christine Erbe, co-author of the study and Director of the Marine Science Center, Curtin University, Australia.

So far, the researchers have focused on marine mammals – whales, dolphins, porpoises, seals and sea lions. With a Pew Fellowship in Marine Conservation, Dr. Williams now plans to look at the effects of noise on fish, which are less well understood. By starting to quantify those effects and letting people know what the likely economic impact might be on fisheries, or on fish that are culturally important, Dr. Williams hopes to get the attention of the people who make decisions that affect ocean noise.

“When protecting highly mobile and migratory species that are poorly studied, it may make sense to focus on threats rather than the animals themselves. Shipping patterns decided by humans are often more predictable than the movements of whales and dolphins,” said Erin Ashe, co-author of the study and co-founder of the Oceans Initiative from the University of St Andrews.

Keeping areas of the ocean quiet is easier than reducing noise in already busy zones, say the authors of the study. However, if future research that stems from noise protected zones indicates that overall marine noise should be reduced, there are several possible approaches to reducing noise. The first is speed reduction: the faster a ship goes, the noisier it gets, so slowing down would reduce overall noise. The noisiest ships could also be targeted for replacement: by reducing the noise produced by the noisiest 10% of ships in use today, overall marine noise could be reduced by more than half. The third, more long-term, option would be to build quieter ships from the outset.
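
Two quantitative points in the release are worth unpacking: how “cumulative sound” from many ships is tallied, and why silencing only the noisiest 10% of ships could cut overall noise by more than half. Both come down to decibels being logarithmic, so contributions have to be summed on an intensity scale, where a few loud sources dominate the total. The sketch below, in Python, illustrates this with invented source levels and a simple spherical-spreading loss model; the study’s actual noise model is considerably more detailed.

```python
import math

# Toy illustration only: invented source levels and a simple 20*log10(r)
# spherical-spreading loss, not the model used in the study.

def received_db(source_level_db, distance_m):
    """Received level at a listening point after geometric spreading loss."""
    return source_level_db - 20 * math.log10(distance_m)

def cumulative_db(received_levels_db):
    """Sum individual ship contributions on an intensity scale, back to dB."""
    total_intensity = sum(10 ** (level / 10) for level in received_levels_db)
    return 10 * math.log10(total_intensity)

# A made-up fleet heard from one spot: 90 quieter ships and 10 loud ones,
# each given as (source level in dB, distance in metres).
fleet = [(170, 20_000)] * 90 + [(190, 20_000)] * 10
levels = [received_db(s, d) for s, d in fleet]

all_ships = cumulative_db(levels)
quietest_90 = cumulative_db(sorted(levels)[: int(len(levels) * 0.9)])  # noisiest 10% removed

print(f"cumulative level, all ships:         {all_ships:.1f} dB")
print(f"cumulative level, noisiest 10% gone: {quietest_90:.1f} dB")
# In this toy example removing the loudest tenth of the fleet drops the
# total by about 11 dB, i.e. over 90% of the acoustic energy -- far more
# than half, which is the point the release is making.
```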

I can’t help wondering why Canadian scientists aren’t involved in this research taking place off our shores. Regardless, here’s a link to and a citation for the paper,

Quiet(er) marine protected areas by Rob Williams, Christine Erbe, Erin Ashe, & Christopher W. Clark. Marine Pollution Bulletin Available online 16 September 2015 In Press, Corrected Proof doi:10.1016/j.marpolbul.2015.09.012

This is an open access paper.