Tag Archives: Cornell University

Cornell University researchers breach blood-brain barrier

There are other teams working on ways to breach the blood-brain barrier (my March 26, 2015 post highlights work from a team at the University of Montréal), but this team from Cornell is working with a drug that has already been approved by the US Food and Drug Administration (FDA), according to an April 8, 2016 news item on ScienceDaily,

Cornell researchers have discovered a way to penetrate the blood brain barrier (BBB) that may soon permit delivery of drugs directly into the brain to treat disorders such as Alzheimer’s disease and chemotherapy-resistant cancers.

The BBB is a layer of endothelial cells that selectively allow entry of molecules needed for brain function, such as amino acids, oxygen, glucose and water, while keeping others out.

Cornell researchers report that an FDA-approved drug called Lexiscan activates receptors — called adenosine receptors — that are expressed on these BBB cells.

An April 4, 2016 Cornell University news release by Krishna Ramanujan, which originated the news item, expands on the theme,

“We can open the BBB for a brief window of time, long enough to deliver therapies to the brain, but not too long so as to harm the brain. We hope in the future, this will be used to treat many types of neurological disorders,” said Margaret Bynoe, associate professor in the Department of Microbiology and Immunology in Cornell’s College of Veterinary Medicine. …

The researchers were able to deliver chemotherapy drugs into the brains of mice, as well as large molecules, like an antibody that binds to Alzheimer’s disease plaques, according to the paper.

To test whether this drug delivery system has application to the human BBB, the lab engineered a BBB model using human primary brain endothelial cells. They observed that Lexiscan opened the engineered BBB in a manner similar to its actions in mice.

Bynoe and Kim discovered that a protein called P-glycoprotein is highly expressed on brain endothelial cells and blocks the entry of most drugs delivered to the brain. Lexiscan acts on one of the adenosine receptors expressed on BBB endothelial cells, specifically activating them. They showed that Lexiscan down-regulates P-glycoprotein expression and function on the BBB endothelial cells. It acts like a switch that can be turned on and off in a time-dependent manner, which provides a measure of safety for the patient.

“We demonstrated that down-modulation of P-glycoprotein function coincides exquisitely with chemotherapeutic drug accumulation” in the brains of mice and across an engineered BBB using human endothelial cells, Bynoe said. “The amount of chemotherapeutic drugs that accumulated in the brain was significant.”

In addition to P-glycoprotein’s role in inhibiting foreign substances from penetrating the BBB, the protein is also expressed by many different types of cancers and makes these cancers resistant to chemotherapy.

“This finding has significant implications beyond modulation of the BBB,” Bynoe said. “It suggests that in the future, we may be able to modulate adenosine receptors to regulate P-glycoprotein in the treatment of cancer cells resistant to chemotherapy.”

Because Lexiscan is an FDA-approved drug, “the potential for a breakthrough in drug delivery systems for diseases such as Alzheimer’s disease, Parkinson’s disease, autism, brain tumors and chemotherapy-resistant cancers is not far off,” Bynoe said.

Another advantage is that these molecules (adenosine receptors and P-glycoprotein) are naturally expressed in mammals. “We don’t have to knock out a gene or insert one for a therapy to work,” Bynoe said.

The study was funded by the National Institutes of Health and the Kwanjung Educational Foundation.

Here’s a link to and a citation for the paper,

A2A adenosine receptor modulates drug efflux transporter P-glycoprotein at the blood-brain barrier by Do-Geun Kim and Margaret S. Bynoe. J Clin Invest. doi:10.1172/JCI76207 First published April 4, 2016

Copyright © 2016, The American Society for Clinical Investigation.

This paper appears to be open access.

Using copyright to shut down easy access to scientific research

This started out as a simple post on copyright and publishers vis-à-vis Sci-Hub, but then John Dupuis wrote a think piece (with which I disagree somewhat) on the situation in a Feb. 22, 2016 posting on his blog, Confessions of a Science Librarian. More on Dupuis and my take on it after a description of the situation.

Sci-Hub

Before getting to the controversy and lawsuit, here’s a preamble about the purpose of copyright as per the US Constitution, from Mike Masnick’s Feb. 17, 2016 posting on Techdirt,

Lots of people are aware of the Constitutional underpinnings of our copyright system. Article 1, Section 8, Clause 8 famously says that Congress has the following power:

To promote the progress of science and useful arts, by securing for limited times to authors and inventors the exclusive right to their respective writings and discoveries.

We’ve argued at great length over the importance of the preamble of that section, “to promote the progress,” but many people are confused about the terms “science” and “useful arts.” In fact, many people not well-versed in the issue often get the two backwards and think that “science” refers to inventions, and thus enables a patent system, while “useful arts” refers to “artistic works” and thus enables the copyright system. The opposite is actually the case. “Science” at the time the Constitution was written was actually synonymous with “learning” and “education” (while “useful arts” was a term meaning invention and new productivity tools).

While over the centuries, many who stood to benefit from an aggressive system of copyright control have tried to rewrite, whitewash or simply ignore this history, turning the copyright system falsely into a “property” regime, the fact is that it was always intended as a system to encourage the wider dissemination of ideas for the purpose of education and learning. The (potentially misguided) intent appeared to be that by granting exclusive rights to a certain limited class of works, it would encourage the creation of those works, which would then be useful in educating the public (and within a few decades enter the public domain).

Masnick’s preamble leads to a case where Elsevier (Publishers) has attempted to halt the very successful Sci-Hub, which bills itself as “the first pirate website in the world to provide mass and public access to tens of millions of research papers.” From Masnick’s Feb. 17, 2016 posting,

Rightfully, this is being celebrated as a massive boon to science and learning, making these otherwise hidden nuggets of knowledge and science that were previously locked up and hidden away available to just about anyone. And, to be clear, this absolutely fits with the original intent of copyright law — which was to encourage such learning. In a very large number of cases, it is not the creators of this content and knowledge who want the information to be locked up. Many researchers and academics know that their research has much more of an impact the wider it is seen, read, shared and built upon. But the gatekeepers — such as Elsevier and other large academic publishers — have stepped in and demanded copyright, basically for doing very little.

They do not pay the researchers for their work. Often, in fact, that work is funded by taxpayer funds. In some cases, in certain fields, the publishers actually demand that the authors of these papers pay to submit them. The journals do not pay to review the papers either. They outsource that work to other academics for “peer review” — which again, is unpaid. Finally, these publishers profit massively, having convinced many universities that they need to subscribe, often paying many tens or even hundreds of thousands of dollars for subscriptions to journals that very few actually read.

Simon Oxenham of the Neurobonkers blog on the big think website wrote a Feb. 9 (?), 2016 post about Sci-Hub, its originator, and its current legal fight (Note: Links have been removed),

On September 5th, 2011, Alexandra Elbakyan, a researcher from Kazakhstan, created Sci-Hub, a website that bypasses journal paywalls, illegally providing access to nearly every scientific paper ever published immediately to anyone who wants it. …

This was a game changer. Before September 2011, there was no way for people to freely access paywalled research en masse; researchers like Elbakyan were out in the cold. Sci-Hub is the first website to offer this service and now makes the process as simple as the click of a single button.

As the number of papers in the LibGen database expands, the frequency with which Sci-Hub has to dip into publishers’ repositories falls and consequently the risk of Sci-Hub triggering its alarm bells becomes ever smaller. Elbakyan explains, “We have already downloaded most paywalled articles to the library … we have almost everything!” This may well be no exaggeration. Elsevier, one of the most prolific and controversial scientific publishers in the world, recently alleged in court that Sci-Hub is currently harvesting Elsevier content at a rate of thousands of papers per day. Elbakyan puts the number of papers downloaded from various publishers through Sci-Hub in the range of hundreds of thousands per day, delivered to a running total of over 19 million visitors.

In one fell swoop, a network has been created that likely has a greater level of access to science than any individual university, or even government for that matter, anywhere in the world. Sci-Hub represents the sum of countless different universities’ institutional access — literally a world of knowledge. This is important now more than ever in a world where even Harvard University can no longer afford to pay skyrocketing academic journal subscription fees, while Cornell axed many of its Elsevier subscriptions over a decade ago. For researchers outside the US’ and Western Europe’s richest institutions, routine piracy has long been the only way to conduct science, but increasingly the problem of unaffordable journals is coming closer to home.

… This was the experience of Elbakyan herself, who studied in Kazakhstan University and just like other students in countries where journal subscriptions are unaffordable for institutions, was forced to pirate research in order to complete her studies. Elbakyan told me, “Prices are very high, and that made it impossible to obtain papers by purchasing. You need to read many papers for research, and when each paper costs about 30 dollars, that is impossible.”

Sci-Hub is not expected to win its case in the US, where one judge has already ordered a preliminary injunction making its former domain unavailable. (Sci-Hub has since moved.) Should you be sympathetic to Elsevier, you may want to take this into account (Note: Links have been removed),

Elsevier is the world’s largest academic publisher and by far the most controversial. Over 15,000 researchers have vowed to boycott the publisher for charging “exorbitantly high prices” and bundling expensive, unwanted journals with essential journals, a practice that allegedly is bankrupting university libraries. Elsevier also supports SOPA and PIPA, which the researchers claim threatens to restrict the free exchange of information. Elsevier is perhaps most notorious for delivering takedown notices to academics, demanding them to take their own research published with Elsevier off websites like Academia.edu.

The movement against Elsevier has only gathered speed over the course of the last year with the resignation of 31 editorial board members from the Elsevier journal Lingua, who left in protest to set up their own open-access journal, Glossa. Now the battleground has moved from the comparatively niche field of linguistics to the far larger field of cognitive sciences. Last month, a petition of over 1,500 cognitive science researchers called on the editors of the Elsevier journal Cognition to demand Elsevier offer “fair open access”. Elsevier currently charges researchers $2,150 per article if researchers wish their work published in Cognition to be accessible by the public, a sum far higher than the charges that led to the Lingua mutiny.

In her letter to Sweet [New York District Court Judge Robert W. Sweet], Elbakyan made a point that will likely come as a shock to many outside the academic community: Researchers and universities don’t earn a single penny from the fees charged by publishers [emphasis mine] such as Elsevier for accepting their work, while Elsevier has an annual income over a billion U.S. dollars.

As Masnick noted, much of this research is done on the public dime (i.e., funded by taxpayers). For her part, Elbakyan has written a letter defending her actions on ethical rather than legal grounds.

I recommend reading the Oxenham article as it provides details about how the site works and includes text from the letter Elbakyan wrote.  For those who don’t have much time, Masnick’s post offers a good précis.

Sci-Hub suit as a distraction from the real issues?

Getting to Dupuis’ Feb. 22, 2016 posting and his perspective on the situation,

My take? Mostly that it’s a sideshow.

One aspect that I have ranted about on Twitter which I think is worth mentioning explicitly is that I think Elsevier and all the other big publishers are actually quite happy to feed the social media rage machine with these whack-a-mole controversies. The controversies act as a sideshow, distracting from the real issues and solutions that they would prefer all of us not to think about.

By whack-a-mole controversies I mean this recurring story of some person or company or group that wants to “free” scholarly articles and then gets sued or harassed by the big publishers or their proxies to force them to shut down. This provokes wide outrage and condemnation aimed at the publishers, especially Elsevier who is reserved a special place in hell according to most advocates of openness (myself included).

In other words: Elsevier and its ilk are thrilled to be the target of all the outrage. Focusing on the whack-a-mole game distracts us from fixing the real problem: the entrenched systems of prestige, incentive and funding in academia. As long as researchers are channelled into “high impact” journals, as long as tenure committees reward publishing in closed rather than open venues, nothing will really change. Until funders get serious about mandating true open access publishing and are willing to put their money where their intentions are, nothing will change. Or at least, progress will be mostly limited to surface victories rather than systemic change.

I think Dupuis is referencing a conflict theory (I can’t remember what it’s called) which suggests that certain types of conflicts help to keep systems in place while apparently attacking those systems. His point is well made but I disagree somewhat in that I think these conflicts can also raise awareness and activate people who might otherwise ignore or mindlessly comply with those systems. So, if Elsevier and the other publishers are using these legal suits as diversionary tactics, they may find they’ve made a strategic error.

ETA April 29, 2016: Sci-Hub does seem to move around so I’ve updated the links so it can be accessed but Sci-Hub’s situation can change at any moment.

Humans, computers, and a note of optimism

As an* antidote to my Jan. 4*, 2016 post titled Nanotechnology and cybersecurity risks, and if you’re looking to usher in 2016 on a hopeful note, this Dec. 31, 2015 Human Computation Institute news release on EurekAlert is very timely,

The combination of human and computer intelligence might be just what we need to solve the “wicked” problems of the world, such as climate change and geopolitical conflict, say researchers from the Human Computation Institute (HCI) and Cornell University.

In an article published in the journal Science, the authors present a new vision of human computation (the science of crowd-powered systems), which pushes beyond traditional limits, and takes on hard problems that until recently have remained out of reach.

Humans surpass machines at many things, ranging from simple pattern recognition to creative abstraction. With the help of computers, these cognitive abilities can be effectively combined into multidimensional collaborative networks that achieve what traditional problem-solving cannot.

Most of today’s human computation systems rely on sending bite-sized ‘micro-tasks’ to many individuals and then stitching together the results. For example, 165,000 volunteers in EyeWire have analyzed thousands of images online to help build the world’s most complete map of human retinal neurons.

This microtasking approach alone cannot address the tough challenges we face today, say the authors. A radically new approach is needed to solve “wicked problems” – those that involve many interacting systems that are constantly changing, and whose solutions have unforeseen consequences (e.g., corruption resulting from financial aid given in response to a natural disaster).

New human computation technologies can help. Recent techniques provide real-time access to crowd-based inputs, where individual contributions can be processed by a computer and sent to the next person for improvement or analysis of a different kind. This enables the construction of more flexible collaborative environments that can better address the most challenging issues.
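For readers who like to see how such a pipeline might be wired up, here is a minimal sketch in Python. The participant functions and the machine-processing step are purely hypothetical stand-ins (not any real crowd platform’s API); the point is simply the alternation of human contributions with computer processing before the item is routed to the next person.

```python
from typing import Callable, Dict, List

def machine_step(item: Dict) -> Dict:
    """Computer pass between humans, e.g. checking or scoring the contribution."""
    item["machine_checked"] = True
    return item

def run_pipeline(item: Dict, human_steps: List[Callable[[Dict], Dict]]) -> Dict:
    """Alternate human contributions with machine processing before routing
    the item on to the next participant."""
    for human in human_steps:
        item = human(item)          # a person improves or annotates the item
        item = machine_step(item)   # the computer processes it in real time
    return item

# Hypothetical participants performing different kinds of analysis:
annotate = lambda item: {**item, "annotation": "possible stalled vessel"}
verify = lambda item: {**item, "verified": True}

print(run_pipeline({"image_id": 42}, [annotate, verify]))
```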

This idea is already taking shape in several human computation projects, including YardMap.org, which was launched by the Cornell Lab of Ornithology in 2012 to map global conservation efforts one parcel at a time.

“By sharing and observing practices in a map-based social network, people can begin to relate their individual efforts to the global conservation potential of living and working landscapes,” says Janis Dickinson, Professor and Director of Citizen Science at the Cornell Lab of Ornithology.

YardMap allows participants to interact and build on each other’s work – something that crowdsourcing alone cannot achieve. The project serves as an important model for how such bottom-up, socially networked systems can bring about scalable changes in how we manage residential landscapes.

HCI has recently set out to use crowd-power to accelerate Cornell-based Alzheimer’s disease research. WeCureAlz.com combines two successful microtasking systems into an interactive analytic pipeline that builds blood flow models of mouse brains. The stardust@home system, which was used to search for comet dust in one million images of aerogel, is being adapted to identify stalled blood vessels, which will then be pinpointed in the brain by a modified version of the EyeWire system.

“By enabling members of the general public to play some simple online game, we expect to reduce the time to treatment discovery from decades to just a few years”, says HCI director and lead author, Dr. Pietro Michelucci. “This gives an opportunity for anyone, including the tech-savvy generation of caregivers and early stage AD patients, to take the matter into their own hands.”

Here’s a link to and a citation for the paper,

Human computation: The power of crowds by Pietro Michelucci and Janis L. Dickinson. Science 1 January 2016: Vol. 351 no. 6268 pp. 32-33 DOI: 10.1126/science.aad6499

This paper is behind a paywall but the abstract is freely available,

Human computation, a term introduced by Luis von Ahn (1), refers to distributed systems that combine the strengths of humans and computers to accomplish tasks that neither can do alone (2). The seminal example is reCAPTCHA, a Web widget used by 100 million people a day when they transcribe distorted text into a box to prove they are human. This free cognitive labor provides users with access to Web content and keeps websites safe from spam attacks, while feeding into a massive, crowd-powered transcription engine that has digitized 13 million articles from The New York Times archives (3). But perhaps the best known example of human computation is Wikipedia. Despite initial concerns about accuracy (4), it has become the key resource for all kinds of basic information. Information science has begun to build on these early successes, demonstrating the potential to evolve human computation systems that can model and address wicked problems (those that defy traditional problem-solving methods) at the intersection of economic, environmental, and sociopolitical systems.

*’and’ changed to ‘an’ and ‘Jan. 3, 2016’ changed to ‘Jan. 4, 2016’ on Jan. 4, 2016 at 1543 PDT.

Clues as to how mother of pearl is made

Iridescence seems to fascinate scientists and a team at Cornell University is no exception (from a Dec. 4, 2015 news item on Nanowerk),

Mother nature has a lot to teach us about how to make things.

With that in mind, Cornell researchers have uncovered the process by which mollusks manufacture nacre – commonly known as “mother of pearl.” Along with its iridescent beauty, this material found on the insides of seashells is incredibly strong. Knowing how it’s made could lead to new methods to synthesize a variety of new materials with as yet unguessed properties.

“We have all these high-tech facilities to make new materials, but just take a walk along the beach and see what’s being made,” said postdoctoral research associate Robert Hovden, M.S. ’10, Ph.D. ’14. “Nature is doing incredible nanoscience, and we need to dig into it.”

A Dec. 4, 2015 Cornell University news release by Bill Steele, which originated the news item, expands on the theme,

Using a high-resolution scanning transmission electron microscope (STEM), the researchers examined a cross section of the shell of a large Mediterranean mollusk called the noble pen shell or fan mussel (Pinna nobilis). To make the observations possible they had to develop a special sample preparation process. Using a diamond saw, they cut a thin slice through the shell, then in effect sanded it down with a thin film in which micron-sized bits of diamond were embedded, until they had a sample less than 30 nanometers thick, suitable for STEM observation. As in sanding wood, they moved from heavier grits for fast cutting to a fine final polish to make a surface free of scratches that might distort the STEM image.

Images with nanometer-scale resolution revealed that the organism builds nacre by depositing a series of layers of a material containing nanoparticles of calcium carbonate. Moving from the inside out, these particles are seen coming together in rows and fusing into flat crystals laminated between layers of organic material. (The layers are thinner than the wavelengths of visible light, causing the scattering that gives the material its iridescence.)

Exactly what happens at each step is a topic for future research. For now, as the researchers note in their paper, “We cannot go back in time” to observe the process. But knowing that nanoparticles are involved is a valuable insight for materials scientists, Hovden said.
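As a rough aside on the iridescence mentioned above: for a periodic stack of sub-wavelength layers, the strongest first-order reflection sits near twice the optical thickness of one repeat (lambda ≈ 2 × n × d). Here is a back-of-the-envelope sketch with illustrative numbers; the thickness and refractive index are my assumptions, not measurements from the Cornell paper.

```python
# Rough estimate of the colour a periodic stack of sub-wavelength layers
# reflects most strongly (first-order condition, lambda ~ 2 * n * d).
# Thickness and refractive index are illustrative assumptions only.

n_layer = 1.6        # assumed effective refractive index of one repeat
d_layer_nm = 150.0   # assumed repeat thickness, well below visible wavelengths

peak_nm = 2 * n_layer * d_layer_nm
print(f"strongest first-order reflection near {peak_nm:.0f} nm")  # ~480 nm (blue-green)
```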

Here’s an image from the researchers,

Electron microscope image of a cross-section of a mollusk shell. The organism builds its shell from the inside out by depositing layers of calcium carbonate nanoparticles. As the particle density increases over time they fuse into large flat crystals embedded in layers of organic material to form nacre. Courtesy: Cornell University

Here’s a link to and a citation for the paper,

Nanoscale assembly processes revealed in the nacroprismatic transition zone of Pinna nobilis mollusc shells by Robert Hovden, Stephan E. Wolf, Megan E. Holtz, Frédéric Marin, David A. Muller, & Lara A. Estroff. Nature Communications 6, Article number: 10097 doi:10.1038/ncomms10097 Published 03 December 2015

This is an open access paper.

Café Scientifique (Vancouver, Canada) and noise on Oct. 27, 2015

On Tuesday, October 27, 2015, Café Scientifique, in the back room of The Railway Club (2nd floor of 579 Dunsmuir St. [at Seymour St.]), will be hosting a talk on the history of noise (from the Oct. 13, 2015 announcement),

Our speaker for the evening will be Dr. Shawn Bullock.  The title of his talk is:

The History of Noise: Perspectives from Physics and Engineering

The word “noise” is often synonymous with “nuisance,” which implies something to be avoided as much as possible. We label blaring sirens, the space between stations on the radio dial and the din of a busy street as “noise.” Is noise simply a sound we don’t like? We will consider the evolution of how scientists and engineers have thought about noise, beginning in the 19th-century and continuing to the present day. We will explore the idea of noise both as a social construction and as a technological necessity. We’ll also touch on critical developments in the study of sound, the history of physics and engineering, and the development of communications technology.

This description is almost identical to the one Bullock gave for a November 2014 talk titled Snap, Crackle, Pop!: A Short History of Noise, which he summarized this way after delivering it,

I used ideas from the history of physics, the history of music, the discipline of sound studies, and the history of electrical engineering to make the point that understanding “noise” is essential to understanding advancements in physics and engineering in the last century. We began with a discussion of 19th-century attitudes toward noise (and its association with “progress” and industry) before moving on to examine the early history of recorded sound and music, early attempts to measure noise, and the noise abatement movement. I concluded with a brief overview of my recent work on the role of noise in the development of the modem during the early Cold War.

You can find out more about Dr. Bullock, an assistant professor of science education at Simon Fraser University, at his website.

On the subject of noise, although not directly related to Bullock’s work, there’s some research suggesting that noise may be having a serious impact on marine life. From an Oct. 8, 2015 Elsevier press release on EurekAlert,

Quiet areas should be sectioned off in the oceans to give us a better picture of the impact human generated noise is having on marine animals, according to a new study published in Marine Pollution Bulletin. By assigning zones through which ships cannot travel, researchers will be able to compare the behavior of animals in these quiet zones to those living in noisier areas, helping decide the best way to protect marine life from harmful noise.

The authors of the study, from the University of St Andrews, UK, the Oceans Initiative, Cornell University, USA, and Curtin University, Australia, say focusing on protecting areas that are still quiet will give researchers a better insight into the true impact we are having on the oceans.

Almost all marine organisms, including mammals like whales and dolphins, fish and even invertebrates, use sound to find food, avoid predators, choose mates and navigate. Chronic noise from human activities such as shipping can have a big impact on these animals, since it interferes with their acoustic signaling – increased background noise can mean animals are unable to hear important signals, and they tend to swim away from sources of noise, disrupting their normal behavior.

The number of ships in the oceans has increased fourfold since 1992, increasing marine noise dramatically. Ships are also getting bigger, and therefore noisier: in 2000 the biggest cargo ships could carry 8,000 containers; today’s biggest carry 18,000.

“Marine animals, especially whales, depend on a naturally quiet ocean for survival, but humans are polluting major portions of the ocean with noise,” said Dr. Christopher Clark from the Bioacoustics Research Program, Cornell University. “We must make every effort to protect quiet ocean regions now, before they grow too noisy from the din of our activities.”

For the new study, lead author Dr. Rob Williams and the team mapped out areas of high and low noise pollution in the oceans around Canada. Using shipping route and speed data from Environment Canada, the researchers put together a model of noise based on ships’ location, size and speed, calculating the cumulative sound they produce over the course of a year. They used the maps to predict how noisy they thought a particular area ought to be.

To test their predictions, in partnership with Cornell University, they deployed 12 autonomous hydrophones – devices that can measure noise in water – and found a correlation in terms of how the areas ranked from quietest to noisiest. The quiet areas are potential noise conservation zones.
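To give a sense of what “calculating the cumulative sound” involves, here is a minimal sketch of how contributions from individual ships might be combined at one location. The source levels, distances, and simple spherical spreading loss are illustrative assumptions, not the study’s actual acoustic model; the key point is that decibel levels must be summed as linear power, not added directly.

```python
import math

def received_level_db(source_level_db: float, distance_m: float) -> float:
    """Received level after simple spherical spreading loss (20*log10(r))."""
    return source_level_db - 20 * math.log10(max(distance_m, 1.0))

def cumulative_level_db(levels_db: list) -> float:
    """Sum contributions as linear power, then convert back to decibels."""
    total_power = sum(10 ** (lvl / 10) for lvl in levels_db)
    return 10 * math.log10(total_power)

# Three hypothetical transits past the same spot in a year: (source dB, metres)
ships = [(190, 2_000), (185, 5_000), (175, 1_000)]
levels = [received_level_db(sl, d) for sl, d in ships]
print(f"cumulative level: {cumulative_level_db(levels):.1f} dB")
```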

“We tend to focus on problems in conservation biology. This was a fun study to work on, because we looked for opportunities to protect species by working with existing patterns in noise and animal distribution, and found that British Columbia offers many important habitats for whales that are still quiet,” said Dr. Rob Williams, lead author of the study. “If we think of quiet, wild oceans as a natural resource, we are lucky that Canada is blessed with globally rare pockets of acoustic wilderness. It makes sense to talk about protecting acoustic sanctuaries before we lose them.”

Although it is clear that noise has an impact on marine organisms, the exact effect is still not well understood. By changing their acoustic environment, we could be inadvertently choosing winners and losers in terms of survival; researchers are still at an early stage of predicting who will win or lose under different circumstances. The quiet areas the team identified could serve as experimental control sites for research like the International Quiet Ocean Experiment to see what effects ocean noise is having on marine life.

“Sound is perceived differently by different species, and some are more affected by noise than others,” said Christine Erbe, co-author of the study and Director of the Marine Science Center, Curtin University, Australia.

So far, the researchers have focused on marine mammals – whales, dolphins, porpoises, seals and sea lions. With a Pew Fellowship in Marine Conservation, Dr. Williams now plans to look at the effects of noise on fish, which are less well understood. By starting to quantify those effects and letting people know the likely economic impact on fisheries, or on fish that are culturally important, Dr. Williams hopes to get the attention of the people who make decisions that affect ocean noise.

“When protecting highly mobile and migratory species that are poorly studied, it may make sense to focus on threats rather than the animals themselves. Shipping patterns decided by humans are often more predictable than the movements of whales and dolphins,” said Erin Ashe, co-author of the study and co-founder of the Oceans Initiative from the University of St Andrews.

Keeping areas of the ocean quiet is easier than reducing noise in already busy zones, say the authors of the study. However, if future research that stems from noise protected zones indicates that overall marine noise should be reduced, there are several possible approaches to reducing noise. The first is speed reduction: the faster a ship goes, the noisier it gets, so slowing down would reduce overall noise. The noisiest ships could also be targeted for replacement: by reducing the noise produced by the noisiest 10% of ships in use today, overall marine noise could be reduced by more than half. The third, more long-term, option would be to build quieter ships from the outset.
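The claim that quieting the noisiest 10 per cent of ships could cut overall marine noise by more than half follows from the same decibel arithmetic: the acoustic energy of a skewed fleet is dominated by its loudest members. Here is a toy calculation with a made-up fleet (not the study’s data) to show the effect.

```python
# Toy check of the "replace the noisiest 10%" idea with a made-up fleet:
# 90 ships at 170 dB source level and 10 much louder ships at 190 dB.
total_power = lambda levels: sum(10 ** (lvl / 10) for lvl in levels)

before = total_power([170] * 90 + [190] * 10)
after = total_power([170] * 100)   # loudest 10% swapped for quieter ships

print(f"acoustic energy remaining: {after / before:.0%}")  # well under half
```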

I can’t help wondering why Canadian scientists aren’t involved in this research taking place off our shores. Regardless, here’s a link to and a citation for the paper,

Quiet(er) marine protected areas by Rob Williams, Christine Erbe, Erin Ashe, & Christopher W. Clark. Marine Pollution Bulletin Available online 16 September 2015 In Press, Corrected Proof doi:10.1016/j.marpolbul.2015.09.012

This is an open access paper.

A soft heart from Cornell University (US)

Caption: This is an artificial foam heart created by Rob Shepherd and his engineering team at Cornell University. Credit: Cornell University

It’s not exactly what I imagined on seeing the words “foam heart” but this is what researchers at Cornell University have produced as a ‘working concept’. An Oct. 14, 2015 Cornell University news release (also on EurekAlert but dated Oct. 15, 2015) describes the research in more detail,

Cornell University researchers have developed a new lightweight and stretchable material with the consistency of memory foam that has potential for use in prosthetic body parts, artificial organs and soft robotics. The foam is unique because it can be formed and has connected pores that allow fluids to be pumped through it.

The polymer foam starts as a liquid that can be poured into a mold to create shapes, and because of the pathways for fluids, when air or liquid is pumped through it, the material moves and can change its length by 300 percent.

While applications for use inside the body require federal approval and testing, Cornell researchers are close to making prosthetic body parts with the so-called “elastomer foam.”

“We are currently pretty far along for making a prosthetic hand this way,” said Rob Shepherd, assistant professor of mechanical and aerospace engineering, and senior author of a paper appearing online and in an upcoming issue of the journal Advanced Materials. Benjamin Mac Murray, a graduate student in Shepherd’s lab, is the paper’s first author.

In the paper, the researchers demonstrated a pump they made into a heart, mimicking both shape and function.

The researchers used carbon fiber and silicone on the outside to fashion a structure that expands at different rates on the surface – to make a spherical shape into an egg shape, for example, that would hold its form when inflated.

“This paper was about exploring the effect of porosity on the actuator, but now we would like to make the foam actuators faster and with higher strength, so we can apply more force. We are also focusing on biocompatibility,” Shepherd said.

Cornell has made a video of researcher Rob Shepherd describing the work.

Here’s a link to and a citation for the paper,

Poroelastic Foams for Simple Fabrication of Complex Soft Robots by Benjamin C. Mac Murray, Xintong An, Sanlin S. Robinson, Ilse M. van Meerbeek, Kevin W. O’Brien, Huichan Zhao, and Robert F. Shepherd. Advanced Materials DOI: 10.1002/adma.201503464 Article first published online: 19 SEP 2015

© 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim

This paper is behind a paywall.

$81M for US National Nanotechnology Coordinated Infrastructure (NNCI)

Academics, small business, and industry researchers are the big winners in a US National Science Foundation bonanza according to a Sept. 16, 2015 news item on Nanowerk,

To advance research in nanoscale science, engineering and technology, the National Science Foundation (NSF) will provide a total of $81 million over five years to support 16 sites and a coordinating office as part of a new National Nanotechnology Coordinated Infrastructure (NNCI).

The NNCI sites will provide researchers from academia, government, and companies large and small with access to university user facilities with leading-edge fabrication and characterization tools, instrumentation, and expertise within all disciplines of nanoscale science, engineering and technology.

A Sept. 16, 2015 NSF news release provides a brief history of US nanotechnology infrastructures and describes this latest effort in slightly more detail (Note: Links have been removed),

The NNCI framework builds on the National Nanotechnology Infrastructure Network (NNIN), which enabled major discoveries, innovations, and contributions to education and commerce for more than 10 years.

“NSF’s long-standing investments in nanotechnology infrastructure have helped the research community to make great progress by making research facilities available,” said Pramod Khargonekar, assistant director for engineering. “NNCI will serve as a nationwide backbone for nanoscale research, which will lead to continuing innovations and economic and societal benefits.”

The awards are up to five years and range from $500,000 to $1.6 million each per year. Nine of the sites have at least one regional partner institution. These 16 sites are located in 15 states and involve 27 universities across the nation.

Through a fiscal year 2016 competition, one of the newly awarded sites will be chosen to coordinate the facilities. This coordinating office will enhance the sites’ impact as a national nanotechnology infrastructure and establish a web portal to link the individual facilities’ websites to provide a unified entry point to the user community of overall capabilities, tools and instrumentation. The office will also help to coordinate and disseminate best practices for national-level education and outreach programs across sites.

New NNCI awards:

Mid-Atlantic Nanotechnology Hub for Research, Education and Innovation, University of Pennsylvania with partner Community College of Philadelphia, principal investigator (PI): Mark Allen

Texas Nanofabrication Facility, University of Texas at Austin, PI: Sanjay Banerjee

Northwest Nanotechnology Infrastructure, University of Washington with partner Oregon State University, PI: Karl Bohringer

Southeastern Nanotechnology Infrastructure Corridor, Georgia Institute of Technology with partners North Carolina A&T State University and University of North Carolina-Greensboro, PI: Oliver Brand

Midwest Nano Infrastructure Corridor, University of Minnesota Twin Cities with partner North Dakota State University, PI: Stephen Campbell

Montana Nanotechnology Facility, Montana State University with partner Carleton College, PI: David Dickensheets

Soft and Hybrid Nanotechnology Experimental Resource, Northwestern University with partner University of Chicago, PI: Vinayak Dravid

The Virginia Tech National Center for Earth and Environmental Nanotechnology Infrastructure, Virginia Polytechnic Institute and State University, PI: Michael Hochella

North Carolina Research Triangle Nanotechnology Network, North Carolina State University with partners Duke University and University of North Carolina-Chapel Hill, PI: Jacob Jones

San Diego Nanotechnology Infrastructure, University of California, San Diego, PI: Yu-Hwa Lo

Stanford Site, Stanford University, PI: Kathryn Moler

Cornell Nanoscale Science and Technology Facility, Cornell University, PI: Daniel Ralph

Nebraska Nanoscale Facility, University of Nebraska-Lincoln, PI: David Sellmyer

Nanotechnology Collaborative Infrastructure Southwest, Arizona State University with partners Maricopa County Community College District and Science Foundation Arizona, PI: Trevor Thornton

The Kentucky Multi-scale Manufacturing and Nano Integration Node, University of Louisville with partner University of Kentucky, PI: Kevin Walsh

The Center for Nanoscale Systems at Harvard University, Harvard University, PI: Robert Westervelt

The universities are trumpeting this latest nanotechnology funding,

NSF-funded network set to help businesses, educators pursue nanotechnology innovation (North Carolina State University, Duke University, and University of North Carolina at Chapel Hill)

Nanotech expertise earns Virginia Tech a spot in National Science Foundation network

ASU [Arizona State University] chosen to lead national nanotechnology site

UChicago, Northwestern awarded $5 million nanotechnology infrastructure grant

That is a lot of excitement.

A pragmatic approach to alternatives to animal testing

Retitled and cross-posted from the June 30, 2015 posting (Testing times: the future of animal alternatives) on the International Innovation blog (a CORDIS-listed project dissemination partner for FP7 and H2020 projects).

Maryse de la Giroday explains how emerging innovations can provide much-needed alternatives to animal testing. She also shares highlights of the 9th World Congress on Alternatives to Animal Testing.

‘Guinea pigging’ is the practice of testing drugs that have passed in vitro and in vivo tests on healthy humans in a Phase I clinical trial. In fact, healthy humans can make quite a bit of money as guinea pigs. The practice is sufficiently well-entrenched that there is a magazine, Guinea Pig Zero, devoted to professionals. While most participants anticipate some unpleasant side effects, guinea pigging can sometimes be a dangerous ‘profession’.

HARMFUL TO HEALTH

One infamous incident highlighting the dangers of guinea pigging occurred in 2006 at Northwick Park Hospital outside London. Volunteers were offered £2,000 to participate in a Phase I clinical trial to test a prospective treatment – a monoclonal antibody designed for rheumatoid arthritis and multiple sclerosis. The drug, called TGN1412, caused catastrophic systemic organ failure in participants. All six individuals receiving the drug required hospital treatment. One participant reportedly underwent amputation of fingers and toes. Another reacted with symptoms comparable to John Merrick, the Elephant Man.

The root of the disaster lay in subtle immune system differences between humans and cynomolgus monkeys – the model animal tested prior to the clinical trial. The drug was designed for the CD28 receptor on T cells. The monkeys’ receptors closely resemble those found in humans. However, unlike these monkeys, humans have other immune cells that carry CD28. The trial participants received a starting dosage that was 0.2 per cent of what the monkeys received in their final tests, but failure to take these additional receptors into account meant a dosage that was supposed to occupy 10 per cent of the available CD28 receptors instead occupied 90 per cent. After the event, a Russian inventor purchased the commercial rights to the drug and renamed it TAB08. It has been further developed by Russian company, TheraMAB, and TAB08 is reportedly in Phase II clinical trials.
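One way to see how a dose meant to occupy 10 per cent of receptors can end up occupying 90 per cent is a simple one-site binding model, occupancy = dose / (dose + Kd), which responds steeply and nonlinearly when the effective affinity (Kd) in humans differs from what the animal data suggested. This is a minimal sketch with purely illustrative numbers, not the trial’s actual pharmacology.

```python
# Minimal one-site occupancy sketch (occupancy = dose / (dose + Kd)).
# All numbers are illustrative assumptions, not data from the TGN1412 trial.

def occupancy(dose: float, kd: float) -> float:
    return dose / (dose + kd)

human_dose = 0.2    # arbitrary units (0.2% of a 100-unit animal dose)
kd_assumed = 1.8    # affinity inferred from the monkey data -> ~10% occupancy
kd_actual = 0.022   # hypothetical true effective affinity in humans -> ~90%

print(f"expected occupancy: {occupancy(human_dose, kd_assumed):.0%}")  # ~10%
print(f"actual occupancy:   {occupancy(human_dose, kd_actual):.0%}")   # ~90%
```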

HUMAN-ON-A-CHIP AND ORGANOID PROJECTS

While animal testing has been a powerful and useful tool for determining safe usage for pharmaceuticals and other types of chemicals, it is also a cruel and imperfect practice. Moreover, it typically only predicts 30-60 per cent of human responses to new drugs. Nanotechnology and other emerging innovations present possibilities for reducing, and in some cases eliminating, the use of animal models.

People for the Ethical Treatment of Animals (PETA), still better known for its publicity stunts, maintains a webpage outlining a number of alternatives including in silico testing (computer modelling), and, perhaps most interestingly, human-on-a-chip and organoid (tissue engineering) projects.

Organ-on-a-chip projects use stem cells to create human tissues that replicate the functions of human organs. Discussions about human-on-a-chip activities – a phrase used to describe 10 interlinked organ chips – were a highlight of the 9th World Congress on Alternatives to Animal Testing held in Prague, Czech Republic, last year. One project highlighted at the event was a joint US National Institutes of Health (NIH), US Food and Drug Administration (FDA) and US Defense Advanced Research Projects Agency (DARPA) project led by Dan Tagle that claimed it would develop a functioning human-on-a-chip by 2017. However, he and his team were surprisingly close-mouthed and provided few details, making it difficult to assess how close they are to achieving their goal.

By contrast, Uwe Marx – Leader of the ‘Multi-Organ-Chip’ programme in the Institute of Biotechnology at the Technical University of Berlin and Scientific Founder of TissUse, a human-on-a-chip start-up company – claims to have sold two-organ chips. He also claims to have successfully developed a four-organ chip and that he is on his way to building a human-on-a-chip. Though these chips remain to be seen, if they are as described, they will integrate microfluidics, cultured cells and materials patterned at the nanoscale to mimic various organs, and will allow chemical testing in an environment that somewhat mirrors a human.

Another interesting alternative to animal testing is the organoid – a feature in regenerative medicine that can function as a test site. Engineers based at Cornell University recently published a paper on their functional, synthetic immune organ. Inspired by the lymph node, the organoid is composed of gelatin-based biomaterials, which are reinforced with silicate nanoparticles (to keep the tissue from melting when reaching body temperature) and seeded with cells, allowing it to mimic the anatomical microenvironment of a lymphatic node. It behaves like its inspiration, converting B cells to germinal centres which activate, mature and mutate antibody genes when the body is under attack. The engineers claim to be able to control the immune response and to outperform 2D cultures with their 3D organoid. If the results are reproducible, the organoid could be used to develop new therapeutics.

Maryse de la Giroday is a science communications consultant and writer.

Full disclosure: Maryse de la Giroday received transportation and accommodation for the 9th World Congress on Alternatives to Animal Testing from SEURAT-1, a European Union project, making scientific inquiries to facilitate the transition to animal testing alternatives, where possible.

ETA July 1, 2015: I would like to acknowledge more sources for the information in this article,

Sources:

The guinea pigging term, the ‘professional’ aspect, the Northwick Park story, and the Guinea Pig Zero magazine can be found in Carl Elliott’s excellent 2008 story titled ‘Guinea-Pigging’ for New Yorker magazine.

http://www.newyorker.com/magazine/2008/01/07/guinea-pigging

Information about the drug used in the Northwick Park Hospital disaster, the sale of the rights to a Russian inventor, and the June 2015 date for the current Phase II clinical trials were found in this Wikipedia essay titled, TGN 1412.

http://en.wikipedia.org/wiki/TGN1412

Additional information about the renamed drug, TAB08, and its Phase II clinical trials was found in (a) a US government website for information on clinical trials, (b) a Dec. 2014 (?) TheraMAB advertisement in a Nature group magazine, and (c) a Jan. 2014 press release,

https://www.clinicaltrials.gov/ct2/show/NCT01990157?term=TAB08_RA01&rank=1

http://www.theramab.ru/TheraMAB_NAture.pdf

http://theramab.ru/en/news/phase_II

An April 2015 article (Experimental drug that injured UK volunteers resumes in human trials) by Owen Dyer for the British Medical Journal also mentioned the 2015 TheraMAB Phase II clinical trials and provided information about the Macaque (cynomolgus) monkey tests.

http://www.bmj.com.proxy.lib.sfu.ca/content/350/bmj.h1831

BMJ 2015; 350 doi: http://dx.doi.org.proxy.lib.sfu.ca/10.1136/bmj.h1831 (Published 02 April 2015) Cite this as: BMJ 2015;350:h1831

A 2009 study by Christopher Horvath and Mark Milton somewhat contradicts the Dyer article’s contention that a Macaque (cynomolgus) monkey species was used as the animal model. (As the Dyer article is more recent and the Horvath/Milton analysis is more complex, covering TGN 1412 in the context of other MAB drugs and their precursor tests along with specific TGN 1412 tests, I opted for the simpler description.)

The TeGenero Incident [another name for the Northwick Park Accident] and the Duff Report Conclusions: A Series of Unfortunate Events or an Avoidable Event? by Christopher J. Horvath and Mark N. Milton. Published online before print February 24, 2009, doi: 10.1177/0192623309332986 Toxicol Pathol April 2009 vol. 37 no. 3 372-383

http://tpx.sagepub.com/content/37/3/372.full

Philippa Roxby’s May 24, 2013 BBC news online article provided confirmation and an additional detail or two about the Northwick Park Hospital accident. It notes that other models, in addition to animal models, are being developed.

http://www.bbc.com/news/health-22556736

Anne Ju’s excellent June 10, 2015 news release about the Cornell University organoid (synthetic immune organ) project was very helpful.

http://www.news.cornell.edu/stories/2015/06/engineers-synthetic-immune-organ-produces-antibodies

There will also be a magazine article in International Innovation, which will differ somewhat from the blog posting, due to editorial style and other requirements.

ETA July 22, 2015: I now have a link to the magazine article.

Cornell University’s (US) immune organoid

A synthetic immune organ that produces antibodies has been developed at Cornell University. From a June 11, 2015 news item on Azonano,

Cornell engineers have created a functional, synthetic immune organ that produces antibodies and can be controlled in the lab, completely separate from a living organism. The engineered organ has implications for everything from rapid production of immune therapies to new frontiers in cancer or infectious disease research.

The immune organoid was created in the lab of Ankur Singh, assistant professor of mechanical and aerospace engineering, who applies engineering principles to the study and manipulation of the human immune system. …

A June 10, 2015 Cornell University news release (also on EurekAlert) by Anne Ju, which originated the news item, describes how the organ/organoid functions,

The synthetic organ is bio-inspired by secondary immune organs like the lymph node or spleen. It is made from gelatin-based biomaterials reinforced with nanoparticles and seeded with cells, and it mimics the anatomical microenvironment of lymphoid tissue. Like a real organ, the organoid converts B cells – which make antibodies that respond to infectious invaders – into germinal centers, which are clusters of B cells that activate, mature and mutate their antibody genes when the body is under attack. Germinal centers are a sign of infection and are not present in healthy immune organs.

The engineers have demonstrated how they can control this immune response in the organ and tune how quickly the B cells proliferate, get activated and change their antibody types. According to their paper, their 3-D organ outperforms existing 2-D cultures and can produce activated B cells up to 100 times faster.

The immune organ, made of a hydrogel, is a soft, nanocomposite biomaterial. The engineers reinforced the material with silicate nanoparticles to keep the structure from melting at the physiologically relevant temperature of 98.6 degrees Fahrenheit.

The organ could lead to increased understanding of B cell functions, an area of study that typically relies on animal models to observe how the cells develop and mature.

What’s more, Singh said, the organ could be used to study specific infections and how the body produces antibodies to fight those infections – from Ebola to HIV.

“You can use our system to force the production of immunotherapeutics at much faster rates,” he said. Such a system also could be used to test toxic chemicals and environmental factors that contribute to infections or organ malfunctions.

The process of B cells becoming germinal centers is not well understood, and in fact, when the body makes mistakes in the genetic rearrangement related to this process, blood cancer can result.

“In the long run, we anticipate that the ability to drive immune reaction ex vivo at controllable rates grants us the ability to reproduce immunological events with tunable parameters for better mechanistic understanding of B cell development and generation of B cell tumors, as well as screening and translation of new classes of drugs,” Singh said.

The researchers have provided an image of their work,

When exposed to a foreign agent, such as an immunogenic protein, B cells in lymphoid organs undergo germinal center reactions. The image on the left is an immunized mouse spleen with activated B cells (brown) that produce antibodies. At right, top: a scanning electron micrograph of porous synthetic immune organs that enable rapid proliferation and activation of B cells into antibody-producing cells. At right, bottom: primary B cell viability and distribution is visible 24 hours following encapsulation procedure. Courtesy: Cornell University

Here’s a link to and a citation for the paper,

Ex vivo Engineered Immune Organoids for Controlled Germinal Center Reactions by Alberto Purwada, Manish K. Jaiswal, Haelee Ahn, Takuya Nojima, Daisuke Kitamura, Akhilesh K. Gaharwar, Leandro Cerchietti, & Ankur Singh. Biomaterials DOI: 10.1016/j.biomaterials.2015.06.002 Available online 3 June 2015

This paper is behind a paywall.