
Evolution of literature as seen by a classicist, a biologist and a computer scientist

Studying intertextuality shows how books are related in various ways and are reorganized and recombined over time. Image courtesy of Elena Poiata.

I find the image more instructive when I read it from the bottom up. For those who prefer to read from the top down, there’s this April 5, 2017 University of Texas at Austin news release (also on EurekAlert),

A classicist, biologist and computer scientist all walk into a room — what comes next isn’t the punchline but a new method to analyze relationships among ancient Latin and Greek texts, developed in part by researchers from The University of Texas at Austin.

Their work, referred to as quantitative criticism, is highlighted in a study published in the Proceedings of the National Academy of Sciences. The paper identifies subtle literary patterns in order to map relationships between texts and more broadly to trace the cultural evolution of literature.

“As scholars of the humanities well know, literature is a system within which texts bear a multitude of relationships to one another. Understanding what is distinctive about one text entails knowing how it fits within that system,” said Pramit Chaudhuri, associate professor in the Department of Classics at UT Austin. “Our work seeks to harness the power of quantification and computation to describe those relationships at macro and micro levels not easily achieved by conventional reading alone.”

In the study, the researchers create literary profiles based on stylometric features, such as word usage, punctuation and sentence structure, and use techniques from machine learning to understand these complex datasets. Taking a computational approach enables the discovery of small but important characteristics that distinguish one work from another — a process that could require years using manual counting methods.

“One aspect of the technical novelty of our work lies in the unusual types of literary features studied,” Chaudhuri said. “Much computational text analysis focuses on words, but there are many other important hallmarks of style, such as sound, rhythm and syntax.”

Another component of their work builds on Matthew Jockers’ literary “macroanalysis,” which uses machine learning to identify stylistic signatures of particular genres within a large body of English literature. Implementing related approaches, Chaudhuri and his colleagues have begun to trace the evolution of Latin prose style, providing new, quantitative evidence for the sweeping impact of writers such as Caesar and Livy on the subsequent development of Roman prose literature.

“There is a growing appreciation that culture evolves and that language can be studied as a cultural artifact, but there has been less research focused specifically on the cultural evolution of literature,” said the study’s lead author Joseph Dexter, a Ph.D. candidate in systems biology at Harvard University. “Working in the area of classics offers two advantages: the literary tradition is a long and influential one well served by digital resources, and classical scholarship maintains a strong interest in close linguistic study of literature.”

Unusually for a publication in a science journal, the paper contains several examples of the types of more speculative literary reading enabled by the quantitative methods introduced. The authors discuss the poetic use of rhyming sounds for emphasis and of particular vocabulary to evoke mood, among other literary features.

“Computation has long been employed for attribution and dating of literary works, problems that are unambiguous in scope and invite binary or numerical answers,” Dexter said. “The recent explosion of interest in the digital humanities, however, has led to the key insight that similar computational methods can be repurposed to address questions of literary significance and style, which are often more ambiguous and open ended. For our group, this humanist work of criticism is just as important as quantitative methods and data.”

The paper is the work of the Quantitative Criticism Lab (www.qcrit.org), co-directed by Chaudhuri and Dexter in collaboration with researchers from several other institutions. It is funded in part by a 2016 National Endowment for the Humanities grant and the Andrew W. Mellon Foundation New Directions Fellowship, awarded in 2016 to Chaudhuri to further his education in statistics and biology. Chaudhuri was one of 12 scholars selected for the award, which provides humanities researchers the opportunity to train outside of their own area of special interest with a larger goal of bridging the humanities and social sciences.
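For readers curious what a stylometric profile might look like in practice, here’s a minimal toy sketch of the general idea (my own illustration, not the Quantitative Criticism Lab’s code; the Latin snippets, the function-word list, and the nearest-profile comparison are all invented for demonstration):

```python
# Toy stylometric profiling: a profile here is just average sentence length
# plus the relative frequency of a few function words. Real pipelines use
# many more features (sound, rhythm, syntax) and proper ML classifiers.
import re
from collections import Counter

FUNCTION_WORDS = ["et", "in", "non", "ad", "cum"]  # a few common Latin function words

def profile(text):
    """Build a small stylometric feature vector for one text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter(words)
    avg_len = len(words) / len(sentences)  # average sentence length in words
    return [avg_len] + [counts[w] / len(words) for w in FUNCTION_WORDS]

def distance(p, q):
    """Euclidean distance between two feature vectors."""
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

# Invented snippets standing in for two authors with different styles:
# short clipped sentences vs. one long periodic sentence.
caesar_like = "Gallia est omnis divisa in partes tres. Hi omnes lingua inter se differunt."
livy_like = ("Iam primum omnium satis constat Troia capta in ceteros saevitum esse Troianos, "
             "duobus tamen Aeneae Antenorique et vetusti iure hospitii et quia pacis "
             "reddendaeque Helenae semper auctores fuerant omne ius belli Achivos abstinuisse.")

# Attribute a new snippet to whichever stylistic profile it sits closer to.
unknown = "Caesar in Galliam contendit. Ibi castra posuit."
p_unknown = profile(unknown)
nearest = min([("Caesar-like", caesar_like), ("Livy-like", livy_like)],
              key=lambda item: distance(p_unknown, profile(item[1])))
print(nearest[0])  # → Caesar-like
```

Crude as it is, the sketch shows the basic move the paper makes at scale: turn each text into a vector of measurable stylistic features, then compare vectors rather than reading side by side.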

Here’s another link to the paper along with a citation,

Quantitative criticism of literary relationships by Joseph P. Dexter, Theodore Katz, Nilesh Tripuraneni, Tathagata Dasgupta, Ajay Kannan, James A. Brofos, Jorge A. Bonilla Lopez, Lea A. Schroeder, Adriana Casarez, Maxim Rabinovich, Ayelet Haimson Lushkov, and Pramit Chaudhuri. PNAS Published online before print April 3, 2017, doi: 10.1073/pnas.1611910114

This paper appears to be open access.

Formation of a time (temporal) crystal

It’s a crystal arranged in time according to a March 8, 2017 University of Texas at Austin news release (also on EurekAlert; Note: Links have been removed),

Salt, snowflakes and diamonds are all crystals, meaning their atoms are arranged in 3-D patterns that repeat. Today scientists are reporting in the journal Nature on the creation of a phase of matter, dubbed a time crystal, in which atoms move in a pattern that repeats in time rather than in space.

The atoms in a time crystal never settle down into what’s known as thermal equilibrium, a state in which they all have the same amount of heat. It’s one of the first examples of a broad new class of matter, called nonequilibrium phases, that have been predicted but until now have remained out of reach. Like explorers stepping onto an uncharted continent, physicists are eager to explore this exotic new realm.

“This opens the door to a whole new world of nonequilibrium phases,” says Andrew Potter, an assistant professor of physics at The University of Texas at Austin. “We’ve taken these theoretical ideas that we’ve been poking around for the last couple of years and actually built it in the laboratory. Hopefully, this is just the first example of these, with many more to come.”

Some of these nonequilibrium phases of matter may prove useful for storing or transferring information in quantum computers.

Potter is part of the team led by researchers at the University of Maryland who successfully created the first time crystal from ions, or electrically charged atoms, of the element ytterbium. By applying just the right electrical field, the researchers levitated 10 of these ions above a surface like a magician’s assistant. Next, they whacked the atoms with a laser pulse, causing them to flip head over heels. Then they hit them again and again in a regular rhythm. That set up a pattern of flips that repeated in time.

Crucially, Potter noted, the pattern of atom flips repeated only half as fast as the laser pulses. This would be like pounding on a bunch of piano keys twice a second and notes coming out only once a second. This weird quantum behavior was a signature that he and his colleagues predicted, and helped confirm that the result was indeed a time crystal.

The team also consists of researchers at the National Institute of Standards and Technology, the University of California, Berkeley and Harvard University, in addition to the University of Maryland and UT Austin.

Frank Wilczek, a Nobel Prize-winning physicist at the Massachusetts Institute of Technology, was teaching a class about crystals in 2012 when he wondered whether a phase of matter could be created such that its atoms move in a pattern that repeats in time, rather than just in space.

Potter and his colleague Norman Yao at UC Berkeley created a recipe for building such a time crystal and developed ways to confirm that, once you had built such a crystal, it was in fact the real deal. That theoretical work was announced publicly last August and then published in January in the journal Physical Review Letters.

A team led by Chris Monroe of the University of Maryland in College Park built a time crystal, and Potter and Yao helped confirm that it indeed had the properties they predicted. The team announced that breakthrough—constructing a working time crystal—last September and is publishing the full, peer-reviewed description today in Nature.

A team led by Mikhail Lukin at Harvard University created a second time crystal a month after the first team, in that case, from a diamond.
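To see why the half-speed response Potter describes is the telltale signature, here’s a cartoon of the idea (my own toy illustration, not a simulation of the actual interacting ytterbium ions):

```python
# Toy period-doubling cartoon: each laser pulse flips every (idealized,
# noise-free) spin, so the collective state repeats only once per TWO
# pulses -- the drive has period 1, the response has period 2. That
# subharmonic response is the hallmark of a discrete time crystal.

def drive(spins, n_pulses):
    """Apply n perfect pi-pulses; each pulse flips every spin."""
    history = [list(spins)]
    for _ in range(n_pulses):
        spins = [-s for s in spins]
        history.append(list(spins))
    return history

history = drive([+1] * 10, 4)          # 10 ions, all initially "up"
magnetization = [sum(h) for h in history]
print(magnetization)                   # [10, -10, 10, -10, 10]
```

The magnetization returns to its starting value only every second pulse, just like the piano keys being pounded twice a second with notes coming out once a second. (In the real experiment the remarkable part is that this half-frequency rhythm is rigid: it survives imperfect pulses and interactions, which no toy like this captures.)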

Here’s a link to and a citation for the paper,

Observation of a discrete time crystal by J. Zhang, P. W. Hess, A. Kyprianidis, P. Becker, A. Lee, J. Smith, G. Pagano, I.-D. Potirniche, A. C. Potter, A. Vishwanath, N. Y. Yao, & C. Monroe. Nature 543, 217–220 (09 March 2017) doi:10.1038/nature21413 Published online 08 March 2017

This paper is behind a paywall.

3D picture language for mathematics

There’s a new, 3D picture language for mathematics called ‘quon’ according to a March 3, 2017 news item on phys.org,

Galileo called mathematics the “language with which God wrote the universe.” He described a picture-language, and now that language has a new dimension.

The Harvard trio of Arthur Jaffe, the Landon T. Clay Professor of Mathematics and Theoretical Science, postdoctoral fellow Zhengwei Liu, and researcher Alex Wozniakowski has developed a 3-D picture-language for mathematics with potential as a tool across a range of topics, from pure math to physics.

Though not the first pictorial language of mathematics, the new one, called quon, holds promise for being able to transmit not only complex concepts, but also vast amounts of detail in relatively simple images. …

A March 2, 2017 Harvard University news release by Peter Reuell, which originated the news item, provides more context for the research,

“It’s a big deal,” said Jacob Biamonte of the Quantum Complexity Science Initiative after reading the research. “The paper will set a new foundation for a vast topic.”

“This paper is the result of work we’ve been doing for the past year and a half, and we regard this as the start of something new and exciting,” Jaffe said. “It seems to be the tip of an iceberg. We invented our language to solve a problem in quantum information, but we have already found that this language led us to the discovery of new mathematical results in other areas of mathematics. We expect that it will also have interesting applications in physics.”

When it comes to the “language” of mathematics, humans start with the basics — by learning their numbers. As we get older, however, things become more complex.

“We learn to use algebra, and we use letters to represent variables or other values that might be altered,” Liu said. “Now, when we look at research work, we see fewer numbers and more letters and formulas. One of our aims is to replace ‘symbol proof’ by ‘picture proof.’”

The new language relies on images to convey the same information that is found in traditional algebraic equations — and in some cases, even more.

“An image can contain information that is very hard to describe algebraically,” Liu said. “It is very easy to transmit meaning through an image, and easy for people to understand what they see in an image, so we visualize these concepts and instead of words or letters can communicate via pictures.”

“So this pictorial language for mathematics can give you insights and a way of thinking that you don’t see in the usual, algebraic way of approaching mathematics,” Jaffe said. “For centuries there has been a great deal of interaction between mathematics and physics because people were thinking about the same things, but from different points of view. When we put the two subjects together, we found many new insights, and this new language can take that into another dimension.”

In their most recent work, the researchers moved their language into a more literal realm, creating 3-D images that, when manipulated, can trigger mathematical insights.

“Where before we had been working in two dimensions, we now see that it’s valuable to have a language that’s Lego-like, and in three dimensions,” Jaffe said. “By pushing these pictures around, or working with them like an object you can deform, the images can have different mathematical meanings, and in that way we can create equations.”

Among their pictorial feats, Jaffe said, are the complex equations used to describe quantum teleportation. The researchers have pictures for the Pauli matrices, which are fundamental components of quantum information protocols. This shows that the standard protocols are topological, and also leads to discovery of new protocols.

“It turns out one picture is worth 1,000 symbols,” Jaffe said.

“We could describe this algebraically, and it might require an entire page of equations,” Liu added. “But we can do that in one picture, so it can capture a lot of information.”

Having found a fit with quantum information, the researchers are now exploring how their language might also be useful in a number of other subjects in mathematics and physics.

“We don’t want to make claims at this point,” Jaffe said, “but we believe and are thinking about quite a few other areas where this picture-language could be important.”

Sadly, there are no artistic images illustrating quon but this is from the paper,

An n-quon is represented by n hemispheres. We call the flat disc on the boundary of each hemisphere a boundary disc. Each hemisphere contains a neutral diagram with four boundary points on its boundary disk. The dotted box designates the internal structure that specifies the quon vector. For example, the 3-quon is represented as

Courtesy: PNAS and Harvard University

I gather the term ‘quon’ is meant to suggest quantum particles.

Here’s a link and a citation for the paper,

Quon 3D language for quantum information by Zhengwei Liu, Alex Wozniakowski, and Arthur M. Jaffe. Proceedings of the National Academy of Sciences Published online before print February 6, 2017, doi: 10.1073/pnas.1621345114 PNAS March 7, 2017 vol. 114 no. 10

This paper appears to be open access.

Portable nanofibre fabrication device (point-of-use manufacturing)

A portable nanofiber fabrication device is quite an achievement although it seems it’s not quite ready for prime time yet. From a March 1, 2017 news item on Nanowerk (Note: A link has been removed),

Harvard researchers have developed a lightweight, portable nanofiber fabrication device that could one day be used to dress wounds on a battlefield or dress shoppers in customizable fabrics. The research was published recently in Macromolecular Materials and Engineering (“Design and Fabrication of Fibrous Nanomaterials Using Pull Spinning”)

A schematic of the pull spinning apparatus with a side view illustration of a fiber being pulled from the polymer reservoir. The pull spinning system consists of a rotating bristle that dips and pulls a polymer jet in a spiral trajectory (Leila Deravi/Harvard University)

A March 1, 2017 Harvard University news release (also on EurekAlert) by Leah Burrows, which originated the news item, describes the current process for nanofiber fabrication and explains how this technique is an improvement,

There are many ways to make nanofibers. These versatile materials — whose target applications include everything from tissue engineering to bulletproof vests — have been made using centrifugal force, capillary force, electric field, stretching, blowing, melting, and evaporation.

Each of these fabrication methods has pros and cons. For example, Rotary Jet-Spinning (RJS) and Immersion Rotary Jet-Spinning (iRJS) are novel manufacturing techniques developed in the Disease Biophysics Group at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) and the Wyss Institute for Biologically Inspired Engineering. Both RJS and iRJS dissolve polymers and proteins in a liquid solution and use centrifugal force or precipitation to elongate and solidify polymer jets into nanoscale fibers. These methods are great for producing large amounts of a range of materials – including DNA, nylon, and even Kevlar – but until now they haven’t been particularly portable.

The Disease Biophysics Group recently announced the development of a hand-held device that can quickly produce nanofibers with precise control over fiber orientation. Regulating fiber alignment and deposition is crucial when building nanofiber scaffolds that mimic highly aligned tissue in the body or designing point-of-use garments that fit a specific shape.

“Our main goal for this research was to make a portable machine that you could use to achieve controllable deposition of nanofibers,” said Nina Sinatra, a graduate student in the Disease Biophysics Group and co-first author of the paper. “In order to develop this kind of point-and-shoot device, we needed a technique that could produce highly aligned fibers with a reasonably high throughput.”

The new fabrication method, called pull spinning, uses a high-speed rotating bristle that dips into a polymer or protein reservoir and pulls a droplet from solution into a jet. The fiber travels in a spiral trajectory and solidifies before detaching from the bristle and moving toward a collector. Unlike other processes, which involve multiple manufacturing variables, pull spinning requires only one processing parameter — solution viscosity — to regulate nanofiber diameter. Minimal process parameters translate to ease of use and flexibility at the bench and, one day, in the field.

Pull spinning works with a range of different polymers and proteins. The researchers demonstrated proof-of-concept applications using polycaprolactone and gelatin fibers to direct muscle tissue growth and function on bioscaffolds, and nylon and polyurethane fibers for point-of-wear apparel.

“This simple, proof-of-concept study demonstrates the utility of this system for point-of-use manufacturing,” said Kit Parker, the Tarr Family Professor of Bioengineering and Applied Physics and director of the Disease Biophysics Group. “Future applications for directed production of customizable nanotextiles could extend to spray-on sportswear that gradually heats or cools an athlete’s body, sterile bandages deposited directly onto a wound, and fabrics with locally varying mechanical properties.”

Here’s a link to and a citation for the paper,

Design and Fabrication of Fibrous Nanomaterials Using Pull Spinning by Leila F. Deravi, Nina R. Sinatra, Christophe O. Chantre, Alexander P. Nesmith, Hongyan Yuan, Sahm K. Deravi, Josue A. Goss, Luke A. MacQueen, Mohammad R. Badrossamy, Grant M. Gonzalez, Michael D. Phillips, and Kevin Kit Parker. Macromolecular Materials and Engineering DOI: 10.1002/mame.201600404 Version of Record online: 17 JAN 2017

© 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim

This paper is behind a paywall.

CRISPR patent decision: Harvard’s and MIT’s Broad Institute victorious—for now

I have written about the CRISPR patent tussle (Harvard & MIT’s [Massachusetts Institute of Technology] Broad Institute vs the University of California at Berkeley) previously in a Jan. 6, 2015 posting and in a more detailed May 14, 2015 posting. I also mentioned (in a Jan. 17, 2017 posting) CRISPR and its patent issues in the context of a posting about a Slate.com series on Frankenstein and the novel’s applicability to our own time. This patent fight is being bitterly fought as fortunes are at stake.

It seems a decision has been made regarding the CRISPR patent claims. From a Feb. 17, 2017 article by Charmaine Distor for The Science Times,

After an intense court battle, the US Patent and Trademark Office (USPTO) released its ruling on February 15 [2017]. The rights for the CRISPR-Cas9 gene editing technology were handed over to the Broad Institute of Harvard University and the Massachusetts Institute of Technology (MIT).

According to an article in Nature, the said court battle was between the Broad Institute and the University of California. The two institutions are fighting over the intellectual property rights for the CRISPR patent. The case between the two started when the patent was first awarded to the Broad Institute despite the University of California having applied first for the CRISPR patent.

Heidi Ledford’s Feb. 17, 2017 article for Nature provides more insight into the situation (Note: Links have been removed),

It [USPTO] ruled that the Broad Institute of Harvard and MIT in Cambridge could keep its patents on using CRISPR–Cas9 in eukaryotic cells. That was a blow to the University of California in Berkeley, which had filed its own patents and had hoped to have the Broad’s thrown out.

The fight goes back to 2012, when Jennifer Doudna at Berkeley, Emmanuelle Charpentier, then at the University of Vienna, and their colleagues outlined how CRISPR–Cas9 could be used to precisely cut isolated DNA1. In 2013, Feng Zhang at the Broad and his colleagues — and other teams — showed2 how it could be adapted to edit DNA in eukaryotic cells such as plants, livestock and humans.

Berkeley filed for a patent earlier, but the USPTO granted the Broad’s patents first — and this week upheld them. There are high stakes involved in the ruling. The holder of key patents could make millions of dollars from CRISPR–Cas9’s applications in industry: already, the technique has sped up genetic research, and scientists are using it to develop disease-resistant livestock and treatments for human diseases.

But the fight for patent rights to CRISPR technology is by no means over. Here are four reasons why.

1. Berkeley can appeal the ruling

2. European patents are still up for grabs

3. Other parties are also claiming patent rights on CRISPR–Cas9

4. CRISPR technology is moving beyond what the patents cover

As for Ledford’s 3rd point, there are an estimated 763 patent families (groups of related patents) claiming Cas9, leading to the distinct possibility that the Broad Institute will be fighting many patent claims in the future.

Once you’ve read Distor’s and Ledford’s articles, you may want to check out Adam Rogers’ and Eric Niiler’s Feb. 16, 2017 CRISPR patent article for Wired,

The fight over who owns the most promising technique for editing genes—cutting and pasting the stuff of life to cure disease and advance scientific knowledge—has been a rough one. A team on the West Coast, at UC Berkeley, filed patents on the method, Crispr-Cas9; a team on the East Coast, based at MIT and the Broad Institute, filed their own patents in 2014 after Berkeley’s, but got them granted first. The Berkeley group contended that this constituted “interference,” and that Berkeley deserved the patent.

At stake: millions, maybe billions of dollars in biotech money and licensing fees, the future of medicine, the future of bioscience. Not nothing. Who will benefit depends on who owns the patents.

On Wednesday [Feb. 15, 2017], the US Patent Trial and Appeal Board kind of, sort of, almost began to answer that question. Berkeley will get the patent for using the system called Crispr-Cas9 in any living cell, from bacteria to blue whales. Broad/MIT gets the patent in eukaryotic cells, which is to say, plants and animals.

It’s … confusing. “The patent that the Broad received is for the use of Crispr gene-editing technology in eukaryotic cells. The patent for the University of California is for all cells,” says Jennifer Doudna, the UC geneticist and co-founder of Caribou Biosciences who co-invented Crispr, on a conference call. Her metaphor: “They have a patent on green tennis balls; we have a patent for all tennis balls.”

Observers didn’t quite buy that topspin. If Caribou is playing tennis, it’s looking like Broad/MIT is Serena Williams.

“UC does not necessarily lose everything, but they’re no doubt spinning the story,” says Robert Cook-Deegan, an expert in genetic policy at Arizona State University’s School for the Future of Innovation in Society. “UC’s claims to eukaryotic uses of Crispr-Cas9 will not be granted in the form they sought. That’s a big deal, and UC was the big loser.”

UC officials said Wednesday [Feb. 15, 2017] that they are studying the 51-page decision and considering whether to appeal. That leaves members of the biotechnology sector wondering who they will have to pay to use Crispr as part of a business—and scientists hoping the outcome won’t somehow keep them from continuing their research.

….

Happy reading!

What is a multiregional brain-on-a-chip?

In response to having created a multiregional brain-on-a-chip, there’s an explanation from the team at Harvard University (which answers my question) in a Jan. 13, 2017 Harvard John A. Paulson School of Engineering and Applied Sciences news release (also on EurekAlert) by Leah Burrows,

Harvard University researchers have developed a multiregional brain-on-a-chip that models the connectivity between three distinct regions of the brain. The in vitro model was used to extensively characterize the differences between neurons from different regions of the brain and to mimic the system’s connectivity.

“The brain is so much more than individual neurons,” said Ben Maoz, co-first author of the paper and postdoctoral fellow in the Disease Biophysics Group in the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS). “It’s about the different types of cells and the connectivity between different regions of the brain. When modeling the brain, you need to be able to recapitulate that connectivity because there are many different diseases that attack those connections.”

“Roughly twenty-six percent of the US healthcare budget is spent on neurological and psychiatric disorders,” said Kit Parker, the Tarr Family Professor of Bioengineering and Applied Physics at SEAS and Core Faculty Member of the Wyss Institute for Biologically Inspired Engineering at Harvard University. “Tools to support the development of therapeutics to alleviate the suffering of these patients is not only the human thing to do, it is the best means of reducing this cost.”

Researchers from the Disease Biophysics Group at SEAS and the Wyss Institute modeled three regions of the brain most affected by schizophrenia — the amygdala, hippocampus and prefrontal cortex.

They began by characterizing the cell composition, protein expression, metabolism, and electrical activity of neurons from each region in vitro.

“It’s no surprise that neurons in distinct regions of the brain are different but it is surprising just how different they are,” said Stephanie Dauth, co-first author of the paper and former postdoctoral fellow in the Disease Biophysics Group. “We found that the cell-type ratio, the metabolism, the protein expression and the electrical activity all differ between regions in vitro. This shows that it does make a difference which brain region’s neurons you’re working with.”

Next, the team looked at how these neurons change when they’re communicating with one another. To do that, they cultured cells from each region independently and then let the cells establish connections via guided pathways embedded in the chip.

The researchers then measured cell composition and electrical activity again and found that the cells dramatically changed when they were in contact with neurons from different regions.

“When the cells are communicating with other regions, the cellular composition of the culture changes, the electrophysiology changes, all these inherent properties of the neurons change,” said Maoz. “This shows how important it is to implement different brain regions into in vitro models, especially when studying how neurological diseases impact connected regions of the brain.”

To demonstrate the chip’s efficacy in modeling disease, the team doped different regions of the brain with the drug Phencyclidine hydrochloride — commonly known as PCP — which simulates schizophrenia. The brain-on-a-chip allowed the researchers for the first time to look at both the drug’s impact on the individual regions as well as its downstream effect on the interconnected regions in vitro.

The brain-on-a-chip could be useful for studying any number of neurological and psychiatric diseases, including drug addiction, post traumatic stress disorder, and traumatic brain injury.

“To date, the Connectome project has not recognized all of the networks in the brain,” said Parker. “In our studies, we are showing that the extracellular matrix network is an important part of distinguishing different brain regions and that, subsequently, physiological and pathophysiological processes in these brain regions are unique. This advance will not only enable the development of therapeutics, but fundamental insights as to how we think, feel, and survive.”

Here’s an image from the researchers,

Caption: Image of the in vitro model showing three distinct regions of the brain connected by axons. Credit: Disease Biophysics Group/Harvard University

Here’s a link to and a citation for the paper,

Neurons derived from different brain regions are inherently different in vitro: A novel multiregional brain-on-a-chip by Stephanie Dauth, Ben M Maoz, Sean P Sheehy, Matthew A Hemphill, Tara Murty, Mary Kate Macedonia, Angie M Greer, Bogdan Budnik, Kevin Kit Parker. Journal of Neurophysiology Published 28 December 2016 Vol. no. [?], DOI: 10.1152/jn.00575.2016

This paper is behind a paywall and they haven’t included the vol. no. in the citation I’ve found.

Phagocytosis for a bioelectronic future

The process by which a cell engulfs matter is known as phagocytosis. One of the best known examples of failed phagocytosis is that of asbestos fibres in the lungs where lung cells have attempted to engulf a fibre that’s just too big and ends up piercing the cell. When enough of the cells are pierced, the person is diagnosed with mesothelioma.

This particular example of phagocytosis is a happier one according to a Dec. 16, 2016 article by Meghan Rosen for ScienceNews,

Human cells can snack on silicon.

Cells grown in the lab devour nano-sized wires of silicon through an engulfing process known as phagocytosis, scientists report December 16 in Science Advances.

Silicon-infused cells could merge electronics with biology, says John Zimmerman, a biophysicist now at Harvard University. “It’s still very early days,” he adds, but “the idea is to get traditional electronic devices working inside of cells.” Such hybrid devices could one day help control cellular behavior, or even replace electronics used for deep brain stimulation, he says.

Scientists have been trying to load electronic parts inside cells for years. One way is to zap holes in cells with electricity, which lets big stuff, like silicon nanowires linked to bulky materials, slip in. Zimmerman, then at the University of Chicago, and colleagues were looking for a simpler technique, something that would let tiny nanowires in easily and could potentially allow them to travel through a person’s bloodstream — like a drug.

A Dec. 22, 2016 University of Chicago news release by Matt Wood provides more detail,

“You can treat it as a non-genetic, synthetic biology platform,” said Bozhi Tian, PhD, assistant professor of chemistry and senior author of the new study. “Traditionally in biology we use genetic engineering and modify genetic parts. Now we can use silicon parts, and silicon can be internalized. You can target those silicon parts to specific parts of the cell and modulate that behavior with light.”

In the new study, Tian and his team show how cells consume or internalize the nanowires through phagocytosis, the same process they use to engulf and ingest nutrients and other particles in their environment. The nanowires are simply added to cell media, the liquid solution the cells live in, the same way you might administer a drug, and the cells take it from there. Eventually, the goal would be to inject them into the bloodstream or package them into a pill.

Once inside, the nanowires can interact directly with individual parts of the cell, organelles like the mitochondria, nucleus and cytoskeletal filaments. Researchers can then stimulate the nanowires with light to see how individual components of the cell respond, or even change the behavior of the cell. They can last up to two weeks inside the cell before biodegrading.

Seeing how individual parts of a cell respond to stimulation could give researchers insight into how medical treatments that use electrical stimulation work at a more detailed level. For instance, deep brain stimulation helps treat tremors from movement disorders like Parkinson’s disease by sending electrical signals to areas of the brain. Doctors know it works at the level of tissues and brain structures, but seeing how individual components of nerve cells react to these signals could help fine tune and improve the treatment.

The experiments in the study used umbilical vascular endothelial cells, which make up blood vessel linings in the umbilical cord. These cells readily took up the nanowires, but others, like cardiac muscle cells, did not. Knowing that some cells consume the wires and some don’t could also prove useful in experimental settings and give researchers more ways to target specific cell types.

Tian and his team manufacture the nanowires in their lab with a chemical vapor deposition system that grows the silicon structures to different specifications. They can adjust size, shape, and electrical properties as needed, or even add defects on purpose for testing. They can also make wires with porous surfaces that could deliver drugs or genetic material to the cells. The process gives them a variety of ways to manipulate the properties of the nanowires for research.

Here’s a link to and a citation for the paper,

Cellular uptake and dynamics of unlabeled freestanding silicon nanowires by John F. Zimmerman, Ramya Parameswaran, Graeme Murray, Yucai Wang, Michael Burke, and Bozhi Tian. Science Advances  16 Dec 2016: Vol. 2, no. 12, e1601039 DOI: 10.1126/sciadv.1601039

This paper appears to be open access.

Montreal Neuro creates a new paradigm for technology transfer?

It’s one heck of a Christmas present. Canadian businessman Larry Tanenbaum and his wife Judy have given the Montreal Neurological Institute (Montreal Neuro), which is affiliated with McGill University, a $20M donation. From a Dec. 16, 2016 McGill University news release,

The Prime Minister of Canada, Justin Trudeau, was present today at the Montreal Neurological Institute and Hospital (MNI) for the announcement of an important donation of $20 million by the Larry and Judy Tanenbaum family. This transformative gift will help to establish the Tanenbaum Open Science Institute, a bold initiative that will facilitate the sharing of neuroscience findings worldwide to accelerate the discovery of leading edge therapeutics to treat patients suffering from neurological diseases.

‟Today, we take an important step forward in opening up new horizons in neuroscience research and discovery,” said Mr. Larry Tanenbaum. ‟Our digital world provides for unprecedented opportunities to leverage advances in technology to the benefit of science.  That is what we are celebrating here today: the transformation of research, the removal of barriers, the breaking of silos and, most of all, the courage of researchers to put patients and progress ahead of all other considerations.”

Neuroscience has reached a new frontier, and advances in technology now allow scientists to better understand the brain and all its complexities in ways that were previously deemed impossible. The sharing of research findings amongst scientists is critical, not only due to the sheer scale of data involved, but also because diseases of the brain and the nervous system are amongst the most compelling unmet medical needs of our time.

Neurological diseases, mental illnesses, addictions, and brain and spinal cord injuries directly impact 1 in 3 Canadians, representing approximately 11 million people across the country.

“As internationally-recognized leaders in the field of brain research, we are uniquely placed to deliver on this ambitious initiative and reinforce our reputation as an institution that drives innovation, discovery and advanced patient care,” said Dr. Guy Rouleau, Director of the Montreal Neurological Institute and Hospital and Chair of McGill University’s Department of Neurology and Neurosurgery. “Part of the Tanenbaum family’s donation will be used to incentivize other Canadian researchers and institutions to adopt an Open Science model, thus strengthening the network of like-minded institutes working in this field.”

What they don’t mention in the news release is that they will not be pursuing any patents on their work (for five years, according to one of the people in the video, but I can’t find text to substantiate that time limit*; no time limits are noted elsewhere). For this detail and others, you have to listen to the video they’ve created,

The CBC (Canadian Broadcasting Corporation) news online Dec. 16, 2016 posting (with files from Sarah Leavitt and Justin Hayward) adds a few personal details about Tanenbaum,

“Our goal is simple: to accelerate brain research and discovery to relieve suffering,” said Tanenbaum.

Tanenbaum, a Canadian businessman and chairman of Maple Leaf Sports and Entertainment, said many of his loved ones suffered from neurological disorders.

“I lost my mother to Alzheimer’s, my father to a stroke, three dear friends to brain cancer, and a brilliant friend and scientist to clinical depression,” said Tanenbaum.

He hopes the institute will serve as the template for science research across the world, a thought that Trudeau echoed.

“This vision around open science, recognizing the role that Canada can and should play, the leadership that Canadians can have in this initiative is truly, truly exciting,” said Trudeau.

The Neurological Institute says the pharmaceutical industry is supportive of the open science concept because it will provide crucial base research that can later be used to develop drugs to fight an array of neurological conditions.

Jack Stilgoe in a Dec. 16, 2016 posting on the Guardian blogs explains what this donation could mean (Note: Links have been removed),

With the help of Tanenbaum’s gift of 20 million Canadian dollars (£12million) the ‘Neuro’, the Montreal Neurological Institute and Hospital, is setting up an experiment in experimentation, an Open Science Initiative with the express purpose of finding out the best way to realise the potential of scientific research.

Governments in science-rich countries are increasingly concerned that they do not appear to be reaping the economic returns they feel they deserve from investments in scientific research. Their favoured response has been to try to bridge what they see as a ‘valley of death’ between basic scientific research and industrial applications. This has meant more funding for ‘translational research’ and the flowering of technology transfer offices within universities.

… There are some success stories, particularly in the life sciences. Patents from the work of Richard Axel at Columbia University at one point brought the university almost $100 million per year. The University of Florida received more than $150 million for inventing Gatorade in the 1960s. The stakes are high in the current battle between Berkeley and MIT/Harvard over who owns the rights to the CRISPR/Cas9 system that has revolutionised genetic engineering and could be worth billions.

Policymakers imagine a world in which universities pay for themselves just as a pharmaceutical research lab does. However, for critics of technology transfer, such stories blind us to the reality of universities’ entrepreneurial abilities.

For most universities, evidence of their money-making prowess is, to put it charitably, mixed. A recent Bloomberg report shows how quickly university patent incomes plunge once we look beyond the megastars. In 2014, just 15 US universities earned 70% of all patent royalties. British science policy researchers Paul Nightingale and Alex Coad conclude that ‘Roughly 9/10 US universities lose money on their technology transfer offices… MIT makes more money from selling T-shirts than it does from licensing’. A report from the Brookings Institution concluded that the model of technology transfer ‘is unprofitable for most universities and sometimes even risks alienating the private sector’. In the UK, the situation is even worse. Businesses who have dealings with universities report that their technology transfer offices are often unrealistic in negotiations. In many cases, academics are, like a small child who refuses to let others play with a brand new football, unable to make the most of their gifts. And areas of science outside the life sciences are harder to patent than medicines, sports drinks and genetic engineering techniques. Trying too hard to force science towards the market may be, to use the phrase of science policy professor Keith Pavitt, like pushing a piece of string.

Science policy is slowly waking up to the realisation that the value of science may lie in people and places rather than papers and patents. It’s an idea that the Neuro, with the help of Tanenbaum’s gift, is going to test. By sharing data and giving away intellectual property, the initiative aims to attract new private partners to the institute and build Montreal as a hub for knowledge and innovation. The hypothesis is that this will be more lucrative than hoarding patents.

This experiment is not wishful thinking. It will be scientifically measured. It is the job of Richard Gold, a McGill University law professor, to see whether it works. He told me that his first task is ‘to figure out what to count… There’s going to be a gap between what we would like to measure and what we can measure’. However, he sees an open-mindedness among his colleagues that is unusual. Some are evangelists for open science; some are sceptics. But they share a curiosity about new approaches and a recognition of a problem in neuroscience: ‘We haven’t come up with a new drug for Parkinson’s in 30 years. We don’t even understand the biological basis for many of these diseases. So whatever we’re doing at the moment doesn’t work’. …

Montreal Neuro made news on the ‘open science’ front in January 2016 when it formally announced its research would be freely available and that researchers would not be pursuing patents (see my January 22, 2016 posting).

I recommend reading Stilgoe’s posting in its entirety and, for those who don’t know or have forgotten, Prime Minister Trudeau’s family has some experience with mental illness. His mother has been very open about her travails. This makes his presence at the announcement perhaps a bit more meaningful than the usual political presence at a major funding announcement.

*The five-year time limit is confirmed in a Feb. 17, 2017 McGill University news release (on EurekAlert) about their presentations at the AAAS (American Association for the Advancement of Science) 2017 annual meeting,

Jumpstarting Neurological Research through Open Science – MNI & McGill University

Friday, February 17, 2017, 1:30-2:30 PM/ Room 208

Neurological research is advancing too slowly according to Dr. Guy Rouleau, director of the Montreal Neurological Institute (MNI) of McGill University. To speed up discovery, MNI has become the first ever Open Science academic institution in the world. In a five-year experiment, MNI is opening its books and making itself transparent to an international group of social scientists, policymakers, industrial partners, and members of civil society. They hope, by doing so, to accelerate research and the discovery of new treatments for patients with neurological diseases, and to encourage other leading institutions around the world to consider a similar model. A team led by McGill Faculty of Law’s Professor Richard Gold will monitor and evaluate how well the MNI Open Science experiment works and provide the scientific and policy worlds with insight into 21st century university-industry partnerships. At this workshop, Rouleau and Gold will discuss the benefits and challenges of this open-science initiative.

Sustainable Nanotechnologies (SUN) project draws to a close in March 2017

Two Oct. 31, 2016 news items on Nanowerk signal the impending sunset date for the European Union’s Sustainable Nanotechnologies (SUN) project. The first Oct. 31, 2016 news item on Nanowerk describes the project’s latest achievements,

The results from the 3rd SUN annual meeting showed great advancement of the project. The meeting was held in Edinburgh, Scotland, UK on 4-5 October 2016 where the project partners presented the results obtained during the second reporting period of the project.

SUN is a three and a half year EU project, running from 2013 to 2017, with a budget of about €14 million. Its main goal is to evaluate the risks along the supply chain of engineered nanomaterials and incorporate the results into tools and guidelines for sustainable manufacturing.

The ultimate goal of the SUN Project is the development of an online software Decision Support System – SUNDS – aimed at estimating and managing occupational, consumer, environmental and public health risks from nanomaterials in real industrial products along their lifecycles. The SUNDS beta prototype was released in October 2015, and since then the main focus has been on refining the methodologies and testing them on selected case studies, i.e. nano-copper oxide based wood preserving paint and nano-sized colourants for plastic car parts: organic pigment and carbon black. Obtained results and open issues were discussed during the third annual meeting in order to collect feedback from the consortium that will inform, in the coming months, the implementation of the final version of the SUNDS software system, due by March 2017.

An Oct. 27, 2016 SUN project press release, which originated the news item, adds more information,

Significant interest has been paid to the results obtained in WP2 (Lifecycle Thinking), whose main objectives are to assess the environmental impacts arising from each life cycle stage of the SUN case studies (i.e. Nano-WC-Cobalt (Tungsten Carbide-cobalt) sintered ceramics, Nanocopper wood preservatives, Carbon Nano Tube (CNT) in plastics, Silicon Dioxide (SiO2) as food additive, Nano-Titanium Dioxide (TiO2) air filter system, Organic pigment in plastics and Nanosilver (Ag) in textiles), and to compare them to conventional products with similar uses and functionality, in order to develop and validate criteria and guiding principles for green nano-manufacturing. Specifically, the consortium partner COLOROBBIA CONSULTING S.r.l. expressed its willingness to exploit the results obtained from the life cycle assessment analysis related to nanoTiO2 in their industrial applications.

On 6th October [2016], the discussions about the SUNDS advancement continued during a Stakeholder Workshop, where representatives from industry, regulatory and insurance sectors shared their feedback on the use of the decision support system. The recommendations collected during the workshop will be used for further refinement and implemented in the final version of the software, which will be released by March 2017.

The second Oct. 31, 2016 news item on Nanowerk led me to this Oct. 27, 2016 SUN project press release about the activities in the upcoming final months,

The project has designed its final events to serve as an effective platform to communicate the main results achieved in its course within the Nanosafety community and bridge them to a wider audience addressing the emerging risks of Key Enabling Technologies (KETs).

The series of events includes New Tools and Approaches for Nanomaterial Safety Assessment: a joint conference organized by NANOSOLUTIONS, SUN, NanoMILE, GUIDEnano and eNanoMapper to be held on 7-9 February 2017 in Malaga, Spain; the SUN-caLIBRAte Stakeholders workshop to be held on 28 February – 1 March 2017 in Venice, Italy; and the SRA Policy Forum: Risk Governance for Key Enabling Technologies to be held on 1-3 March in Venice, Italy.

Jointly organized by the Society for Risk Analysis (SRA) and the SUN Project, the SRA Policy Forum will address current efforts put towards refining the risk governance of emerging technologies through the integration of traditional risk analytic tools alongside considerations of social and economic concerns. The parallel sessions will be organized in 4 tracks:  Risk analysis of engineered nanomaterials along product lifecycle, Risks and benefits of emerging technologies used in medical applications, Challenges of governing SynBio and Biotech, and Methods and tools for risk governance.

The SRA Policy Forum has announced its speakers and preliminary Programme. Confirmed speakers include:

  • Keld Alstrup Jensen (National Research Centre for the Working Environment, Denmark)
  • Elke Anklam (European Commission, Belgium)
  • Adam Arkin (University of California, Berkeley, USA)
  • Phil Demokritou (Harvard University, USA)
  • Gerard Escher (École polytechnique fédérale de Lausanne, Switzerland)
  • Lisa Friedersdorf (National Nanotechnology Initiative, USA)
  • James Lambert (President, Society for Risk Analysis, USA)
  • Andre Nel (The University of California, Los Angeles, USA)
  • Bernd Nowack (EMPA, Switzerland)
  • Ortwin Renn (University of Stuttgart, Germany)
  • Vicki Stone (Heriot-Watt University, UK)
  • Theo Vermeire (National Institute for Public Health and the Environment (RIVM), Netherlands)
  • Tom van Teunenbroek (Ministry of Infrastructure and Environment, The Netherlands)
  • Wendel Wohlleben (BASF, Germany)

The New Tools and Approaches for Nanomaterial Safety Assessment (NMSA) conference aims at presenting the main results achieved in the course of the organizing projects, fostering a discussion about their impact in the nanosafety field and possibilities for future research programmes. The conference welcomes consortium partners, as well as representatives from other EU projects, industry, government, civil society and media. Accordingly, the conference topics include: Hazard assessment along the life cycle of nano-enabled products, Exposure assessment along the life cycle of nano-enabled products, Risk assessment & management, Systems biology approaches in nanosafety, Categorization & grouping of nanomaterials, Nanosafety infrastructure, Safe by design. The NMSA conference keynote speakers include:

  • Harri Alenius (University of Helsinki, Finland,)
  • Antonio Marcomini (Ca’ Foscari University of Venice, Italy)
  • Wendel Wohlleben (BASF, Germany)
  • Danail Hristozov (Ca’ Foscari University of Venice, Italy)
  • Eva Valsami-Jones (University of Birmingham, UK)
  • Socorro Vázquez-Campos (LEITAT Technological Center, Spain)
  • Barry Hardy (Douglas Connect GmbH, Switzerland)
  • Egon Willighagen (Maastricht University, Netherlands)
  • Nina Jeliazkova (IDEAconsult Ltd., Bulgaria)
  • Haralambos Sarimveis (The National Technical University of Athens, Greece)

During the SUN-caLIBRAte Stakeholder workshop the final version of the SUN user-friendly, software-based Decision Support System (SUNDS) for managing the environmental, economic and social impacts of nanotechnologies will be presented and discussed with its end users: industries, regulators and insurance sector representatives. The results from the discussion will be used as a foundation of the development of the caLIBRAte’s Risk Governance framework for assessment and management of human and environmental risks of MN and MN-enabled products.

The SRA Policy Forum: Risk Governance for Key Enabling Technologies and the New Tools and Approaches for Nanomaterial Safety Assessment conference are now open for registration. Abstracts for the SRA Policy Forum can be submitted till 15th November 2016.
For further information go to:
www.sra.org/riskgovernanceforum2017
http://www.nmsaconference.eu/

There you have it.

Superconductivity with spin

Vivid lines of light tracing a pattern reminiscent of a spinning top toy Courtesy: Harvard University

An Oct. 14, 2016 Harvard University John A. Paulson School of Engineering and Applied Sciences (SEAS) press release (also on EurekAlert) by Leah Burrows describes how scientists have discovered a way to transmit spin information through superconductors,

Researchers from the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) have made a discovery that could lay the foundation for quantum superconducting devices. Their breakthrough solves one of the main challenges to quantum computing: how to transmit spin information through superconducting materials.

Every electronic device — from a supercomputer to a dishwasher — works by controlling the flow of charged electrons. But electrons can carry so much more information than just charge; electrons also spin, like a gyroscope on its axis.

Harnessing electron spin is really exciting for quantum information processing because not only can an electron spin up or down — one or zero — but it can also spin any direction between the two poles. Because it follows the rules of quantum mechanics, an electron can occupy all of those positions at once. Imagine the power of a computer that could calculate all of those positions simultaneously.
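
That “any direction between the two poles” picture can be made concrete. As a minimal sketch of my own (not from the press release), two angles on the Bloch sphere fix a normalized superposition of spin-up and spin-down, which is exactly the extra freedom a spin qubit has over a classical bit:

```python
import cmath
import math

def spin_state(theta, phi):
    """Return the (amplitude_up, amplitude_down) pair for a spin-1/2 state
    pointing along polar angle theta and azimuth phi on the Bloch sphere:
    |psi> = cos(theta/2)|up> + e^{i*phi} sin(theta/2)|down>."""
    return (cmath.rect(math.cos(theta / 2), 0),
            cmath.rect(math.sin(theta / 2), phi))

# "Up" (zero), "down" (one), and an equal superposition on the equator.
up = spin_state(0, 0)
down = spin_state(math.pi, 0)
equator = spin_state(math.pi / 2, 0)

# Every such state is normalized: |a_up|^2 + |a_down|^2 == 1,
# no matter which direction on the sphere it points.
for a_up, a_down in (up, down, equator):
    assert abs(abs(a_up) ** 2 + abs(a_down) ** 2 - 1) < 1e-12
```

The two poles reproduce the classical one/zero, while every other point on the sphere is a valid state as well, which is the continuum the press release alludes to.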

A whole field of applied physics, called spintronics, focuses on how to harness and measure electron spin and build spin equivalents of electronic gates and circuits.

By using superconducting materials through which electrons can move without any loss of energy, physicists hope to build quantum devices that would require significantly less power.

But there’s a problem.

According to a fundamental property of superconductivity, superconductors can’t transmit spin. Any electron pairs that pass through a superconductor will have the combined spin of zero.

In work published recently in Nature Physics, the Harvard researchers found a way to transmit spin information through superconducting materials.

“We now have a way to control the spin of the transmitted electrons in simple superconducting devices,” said Amir Yacoby, Professor of Physics and of Applied Physics at SEAS and senior author of the paper.

It’s easy to think of superconductors as particle super highways, but a better analogy would be a super carpool lane, as only paired electrons can move through a superconductor without resistance.

These pairs are called Cooper Pairs and they interact in a very particular way. If the way they move in relation to each other (physicists call this momentum) is symmetric, then the pair’s spin has to be asymmetric — for example, one negative and one positive for a combined spin of zero. When they travel through a conventional superconductor, Cooper Pairs’ momentum has to be zero and their orbit perfectly symmetrical.
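
The “combined spin of zero” claim can be checked directly on the spin part of a conventional Cooper pair, the singlet (|↑↓⟩ − |↓↑⟩)/√2. The following is my own illustration in plain Python (not code from the study): build the total-spin-squared operator S² = (S₁+S₂)² for two spin-1/2 particles and confirm the singlet is annihilated (total spin 0) while a triplet combination is not:

```python
# Spin-1/2 operators (hbar = 1): S = sigma / 2.
SX = [[0, 0.5], [0.5, 0]]
SY = [[0, -0.5j], [0.5j, 0]]
SZ = [[0.5, 0], [0, -0.5]]
I2 = [[1, 0], [0, 1]]

def kron(a, b):
    """Kronecker product of two small matrices stored as nested lists."""
    return [[a[i][j] * b[k][l] for j in range(len(a[0])) for l in range(len(b[0]))]
            for i in range(len(a)) for k in range(len(b))]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b))) for j in range(len(b[0]))]
            for i in range(len(a))]

def matadd(a, b):
    return [[a[i][j] + b[i][j] for j in range(len(a[0]))] for i in range(len(a))]

def apply(m, v):
    return [sum(m[i][j] * v[j] for j in range(len(v))) for i in range(len(m))]

# Total spin S = S1 + S2 on the two-electron basis |uu>, |ud>, |du>, |dd>.
S_total = [matadd(kron(s, I2), kron(I2, s)) for s in (SX, SY, SZ)]

# S^2 = Sx^2 + Sy^2 + Sz^2; eigenvalue is s(s+1).
S2 = [[0] * 4 for _ in range(4)]
for s in S_total:
    S2 = matadd(S2, matmul(s, s))

root2 = 2 ** 0.5
singlet = [0, 1 / root2, -1 / root2, 0]   # (|ud> - |du>)/sqrt(2)
triplet = [0, 1 / root2, 1 / root2, 0]    # (|ud> + |du>)/sqrt(2)

# Singlet: S^2 gives zero, i.e. combined spin 0, as in a conventional Cooper pair.
assert all(abs(x) < 1e-12 for x in apply(S2, singlet))
# Triplet: S^2 gives s(s+1) = 2, i.e. combined spin 1.
assert all(abs(x - 2 * t) < 1e-12 for x, t in zip(apply(S2, triplet), triplet))
```

This is the sense in which an ordinary superconductor “can’t transmit spin”: the spin part of each pair sums to zero, so there is no net spin to carry.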

But if you can change the momentum to asymmetric — leaning toward one direction — then the spin can be symmetric. To do that, you need the help of some exotic (aka weird) physics.

Superconducting materials can imbue non-superconducting materials with their conductive powers simply by being in close proximity. Using this principle, the researchers built a superconducting sandwich, with superconductors on the outside and mercury telluride in the middle. The atoms in mercury telluride are so heavy and the electrons move so quickly, that the rules of relativity start to apply.

“Because the atoms are so heavy, you have electrons that occupy high-speed orbits,” said Hechen Ren, coauthor of the study and graduate student at SEAS. “When an electron is moving this fast, its electric field turns into a magnetic field which then couples with the spin of the electron. This magnetic field acts on the spin and gives one spin a higher energy than another.”

So, when the Cooper Pairs hit this material, their spin begins to rotate.

“The Cooper Pairs jump into the mercury telluride and they see this strong spin orbit effect and start to couple differently,” said Ren. “The homogenous breed of zero momentum and zero combined spin is still there but now there is also a breed of pairs that gains momentum, breaking the symmetry of the orbit. The most important part of that is that the spin is now free to be something other than zero.”

The team could measure the spin at various points as the electron waves moved through the material. By using an external magnet, the researchers could tune the total spin of the pairs.
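
The effect of tuning a spin with a field can be sketched in closed form (again my own illustration, not the researchers’ code): for spin-1/2, a rotation about the x-axis is R = cos(θ/2)·I − i·sin(θ/2)·σₓ, and rotating a spin-up state by θ changes the measured z-spin as ½·cos θ:

```python
import math

def rotate_about_x(state, theta):
    """Rotate a spin-1/2 state (a_up, a_down) by angle theta about x,
    using the closed-form spin-1/2 rotation operator
    R = cos(theta/2) I - i sin(theta/2) sigma_x."""
    c, s = math.cos(theta / 2), math.sin(theta / 2)
    a_up, a_down = state
    return (c * a_up - 1j * s * a_down, -1j * s * a_up + c * a_down)

def sz_expectation(state):
    """Expectation value of Sz (hbar = 1) for a two-component spinor."""
    a_up, a_down = state
    return 0.5 * (abs(a_up) ** 2 - abs(a_down) ** 2)

# Rotating spin-up continuously tilts the measured z-spin: <Sz> = (1/2) cos(theta).
spin_up = (1, 0)
for theta in (0, math.pi / 2, math.pi):
    val = sz_expectation(rotate_about_x(spin_up, theta))
    assert abs(val - 0.5 * math.cos(theta)) < 1e-12
```

A dial of this kind, implemented physically by the external magnet and the spin-orbit field in the mercury telluride layer, is what lets the team steer the pairs’ total spin continuously rather than leaving it pinned at zero.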

“This discovery opens up new possibilities for storing quantum information. Using the underlying physics behind this discovery also provides new possibilities for exploring the underlying nature of superconductivity in novel quantum materials,” said Yacoby.

Here’s a link to and a citation for the paper,

Controlled finite momentum pairing and spatially varying order parameter in proximitized HgTe quantum wells by Sean Hart, Hechen Ren, Michael Kosowsky, Gilad Ben-Shach, Philipp Leubner, Christoph Brüne, Hartmut Buhmann, Laurens W. Molenkamp, Bertrand I. Halperin, & Amir Yacoby. Nature Physics (2016) doi:10.1038/nphys3877 Published online 19 September 2016

This paper is behind a paywall.