Tag Archives: Harvard University

‘Bionic’ cardiac patch with nanoelectronic scaffolds and living cells

A June 27, 2016 news item on Nanowerk announced that Harvard University researchers may have taken us a step closer to bionic cardiac patches for human hearts (Note: A link has been removed),

Scientists and doctors in recent decades have made vast leaps in the treatment of cardiac problems – particularly with the development in recent years of so-called “cardiac patches,” swaths of engineered heart tissue that can replace heart muscle damaged during a heart attack.

Thanks to the work of Charles Lieber and others, the next leap may be in sight.

Lieber, the Mark Hyman, Jr. Professor of Chemistry and Chair of the Department of Chemistry and Chemical Biology, postdoctoral fellow Xiaochuan Dai and other co-authors describe the construction of nanoscale electronic scaffolds that can be seeded with cardiac cells to produce a “bionic” cardiac patch. The study is described in a June 27 [2016] paper published in Nature Nanotechnology (“Three-dimensional mapping and regulation of action potential propagation in nanoelectronics-innervated tissues”).

A June 27, 2016 Harvard University press release on EurekAlert, which originated the news item, provides more information,

“I think one of the biggest impacts would ultimately be in the area that involves replacement of damaged cardiac tissue with pre-formed tissue patches,” Lieber said. “Rather than simply implanting an engineered patch built on a passive scaffold, our work suggests it will be possible to surgically implant an innervated patch that would now be able to monitor and subtly adjust its performance.”

Once implanted, Lieber said, the bionic patch could act similarly to a pacemaker – delivering electrical shocks to correct arrhythmia, but the possibilities don’t end there.

“In this study, we’ve shown we can change the frequency and direction of signal propagation,” he continued. “We believe it could be very important for controlling arrhythmia and other cardiac conditions.”

Unlike traditional pacemakers, Lieber said, the bionic patch – because its electronic components are integrated throughout the tissue – can detect arrhythmia far sooner, and operate at far lower voltages.

“Even before a person started to go into large-scale arrhythmia that frequently causes irreversible damage or other heart problems, this could detect the early-stage instabilities and intervene sooner,” he said. “It can also continuously monitor the feedback from the tissue and actively respond.”

“And a normal pacemaker, because it’s on the surface, has to use relatively high voltages,” Lieber added.

The patch might also find use, Lieber said, as a tool to monitor tissue responses to cardiac drugs, or to help pharmaceutical companies screen the effectiveness of drugs under development.

Likewise, the bionic cardiac patch could also serve as a unique platform, he added, for studying how tissue behavior evolves during processes such as aging, ischemia or the differentiation of stem cells into mature cardiac cells.

Although the bionic cardiac patch has not yet been implanted in animals, “we are interested in identifying collaborators already investigating cardiac patch implantation to treat myocardial infarction in a rodent model,” he said. “I don’t think it would be difficult to build this into a simpler, easily implantable system.”

In the long term, Lieber believes, the development of nanoscale tissue scaffolds represents a new paradigm for integrating biology with electronics in a virtually seamless way.

Using the injectable electronics technology he pioneered last year, Lieber even suggested that similar cardiac patches might one day simply be delivered by injection.

“It may actually be that, in the future, this won’t be done with a surgical patch,” he said. “We could simply do a co-injection of cells with the mesh, and it assembles itself inside the body, so it’s less invasive.”

Here’s a link to and a citation for the paper,

Three-dimensional mapping and regulation of action potential propagation in nanoelectronics-innervated tissues by Xiaochuan Dai, Wei Zhou, Teng Gao, Jia Liu & Charles M. Lieber. Nature Nanotechnology (2016)  doi:10.1038/nnano.2016.96 Published online 27 June 2016

This paper is behind a paywall.

Dexter Johnson in a June 27, 2016 posting on his Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website) provides more technical detail (Note: Links have been removed),

In research described in the journal Nature Nanotechnology, Lieber and his team employed a bottom-up approach that started with the fabrication of doped p-type silicon nanowires. Lieber has been spearheading the use of silicon nanowires as a scaffold for growing nerve, heart, and muscle tissue for years now.

In this latest work, Lieber and his team fabricated the nanowires, applied them onto a polymer surface, and arranged them into a field-effect transistor (FET). The researchers avoided an increase in the device’s impedance as its dimensions were reduced by adopting this FET approach as opposed to simply configuring the device as an electrode. Each FET, along with its source-drain interconnects, created a 4-micrometer-by-20-micrometer-by-350-nanometer pad. Each of these pads was, in effect, a single recording device.

I recommend reading Dexter’s posting in its entirety as Charles Lieber shares additional technical information not found in the news release.

Hologram with nanostructures could improve fraud protection

This research on holograms comes from Harvard University according to a May 13, 2016 news item on ScienceDaily,

Holograms are a ubiquitous part of our lives. They are in our wallets — protecting credit cards, cash and driver’s licenses from fraud — in grocery store scanners and biomedical devices.

Even though holographic technology has been around for decades, researchers still struggle to make compact holograms more efficient, complex and secure.

Researchers at the Harvard John A. Paulson School of Engineering and Applied Sciences have programmed polarization into compact holograms. These holograms use nanostructures that are sensitive to polarization (the direction in which light vibrates) to produce different images depending on the polarization of incident light. This advancement, which works across the spectrum of light, improves anti-fraud holograms as well as those used in entertainment displays.

A May 13, 2016 Harvard University press release (also on EurekAlert) by Leah Burrows, which originated the news item, provides more detail,

“The novelty in this research is that by using nanotechnology, we’ve made holograms that are highly efficient, meaning that very little light is lost to create the image,” said Federico Capasso, the Robert L. Wallace Professor of Applied Physics and Vinton Hayes Senior Research Fellow in Electrical Engineering and senior author of the paper. “By using incident polarized light, you can see a far crisper image and can store and retrieve more images. Polarization adds another dimension to holograms that can be used to protect against counterfeiting and in applications like displays.”

Harvard’s Office of Technology Development has filed patents on this and related technologies and is actively pursuing commercial opportunities.

Holograms, like digital photographs, capture a field of light around an object and encode it on a chip. However, photographs only record the intensity of light while holograms also capture the phase of light, which is why holograms appear three-dimensional.

“Our holograms work like any other but the image produced depends on the polarization state of the illuminating light, providing an extra degree of freedom in design for versatile applications,” said Mohammadreza Khorasaninejad, postdoctoral fellow in the Capasso Lab and first author of the paper.

There are several states of polarization. In linearly polarized light the direction of vibration remains constant while in circularly polarized light it rotates clockwise or counterclockwise. The direction of rotation is the chirality.

The team built silicon nanostructured patterns on a glass substrate, which act as superpixels. Each superpixel responds to a certain polarization state of the incident light. Even more information can be encoded in the hologram by designing and arranging the nanofins to respond differently to the chirality of the polarized incident light.
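For readers who want a feel for how an optical element can respond differently to chirality, here’s a rough sketch using standard Jones calculus (textbook optics math, not the Capasso group’s actual superpixel design): an analyzer matched to right-circular polarization passes right-circular light, blocks left-circular light, and half-passes linear light.

```python
import math

# Jones vectors for unit-intensity light (one common sign convention; conventions vary)
RCP = (1 / math.sqrt(2), -1j / math.sqrt(2))  # right-circularly polarized
LCP = (1 / math.sqrt(2), +1j / math.sqrt(2))  # left-circularly polarized
LIN_X = (1.0, 0.0)                            # linearly polarized, horizontal

def intensity_through_analyzer(analyzer, light):
    """Transmitted intensity = |<analyzer|light>|^2 for unit-normalized Jones vectors."""
    inner = sum(a.conjugate() * b for a, b in zip(analyzer, light))
    return abs(inner) ** 2

# A right-circular analyzer discriminates chirality:
print(intensity_through_analyzer(RCP, RCP))    # ~1.0: same chirality passes
print(intensity_through_analyzer(RCP, LCP))    # ~0.0: opposite chirality is blocked
print(intensity_through_analyzer(RCP, LIN_X))  # ~0.5: linear light is half-passed
```

A superpixel tuned to one chirality would display its encoded image only under that illumination, which is the behavior the anti-counterfeiting scheme exploits.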

“Being able to encode chirality can have important applications in information security such as anti-counterfeiting,” said Antonio Ambrosio, a research scientist in the Capasso Lab and co-first author. “For example, chiral holograms can be made to display a sequence of certain images only when illuminated with light of specific polarization not known to the forger.”

“By using different nanofin designs in the future, one could store and retrieve far more images by employing light with many states of polarization,” said Capasso.

Because this system is compact, it has application in portable projectors, 3D movies and wearable optics.

“Modern polarization imaging systems require cascading several optical components such as beam splitters, polarizers and wave plates,” said Ambrosio. “Our metasurface can distinguish between incident polarization using a single layer dielectric surface.”

“We have also incorporated in some of the holograms a lens function that has allowed us to produce images at large angles,” said Khorasaninejad. “This functionality combined with the small footprint and lightweight, has significant potential for wearable optics applications.”

Here’s a link to and a citation for the paper,

Broadband and chiral binary dielectric meta-holograms by Mohammadreza Khorasaninejad, Antonio Ambrosio, Pritpal Kanhaiya, and Federico Capasso. Science Advances  13 May 2016: Vol. 2, no. 5, e1501258 DOI: 10.1126/sciadv.1501258

This paper is open access.

Printing in midair

Dexter Johnson’s May 16, 2016 posting on his Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website) was my first introduction to something wonder-inducing (Note: Links have been removed),

While the growth of 3-D printing has led us to believe we can produce just about any structure with it, the truth is that it still falls somewhat short.

Researchers at Harvard University are looking to realize a more complete range of capabilities for 3-D printing in fabricating both planar and freestanding 3-D structures and do it relatively quickly and on low-cost plastic substrates.

In research published in the journal Proceedings of the National Academy of Sciences (PNAS),  the researchers extruded a silver-nanoparticle ink and annealed it with a laser so quickly that the system let them easily “write” free-standing 3-D structures.

While this may sound humdrum, what really takes one’s breath away with this technique is that it can create 3-D structures seemingly suspended in air without any signs of support as though they were drawn there with a pen.

Laser-assisted direct ink writing allowed this delicate 3D butterfly to be printed without any auxiliary support structure (Image courtesy of the Lewis Lab/Harvard University)


A May 16, 2016 Harvard University press release (also on EurekAlert) provides more detail about the work,

“Flat” and “rigid” are terms typically used to describe electronic devices. But the increasing demand for flexible, wearable electronics, sensors, antennas and biomedical devices has led a team at Harvard’s John A. Paulson School of Engineering and Applied Sciences (SEAS) and Wyss Institute for Biologically Inspired Engineering to innovate an eye-popping new way of printing complex metallic architectures – as though they are seemingly suspended in midair.

“I am truly excited by this latest advance from our lab, which allows one to 3D print and anneal flexible metal electrodes and complex architectures ‘on-the-fly,’ ” said Lewis [Jennifer Lewis, the Hansjörg Wyss Professor of Biologically Inspired Engineering at SEAS and Wyss Core Faculty member].

Lewis’ team used an ink composed of silver nanoparticles, sending it through a printing nozzle and then annealing it using a precisely programmed laser that applies just the right amount of energy to drive the ink’s solidification. The printing nozzle moves along x, y, and z axes and is combined with a rotary print stage to enable freeform curvature. In this way, tiny hemispherical shapes, spiral motifs, even a butterfly made of silver wires less than the width of a hair can be printed in free space within seconds. The printed wires exhibit excellent electrical conductivity, almost matching that of bulk silver.
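To make the “spiral motifs” concrete: a freeform printer like this needs a stream of x, y, z nozzle coordinates. Here’s a toy toolpath generator for a helix (dimensions and sampling are illustrative only, not the Lewis lab’s actual control code):

```python
import math

# Sample a helix (a simple "spiral motif") as x, y, z nozzle coordinates.
def helix_path(radius_mm, pitch_mm, turns, points_per_turn=100):
    path = []
    for i in range(int(turns * points_per_turn) + 1):
        theta = 2 * math.pi * i / points_per_turn
        path.append((radius_mm * math.cos(theta),   # x
                     radius_mm * math.sin(theta),   # y
                     pitch_mm * theta / (2 * math.pi)))  # z rises one pitch per turn
    return path

path = helix_path(radius_mm=0.5, pitch_mm=0.2, turns=3)
print(len(path), path[-1][2])  # 301 points; final height ~0.6 mm
```

In the actual system, each such coordinate step would be coordinated with extrusion rate and the trailing laser spot; the path itself is just parametric geometry.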

When compared to conventional 3D printing techniques used to fabricate conductive metallic features, laser-assisted direct ink writing is not only superior in its ability to produce curvilinear, complex wire patterns in one step, but also in the sense that localized laser heating enables electrically conductive silver wires to be printed directly on low-cost plastic substrates.

According to the study’s first author, Wyss Institute Postdoctoral Fellow Mark Skylar-Scott, Ph.D., the most challenging aspect of honing the technique was optimizing the nozzle-to-laser separation distance.

“If the laser gets too close to the nozzle during printing, heat is conducted upstream which clogs the nozzle with solidified ink,” said Skylar-Scott. “To address this, we devised a heat transfer model to account for temperature distribution along a given silver wire pattern, allowing us to modulate the printing speed and distance between the nozzle and laser to elegantly control the laser annealing process ‘on the fly.’ ”
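The trade-off Skylar-Scott describes can be illustrated with a toy one-dimensional moving-heat-source model (my sketch, with made-up numbers, not the team’s actual heat transfer model): heat reaching upstream of the laser spot decays roughly like exp(-v·d / 2α), so faster printing lets the laser trail closer to the nozzle without clogging it.

```python
import math

def upstream_temp_rise(d_m, speed_m_s, diffusivity_m2_s, rise_at_spot_K):
    """Toy 1-D moving-heat-source estimate: temperature rise a distance d
    upstream of the laser spot decays like exp(-v*d / (2*alpha))."""
    return rise_at_spot_K * math.exp(-speed_m_s * d_m / (2.0 * diffusivity_m2_s))

def min_standoff(speed_m_s, diffusivity_m2_s, rise_at_spot_K, max_rise_K):
    """Smallest nozzle-to-laser distance that keeps the nozzle below a
    clogging threshold, from inverting the exponential above."""
    return (2.0 * diffusivity_m2_s / speed_m_s) * math.log(rise_at_spot_K / max_rise_K)

# Hypothetical numbers, chosen only to show the trend:
alpha = 1.7e-4   # thermal diffusivity of silver, m^2/s (approximate)
spot = 300.0     # temperature rise at the laser spot, K (assumed)
limit = 50.0     # allowable rise at the nozzle before ink solidifies, K (assumed)

slow = min_standoff(0.01, alpha, spot, limit)  # 1 cm/s print speed
fast = min_standoff(0.10, alpha, spot, limit)  # 10 cm/s print speed
print(fast < slow)  # faster printing sweeps heat away, permitting a smaller standoff
```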

The result is that the method can produce not only sweeping curves and spirals but also sharp angular turns and directional changes written into thin air with silver inks, opening up near limitless new potential applications in electronic and biomedical devices that rely on customized metallic architectures.

Seeing is believing, eh?

Here’s a link to and a citation for the paper,

Laser-assisted direct ink writing of planar and 3D metal architectures by Mark A. Skylar-Scott, Suman Gunasekaran, and Jennifer A. Lewis. PNAS [Proceedings of the National Academy of Sciences] 2016 doi: 10.1073/pnas.1525131113

I believe this paper is open access.

A question: I wonder what conditions are necessary before you can 3D print something in midair? Much as I’m dying to try this at home, I’m pretty sure that’s not possible.

Will AI ‘artists’ be able to fool a panel judging entries to the Neukom Institute Prizes in Computational Arts?

There’s an intriguing competition taking place at Dartmouth College (US) according to a May 2, 2016 piece on phys.org (Note: Links have been removed),

Algorithms help us to choose which films to watch, which music to stream and which literature to read. But what if algorithms went beyond their jobs as mediators of human culture and started to create culture themselves?

In 1950 English mathematician and computer scientist Alan Turing published a paper, “Computing Machinery and Intelligence,” which starts off by proposing a thought experiment that he called the “Imitation Game.” In one room is a human “interrogator” and in another room a man and a woman. The goal of the game is for the interrogator to figure out which of the unknown hidden interlocutors is the man and which is the woman. This is to be accomplished by asking a sequence of questions with responses communicated either by a third party or typed out and sent back. “Winning” the Imitation Game means getting the identification right on the first shot.

Turing then modifies the game by replacing one interlocutor with a computer, and asks whether a computer will be able to converse sufficiently well that the interrogator cannot tell the difference between it and the human. This version of the Imitation Game has come to be known as the “Turing Test.”

On May 18 [2016] at Dartmouth, we will explore a different area of intelligence, taking up the question of distinguishing machine-generated art. Specifically, in our “Turing Tests in the Creative Arts,” we ask if machines are capable of generating sonnets, short stories, or dance music that is indistinguishable from human-generated works, though perhaps not yet so advanced as Shakespeare, O. Henry or Daft Punk.

The piece on phys.org is a crossposting of a May 2, 2016 article by Michael Casey and Daniel N. Rockmore for The Conversation. The article goes on to describe the competitions,

The dance music competition (“Algorhythms”) requires participants to construct an enjoyable (fun, cool, rad, choose your favorite modifier for having an excellent time on the dance floor) dance set from a predefined library of dance music. In this case the initial random “seed” is a single track from the database. The software package should be able to use this as inspiration to create a 15-minute set, mixing and modifying choices from the library, which includes standard annotations of more than 20 features, such as genre, tempo (bpm), beat locations, chroma (pitch) and brightness (timbre).
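To give a sense of what an “Algorhythms” entry might do with those annotations, here’s a minimal sketch (entirely my own invention, including the library format and field names, and certainly not any actual competition software): start from the seed track and greedily pick the unused track with the nearest tempo until the set reaches 15 minutes.

```python
# Toy set builder: from a seed track, greedily pick the unused track with the
# closest bpm until the set reaches the target length.
def build_set(library, seed_id, target_minutes=15):
    tracks = {t["id"]: t for t in library}
    current = tracks[seed_id]
    playlist, elapsed = [current], current["minutes"]
    used = {seed_id}
    while elapsed < target_minutes:
        candidates = [t for t in library if t["id"] not in used]
        if not candidates:
            break
        # favor smooth tempo transitions: nearest bpm to the current track
        current = min(candidates, key=lambda t: abs(t["bpm"] - current["bpm"]))
        playlist.append(current)
        used.add(current["id"])
        elapsed += current["minutes"]
    return playlist

library = [
    {"id": "a", "bpm": 120, "minutes": 4},
    {"id": "b", "bpm": 124, "minutes": 4},
    {"id": "c", "bpm": 128, "minutes": 4},
    {"id": "d", "bpm": 90,  "minutes": 4},
]
print([t["id"] for t in build_set(library, "a")])  # → ['a', 'b', 'c', 'd']
```

A real entry would of course also exploit beat locations, chroma and brightness for mixing, but the seed-then-select structure is the essence of the task.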

In what might seem a stiffer challenge, the sonnet and short story competitions (“PoeTix” and “DigiLit,” respectively) require participants to submit self-contained software packages that upon the “seed” or input of a (common) noun phrase (such as “dog” or “cheese grater”) are able to generate the desired literary output. Moreover, the code should ideally be able to generate an infinite number of different works from a single given prompt.

To perform the test, we will screen the computer-made entries to eliminate obvious machine-made creations. We’ll mix human-generated work with the rest, and ask a panel of judges to say whether they think each entry is human- or machine-generated. For the dance music competition, scoring will be left to a group of students, dancing to both human- and machine-generated music sets. A “winning” entry will be one that is statistically indistinguishable from the human-generated work.
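“Statistically indistinguishable” can be made concrete with a simple binomial check (my illustration of the idea, not the organizers’ stated methodology): if judges labelling entries human- or machine-made do no better than coin-flipping, the machine passes.

```python
import math

def binom_p_value(correct, trials, p=0.5):
    """Probability of at least `correct` right calls out of `trials`
    if judges were guessing at random (p = 0.5)."""
    return sum(math.comb(trials, k) * p**k * (1 - p)**(trials - k)
               for k in range(correct, trials + 1))

# 100 judgments of machine-made entries:
print(binom_p_value(52, 100))  # ~0.38: consistent with guessing, i.e. indistinguishable
print(binom_p_value(75, 100))  # tiny: judges reliably spot the machine
```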

The competitions are open to any and all comers [competition is now closed; the deadline was April 15, 2016]. To date, entrants include academics as well as nonacademics. As best we can tell, no companies have officially thrown their hats into the ring. This is somewhat of a surprise to us, as in the literary realm companies are already springing up around machine generation of more formulaic kinds of “literature,” such as earnings reports and sports summaries, and there is of course a good deal of AI automation around streaming music playlists, most famously Pandora.

The authors discuss issues with judging the entries,

Evaluation of the entries will not be entirely straightforward. Even in the initial Imitation Game, the question was whether conversing with men and women over time would reveal their gender differences. (It’s striking that this question was posed by a closeted gay man [Alan Turing].) The Turing Test, similarly, asks whether the machine’s conversation reveals its lack of humanity not in any single interaction but in many over time.

It’s also worth considering the context of the test/game. Is the probability of winning the Imitation Game independent of time, culture and social class? Arguably, as we in the West approach a time of more fluid definitions of gender, that original Imitation Game would be more difficult to win. Similarly, what of the Turing Test? In the 21st century, our communications are increasingly with machines (whether we like it or not). Texting and messaging have dramatically changed the form and expectations of our communications. For example, abbreviations, misspellings and dropped words are now almost the norm. The same considerations apply to art forms as well.

The authors also pose the question: Who is the artist?

Thinking about art forms leads naturally to another question: who is the artist? Is the person who writes the computer code that creates sonnets a poet? Is the programmer of an algorithm to generate short stories a writer? Is the coder of a music-mixing machine a DJ?

Where is the divide between the artist and the computational assistant and how does the drawing of this line affect the classification of the output? The sonnet form was constructed as a high-level algorithm for creative work – though one that’s executed by humans. Today, when the Microsoft Office Assistant “corrects” your grammar or “questions” your word choice and you adapt to it (either happily or out of sheer laziness), is the creative work still “yours” or is it now a human-machine collaborative work?

That’s an interesting question and one I asked in the context of two ‘mashup’ art exhibitions in Vancouver (Canada) in my March 8, 2016 posting.

Getting back to Dartmouth College and its Neukom Institute Prizes in Computational Arts, here’s a list of the competition judges from the competition homepage,

David Cope (Composer, Algorithmic Music Pioneer, UCSC Music Professor)
David Krakauer (President, the Santa Fe Institute)
Louis Menand (Pulitzer Prize winning author and Professor at Harvard University)
Ray Monk (Author, Biographer, Professor of Philosophy)
Lynn Neary (NPR: Correspondent, Arts Desk and Guest Host)
Joe Palca (NPR: Correspondent, Science Desk)
Robert Siegel (NPR: Senior Host, All Things Considered)

The announcements will be made Wednesday, May 18, 2016. I can hardly wait!

Addendum

Martin Robbins has written a rather amusing May 6, 2016 post for the Guardian science blogs on AI and art critics where he also notes that the question: What is art? is unanswerable (Note: Links have been removed),

Jonathan Jones is unhappy about artificial intelligence. It might be hard to tell from a casual glance at the art critic’s recent column, “The digital Rembrandt: a new way to mock art, made by fools,” but if you look carefully the subtle clues are there. His use of the adjectives “horrible, tasteless, insensitive and soulless” in a single sentence, for example.

The source of Jones’s ire is a new piece of software that puts… I’m so sorry… the ‘art’ into ‘artificial intelligence’. By analyzing a subset of Rembrandt paintings that featured ‘bearded white men in their 40s looking to the right’, its algorithms were able to extract the key features that defined the Dutchman’s style. …

Of course an artificial intelligence is the worst possible enemy of a critic, because it has no ego and literally does not give a crap what you think. An arts critic trying to deal with an AI is like an old school mechanic trying to replace the battery in an iPhone – lost, possessing all the wrong tools and ultimately irrelevant. I’m not surprised Jones is angry. If I were in his shoes, a computer painting a Rembrandt would bring me out in hives.

Can a computer really produce art? We can’t answer that without dealing with another question: what exactly is art? …

I wonder what either Robbins or Jones will make of the Dartmouth competition?

Nucleic acid-based memory storage

We’re running out of memory. To be more specific, there are two problems: the supply of silicon and a limit to how much silicon-based memory can store. An April 27, 2016 news item on Nanowerk announces a nucleic acid-based approach to solving the memory problem,

A group of Boise State [Boise State University in Idaho, US] researchers, led by associate professor of materials science and engineering and associate dean of the College of Innovation and Design Will Hughes, is working toward a better way to store digital information using nucleic acid memory (NAM).

An April 25, 2016 Boise State University news release, which originated the news item, expands on the theme of computer memory and provides more details about the approach,

It’s no secret that as a society we generate vast amounts of data each year. So much so that server farms today draw roughly 30 billion watts of electricity, about the output of 30 nuclear power plants.

And the demand keeps growing. The global flash memory market is predicted to reach $30.2 billion this year, potentially growing to $80.3 billion by 2025. Experts estimate that by 2040, the demand for global memory will exceed the projected supply of silicon (the raw material used to store flash memory). Furthermore, electronic memory is rapidly approaching its fundamental size limits because of the difficulty in storing electrons in small dimensions.

Hughes, with post-doctoral researcher Reza Zadegan and colleagues Victor Zhirnov (Semiconductor Research Corporation), Gurtej Sandhu (Micron Technology Inc.) and George Church (Harvard University), is looking to DNA molecules to solve the problem. Nucleic acid — the “NA” in “DNA” — far surpasses electronic memory in retention time, according to the researchers, while also providing greater information density and energy of operation.

Their conclusions are outlined in an invited commentary in the prestigious journal Nature Materials published earlier this month.

“DNA is the data storage material of life in general,” said Hughes. “Because of its physical and chemical properties, it also may become the data storage material of our lives.” It may sound like science fiction, but Hughes will participate in an invitation-only workshop this month at the Intelligence Advanced Research Projects Activity (IARPA) to envision a portable DNA hard drive that would hold 500 terabytes of searchable data – that’s about the size of the Library of Congress Web Archive.

“When information bits are encoded into polymer strings, researchers and manufacturers can manage and manipulate physical, chemical and biological information with standard molecular biology techniques,” the paper [in Nature Materials?] states.

Cost-competitive technologies to read and write DNA could lead to real-world applications ranging from artificial chromosomes, digital hard drives and information-management systems, to a platform for watermarking and tracking genetic content or next-generation encryption tools that necessitate physical rather than electronic embodiment.

Here’s how it works. Current binary code uses 0’s and 1’s to represent bits of information. A computer program then accesses a specific decoder to turn the numbers back into usable data. With nucleic acid memory, 0’s and 1’s are replaced with the nucleotides A, T, C and G. Known as monomers, they are covalently bonded to form longer polymer chains, also known as information strings.
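The two-bits-per-nucleotide idea is easy to sketch in code. The particular bit-to-base assignment below is arbitrary (my choice for illustration); real NAM schemes add error correction and avoid troublesome sequences such as long homopolymer runs.

```python
# Map each 2-bit pair to one of the four nucleotides, and back.
ENCODE = {"00": "A", "01": "C", "10": "G", "11": "T"}
DECODE = {base: bits for bits, base in ENCODE.items()}

def bytes_to_dna(data: bytes) -> str:
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(ENCODE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def dna_to_bytes(strand: str) -> bytes:
    bits = "".join(DECODE[base] for base in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

strand = bytes_to_dna(b"DNA")
print(strand)                       # each byte becomes four nucleotides
assert dna_to_bytes(strand) == b"DNA"  # lossless round trip
```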

Because of DNA’s superior ability to store data, all the information in the world could fit in a small box measuring 10 x 10 x 10 centimeters. NAM could thus be used as a sustainable time capsule for massive scientific, financial, governmental, historical, genealogical, personal and genetic records.
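A quick back-of-envelope check of that box claim (the storage density is an assumption on my part; DNA is often credited with very roughly one bit per cubic nanometer, and only the order of magnitude matters here):

```python
# Capacity of a 10 x 10 x 10 cm box at an assumed 1 bit per cubic nanometer.
box_side_nm = 10**8                     # 10 cm expressed in nanometers
box_volume_nm3 = box_side_nm ** 3       # = 10**24 cubic nanometers
capacity_bits = box_volume_nm3 * 1      # assumed density: 1 bit per nm^3

zettabyte_bits = 8 * 10**21
print(capacity_bits // zettabyte_bits)  # → 125, i.e. ~125 ZB of raw capacity
```

Estimates of the world’s digital data in 2016 ran to a few tens of zettabytes, so the claim is at least the right order of magnitude.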

Better yet, DNA can store digital information for a very long time – thousands to millions of years. Usable information has already been extracted from DNA in bones 700,000 years old, making nucleic acid memory a promising archival material. And nucleic acid memory uses 100 million times less energy than storing data electronically in flash, with data that can live on for generations.

At Boise State, Hughes and Zadegan are examining DNA’s stability under extreme conditions. DNA strands are subjected to temperatures varying from negative 20 degrees Celsius to 100 degrees Celsius, and to a variety of UV exposures to see if they can still retain their information. What they’re finding is that much less information is lost with NAM than with the current state of the industry.

Here’s a link to and a citation for the Nature Materials paper,

Nucleic acid memory by Victor Zhirnov, Reza M. Zadegan, Gurtej S. Sandhu, George M. Church, & William L. Hughes. Nature Materials 15, 366–370 (2016)  doi:10.1038/nmat4594 Published online 23 March 2016

This paper is behind a paywall.

Tune your windows for privacy

Caption: With an applied voltage, the nanowires on either side of the glass become attracted to each other and move toward each other, squeezing and deforming the soft elastomer. Because the nanowires are scattered unevenly across the surface, the elastomer deforms unevenly. That uneven roughness causes light to scatter, turning the glass opaque. Credit: David Clarke/Harvard SEAS [School of Engineering and Applied Sciences]

Right now, this is my favourite science illustration. A March 14, 2016 news item on Nanowerk announces Harvard’s new technology that can turn a clear window into an opaque one at the touch of a switch,

Say goodbye to blinds.

Researchers at the Harvard John A. Paulson School of Engineering and Applied Sciences have developed a technique that can quickly change the opacity of a window, turning it cloudy, clear or somewhere in between with the flick of a switch.

Tunable windows aren’t new but most previous technologies have relied on electrochemical reactions achieved through expensive manufacturing. This technology, developed by David Clarke, the Extended Tarr Family Professor of Materials, and postdoctoral fellow Samuel Shian, uses geometry [to] adjust the transparency of a window.

A March 14, 2016 Harvard University news release (also on EurekAlert) by Leah Burrows, which originated the news item, describes the technology in more detail,

The tunable window is comprised of a sheet of glass or plastic, sandwiched between transparent, soft elastomers sprayed with a coating of silver nanowires, too small to scatter light on their own.

But apply an electric voltage and things change quickly.

With an applied voltage, the nanowires on either side of the glass are energized to move toward each other, squeezing and deforming the soft elastomer. Because the nanowires are distributed unevenly across the surface, the elastomer deforms unevenly. The resulting uneven roughness causes light to scatter, turning the glass opaque.

The change happens in less than a second.

Shian compared it to a frozen pond: “If the frozen pond is smooth, you can see through the ice. But if the ice is heavily scratched, you can’t see through.”

Clarke and Shian found that the roughness of the elastomer surface depended on the voltage, so if you wanted a window that is only lightly clouded, you would apply less voltage than if you wanted a totally opaque window.

“Because this is a physical phenomenon rather than based on a chemical reaction, it is a simpler and potentially cheaper way to achieve commercial tunable windows,” said Clarke.

Current chemical-based controllable windows use vacuum deposition to coat the glass, a process that deposits layers of a material molecule by molecule. It’s expensive and painstaking. In Clarke and Shian’s method, the nanowire layer can be sprayed or peeled onto the elastomer, making the technology scalable for larger architectural projects.

Next the team is working on incorporating thinner elastomers, which would require lower voltages better suited to standard electrical supplies.

Here’s a link to and a citation for the paper,

Electrically tunable window device by Samuel Shian and David R. Clarke. Optics Letters Vol. 41, Issue 6, pp. 1289-1292 (2016) doi: 10.1364/OL.41.001289

This is an open access paper.

Namib beetles, cacti, and pitcher plants teach scientists at Harvard University (US)

In this latest work from Harvard University’s Wyss Institute for Biologically Inspired Engineering, scientists have looked to three moisture-harvesting organisms for survival strategies in water-poor areas. From a Feb. 25, 2016 news item on Nanowerk,

Organisms such as cacti and desert beetles can survive in arid environments because they’ve evolved mechanisms to collect water from thin air. The Namib desert beetle, for example, collects water droplets on the bumps of its shell while V-shaped cactus spines guide droplets to the plant’s body.

As the planet grows drier, researchers are looking to nature for more effective ways to pull water from air. Now, a team of researchers from the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) and the Wyss Institute for Biologically Inspired Engineering at Harvard University has drawn inspiration from these organisms to develop a better way to promote and transport condensed water droplets.

A Feb. 24, 2016 Harvard University press release by Leah Burrows (also on EurekAlert), which originated the news item, expands on the theme,

“Everybody is excited about bioinspired materials research,” said Joanna Aizenberg, the Amy Smith Berylson Professor of Materials Science at SEAS and core faculty member of the Wyss Institute. “However, so far, we tend to mimic one inspirational natural system at a time. Our research shows that a complex bio-inspired approach, in which we marry multiple biological species to come up with non-trivial designs for highly efficient materials with unprecedented properties, is a new, promising direction in biomimetics.”

The new system, described in Nature, is inspired by the bumpy shell of desert beetles, the asymmetric structure of cactus spines and slippery surfaces of pitcher plants. The material harnesses the power of these natural systems, plus Slippery Liquid-Infused Porous Surfaces technology (SLIPS) developed in Aizenberg’s lab, to collect and direct the flow of condensed water droplets.

This approach is promising not only for harvesting water but also for industrial heat exchangers.

“Thermal power plants, for example, rely on condensers to quickly convert steam to liquid water,” said Philseok Kim, co-author of the paper and co-founder and vice president of technology at SEAS spin-off SLIPS Technologies, Inc. “This design could help speed up that process and even allow for operation at a higher temperature, significantly improving the overall energy efficiency.”

The major challenges in harvesting atmospheric water are controlling the size of the droplets, the speed at which they form and the direction in which they flow.

For years, researchers focused on the hybrid chemistry of the beetle’s bumps — a hydrophilic top with hydrophobic surroundings — to explain how the beetle attracted water. However, Aizenberg and her team took inspiration from a different possibility – that convex bumps themselves also might be able to harvest water.

“We experimentally found that the geometry of bumps alone could facilitate condensation,” said Kyoo-Chul Park, a postdoctoral researcher and the first author of the paper. “By optimizing that bump shape through detailed theoretical modeling and combining it with the asymmetry of cactus spines and the nearly friction-free coatings of pitcher plants, we were able to design a material that can collect and transport a greater volume of water in a short time compared to other surfaces.”

“Without one of those parameters, the whole system would not work synergistically to promote both the growth and accelerated directional transport of even small, fast condensing droplets,” said Park.

“This research is an exciting first step towards developing a passive system that can efficiently collect water and guide it to a reservoir,” said Kim.

Here’s a link to and a citation for the paper,

Condensation on slippery asymmetric bumps by Kyoo-Chul Park, Philseok Kim, Alison Grinthal, Neil He, David Fox, James C. Weaver, & Joanna Aizenberg. Nature (2016) doi:10.1038/nature16956 Published online 24 February 2016

This paper is behind a paywall.

I have featured the Namib beetle and its water harvesting capabilities most recently in a July 29, 2014 posting and the most recent story I have about SLIPS is in an Oct. 14, 2014 posting.

Using copyright to shut down easy access to scientific research

This started out as a simple post on copyright and publishers vis à vis Sci-Hub but then John Dupuis wrote a think piece (with which I disagree somewhat) on the situation in a Feb. 22, 2016 posting on his blog, Confessions of a Science Librarian. More on Dupuis and my take on it after a description of the situation.

Sci-Hub

Before getting to the controversy and legal suit, here’s a preamble about the purpose for copyright as per the US constitution from Mike Masnick’s Feb. 17, 2016 posting on Techdirt,

Lots of people are aware of the Constitutional underpinnings of our copyright system. Article 1, Section 8, Clause 8 famously says that Congress has the following power:

To promote the progress of science and useful arts, by securing for limited times to authors and inventors the exclusive right to their respective writings and discoveries.

We’ve argued at great length over the importance of the preamble of that section, “to promote the progress,” but many people are confused about the terms “science” and “useful arts.” In fact, many people not well-versed in the issue often get the two backwards and think that “science” refers to inventions, and thus enables a patent system, while “useful arts” refers to “artistic works” and thus enables the copyright system. The opposite is actually the case. “Science” at the time the Constitution was written was actually synonymous with “learning” and “education” (while “useful arts” was a term meaning invention and new productivity tools).

While over the centuries, many who stood to benefit from an aggressive system of copyright control have tried to rewrite, whitewash or simply ignore this history, turning the copyright system falsely into a “property” regime, the fact is that it was always intended as a system to encourage the wider dissemination of ideas for the purpose of education and learning. The (potentially misguided) intent appeared to be that by granting exclusive rights to a certain limited class of works, it would encourage the creation of those works, which would then be useful in educating the public (and within a few decades enter the public domain).

Masnick’s preamble leads to a case where Elsevier (Publishers) has attempted to halt the very successful Sci-Hub, which bills itself as “the first pirate website in the world to provide mass and public access to tens of millions of research papers.” From Masnick’s Feb. 17, 2016 posting,

Rightfully, this is being celebrated as a massive boon to science and learning, making these otherwise hidden nuggets of knowledge and science that were previously locked up and hidden away available to just about anyone. And, to be clear, this absolutely fits with the original intent of copyright law — which was to encourage such learning. In a very large number of cases, it is not the creators of this content and knowledge who want the information to be locked up. Many researchers and academics know that their research has much more of an impact the wider it is seen, read, shared and built upon. But the gatekeepers — such as Elsevier and other large academic publishers — have stepped in and demanded copyright, basically for doing very little.

They do not pay the researchers for their work. Often, in fact, that work is funded by taxpayer funds. In some cases, in certain fields, the publishers actually demand that the authors of these papers pay to submit them. The journals do not pay to review the papers either. They outsource that work to other academics for “peer review” — which again, is unpaid. Finally, these publishers profit massively, having convinced many universities that they need to subscribe, often paying many tens or even hundreds of thousands of dollars for subscriptions to journals that very few actually read.

Simon Oxenham of the Neurobonkers blog on the big think website wrote a Feb. 9 (?), 2016 post about Sci-Hub, its originator, and its current legal fight (Note: Links have been removed),

On September 5th, 2011, Alexandra Elbakyan, a researcher from Kazakhstan, created Sci-Hub, a website that bypasses journal paywalls, illegally providing access to nearly every scientific paper ever published immediately to anyone who wants it. …

This was a game changer. Before September 2011, there was no way for people to freely access paywalled research en masse; researchers like Elbakyan were out in the cold. Sci-Hub is the first website to offer this service and now makes the process as simple as the click of a single button.

As the number of papers in the LibGen database expands, the frequency with which Sci-Hub has to dip into publishers’ repositories falls and consequently the risk of Sci-Hub triggering its alarm bells becomes ever smaller. Elbakyan explains, “We have already downloaded most paywalled articles to the library … we have almost everything!” This may well be no exaggeration. Elsevier, one of the most prolific and controversial scientific publishers in the world, recently alleged in court that Sci-Hub is currently harvesting Elsevier content at a rate of thousands of papers per day. Elbakyan puts the number of papers downloaded from various publishers through Sci-Hub in the range of hundreds of thousands per day, delivered to a running total of over 19 million visitors.

In one fell swoop, a network has been created that likely has a greater level of access to science than any individual university, or even government for that matter, anywhere in the world. Sci-Hub represents the sum of countless different universities’ institutional access — literally a world of knowledge. This is important now more than ever in a world where even Harvard University can no longer afford to pay skyrocketing academic journal subscription fees, while Cornell axed many of its Elsevier subscriptions over a decade ago. For researchers outside the US’ and Western Europe’s richest institutions, routine piracy has long been the only way to conduct science, but increasingly the problem of unaffordable journals is coming closer to home.

… This was the experience of Elbakyan herself, who studied in Kazakhstan University and just like other students in countries where journal subscriptions are unaffordable for institutions, was forced to pirate research in order to complete her studies. Elbakyan told me, “Prices are very high, and that made it impossible to obtain papers by purchasing. You need to read many papers for research, and when each paper costs about 30 dollars, that is impossible.”

Sci-Hub is not expected to win its case in the US, where one judge has already ordered a preliminary injunction making its former domain unavailable. (Sci-Hub moved.) Should you be sympathetic to Elsevier, you may want to take this into account (Note: Links have been removed),

Elsevier is the world’s largest academic publisher and by far the most controversial. Over 15,000 researchers have vowed to boycott the publisher for charging “exorbitantly high prices” and bundling expensive, unwanted journals with essential journals, a practice that allegedly is bankrupting university libraries. Elsevier also supports SOPA and PIPA, which the researchers claim threatens to restrict the free exchange of information. Elsevier is perhaps most notorious for delivering takedown notices to academics, demanding them to take their own research published with Elsevier off websites like Academia.edu.

The movement against Elsevier has only gathered speed over the course of the last year with the resignation of 31 editorial board members from the Elsevier journal Lingua, who left in protest to set up their own open-access journal, Glossa. Now the battleground has moved from the comparatively niche field of linguistics to the far larger field of cognitive sciences. Last month, a petition of over 1,500 cognitive science researchers called on the editors of the Elsevier journal Cognition to demand Elsevier offer “fair open access”. Elsevier currently charges researchers $2,150 per article if researchers wish their work published in Cognition to be accessible by the public, a sum far higher than the charges that led to the Lingua mutiny.

In her letter to Sweet [New York District Court Judge Robert W. Sweet], Elbakyan made a point that will likely come as a shock to many outside the academic community: Researchers and universities don’t earn a single penny from the fees charged by publishers [emphasis mine] such as Elsevier for accepting their work, while Elsevier has an annual income over a billion U.S. dollars.

As Masnick noted, much of this research is done on the public dime (i.e., funded by taxpayers). For her part, Elbakyan has written a letter defending her actions on ethical rather than legal grounds.

I recommend reading the Oxenham article as it provides details about how the site works and includes text from the letter Elbakyan wrote.  For those who don’t have much time, Masnick’s post offers a good précis.

Sci-Hub suit as a distraction from the real issues?

Getting to Dupuis’ Feb. 22, 2016 posting and his perspective on the situation,

My take? Mostly that it’s a sideshow.

One aspect that I have ranted about on Twitter which I think is worth mentioning explicitly is that I think Elsevier and all the other big publishers are actually quite happy to feed the social media rage machine with these whack-a-mole controversies. The controversies act as a sideshow, distracting from the real issues and solutions that they would prefer all of us not to think about.

By whack-a-mole controversies I mean this recurring story of some person or company or group that wants to “free” scholarly articles and then gets sued or harassed by the big publishers or their proxies to force them to shut down. This provokes wide outrage and condemnation aimed at the publishers, especially Elsevier who is reserved a special place in hell according to most advocates of openness (myself included).

In other words: Elsevier and its ilk are thrilled to be the target of all the outrage. Focusing on the whack-a-mole game distracts us from fixing the real problem: the entrenched systems of prestige, incentive and funding in academia. As long as researchers are channelled into “high impact” journals, as long as tenure committees reward publishing in closed rather than open venues, nothing will really change. Until funders get serious about mandating true open access publishing and are willing to put their money where their intentions are, nothing will change. Or at least, progress will be mostly limited to surface victories rather than systemic change.

I think Dupuis is referencing a conflict theory (I can’t remember what it’s called) which suggests that certain types of conflicts help to keep systems in place while apparently attacking those systems. His point is well made but I disagree somewhat in that I think these conflicts can also raise awareness and activate people who might otherwise ignore or mindlessly comply with those systems. So, if Elsevier and the other publishers are using these legal suits as diversionary tactics, they may find they’ve made a strategic error.

ETA April 29, 2016: Sci-Hub does seem to move around so I’ve updated the links so it can be accessed but Sci-Hub’s situation can change at any moment.

Graphene like water

This is graphene research from Harvard University and Raytheon according to a Feb. 11, 2016 news item on phys.org (Note: Links have been removed),

It’s one atom thick [i.e., two-dimensional], stronger than steel, harder than diamond and one of the most conductive materials on earth.

But, several challenges must be overcome before graphene products are brought to market. Scientists are still trying to understand the basic physics of this unique material. Also, it’s very challenging to make and even harder to make without impurities.

In a new paper published in Science, researchers at the [sic] Harvard and Raytheon BBN Technology have advanced our understanding of graphene’s basic properties, observing for the first time electrons in a metal behaving like a fluid.

A Feb. 11, 2016 Harvard University press release by Leah Burrows (also on EurekAlert), which originated the news item, provides more detail,

In order to make this observation, the team improved methods to create ultra-clean graphene and developed a new way to measure its thermal conductivity. This research could lead to novel thermoelectric devices as well as provide a model system to explore exotic phenomena like black holes and high-energy plasmas.

An electron super highway

In ordinary, three-dimensional metals, electrons hardly interact with each other. But graphene’s two-dimensional, honeycomb structure acts like an electron superhighway in which all the particles have to travel in the same lane. The electrons in graphene act like massless relativistic objects, some with positive charge and some with negative charge. They move at incredible speed — 1/300 of the speed of light — and have been predicted to collide with each other ten trillion times a second at room temperature. These intense interactions between charged particles have never been observed in an ordinary metal before.
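Those quoted figures are easy to sanity-check. This quick back-of-envelope calculation is mine, not the press release’s: taking the stated speed of c/300 and a collision rate of ten trillion per second gives the typical distance an electron travels between collisions.

```python
# Back-of-envelope check of the figures quoted above (speeds and rates
# as stated in the press release; the arithmetic is the blogger's).
c = 3.0e8                       # speed of light, m/s
v_fermi = c / 300               # electron speed in graphene ~ 1e6 m/s
collision_rate = 1.0e13         # ~ ten trillion collisions per second
mean_free_path = v_fermi / collision_rate  # distance between collisions, m

print(f"electron speed: {v_fermi:.0e} m/s")
print(f"mean free path: {mean_free_path * 1e9:.0f} nm")  # -> 100 nm
```

A mean free path of roughly 100 nanometres is short enough that, within a micron-scale sample, electrons scatter off each other many times before reaching a contact, which is why fluid-like (hydrodynamic) behaviour becomes possible.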

The team created an ultra-clean sample by sandwiching the one-atom-thick graphene sheet between tens of layers of an electrically insulating, perfectly transparent crystal with an atomic structure similar to that of graphene.

“If you have a material that’s one atom thick, it’s going to be really affected by its environment,” said Jesse Crossno, a graduate student in the Kim Lab [Philip Kim, professor of physics and applied physics] and first author of the paper.  “If the graphene is on top of something that’s rough and disordered, it’s going to interfere with how the electrons move. It’s really important to create graphene with no interference from its environment.”

The technique was developed by Kim and his collaborators at Columbia University before he moved to Harvard in 2014 and has since been perfected in his lab at SEAS [Harvard School of Engineering and Applied Sciences].

Next, the team set up a kind of thermal soup of positively charged and negatively charged particles on the surface of the graphene, and observed how those particles flowed as thermal and electric currents.

What they observed flew in the face of everything they knew about metals.

A black hole on a chip

Most of our world — how water flows or how a curve ball curves —  is described by classical physics. Very small things, like electrons, are described by quantum mechanics while very large and very fast things, like galaxies, are described by relativistic physics, pioneered by Albert Einstein.

Combining these laws of physics is notoriously difficult but there are extreme examples where they overlap. High-energy systems like supernovas and black holes can be described by linking classical theories of hydrodynamics with Einstein’s theories of relativity.

But it’s difficult to run an experiment on a black hole. Enter graphene.

When the strongly interacting particles in graphene were driven by an electric field, they behaved not like individual particles but like a fluid that could be described by hydrodynamics.

“Instead of watching how a single particle was affected by an electric or thermal force, we could see the conserved energy as it flowed across many particles, like a wave through water,” said Crossno.

“Physics we discovered by studying black holes and string theory, we’re seeing in graphene,” said Andrew Lucas, co-author and graduate student with Subir Sachdev, the Herchel Smith Professor of Physics at Harvard. “This is the first model system of relativistic hydrodynamics in a metal.”

Moving forward, a small chip of graphene could be used to model the fluid-like behavior of other high-energy systems.

Industrial implications

So we now know that strongly interacting electrons in graphene behave like a liquid — how does that advance the industrial applications of graphene?

First, in order to observe the hydrodynamic system, the team needed to develop a precise way to measure how well electrons in the system carry heat.  It’s very difficult to do, said co-PI Kin Chung Fong, scientist with Raytheon BBN Technology.

Materials conduct heat in two ways: through vibrations in the atomic structure or lattice; and carried by the electrons themselves.

“We needed to find a clever way to ignore the heat transfer from the lattice and focus only on how much heat is carried by the electrons,” Fong said.

To do so, the team turned to noise. At finite temperature, the electrons move about randomly:  the higher the temperature, the noisier the electrons. By measuring the temperature of the electrons to three decimal points, the team was able to precisely measure the thermal conductivity of the electrons.
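The noise-thermometry trick described above is, in textbook form, the Johnson-Nyquist relation: the mean-square voltage noise across a resistor is proportional to its temperature. Here is a small sketch using that standard formula; the 1 kΩ resistance and 100 MHz bandwidth are illustrative numbers of mine, not values from the study.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def noise_voltage(temp_k, resistance_ohm, bandwidth_hz):
    """Johnson-Nyquist thermal noise: V_rms = sqrt(4 k_B T R df)."""
    return math.sqrt(4 * K_B * temp_k * resistance_ohm * bandwidth_hz)

def noise_temperature(v_rms, resistance_ohm, bandwidth_hz):
    """Invert the relation to recover the electron temperature
    from a measured rms voltage noise."""
    return v_rms ** 2 / (4 * K_B * resistance_ohm * bandwidth_hz)

# Hypothetical readout: a 1 kOhm sample over a 100 MHz bandwidth
v = noise_voltage(300.0, 1e3, 1e8)
print(f"noise at 300 K: {v * 1e6:.2f} uV rms")
print(f"recovered temperature: {noise_temperature(v, 1e3, 1e8):.3f} K")
```

The point of the sketch is the round trip: hotter electrons are noisier, so a sufficiently precise noise measurement reads back the electron temperature directly, without touching the lattice’s contribution to heat flow.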

“This work provides a new way to control the rate of heat transduction in graphene’s electron system, and as such will be key for energy and sensing-related applications,” said Leonid Levitov, professor of physics at MIT [Massachusetts Institute of Technology].

“Converting thermal energy into electric currents and vice versa is notoriously hard with ordinary materials,” said Lucas. “But in principle, with a clean sample of graphene there may be no limit to how good a device you could make.”
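For context, the “breakdown” in the paper’s title refers to the Wiedemann-Franz law, which ties a metal’s electronic thermal conductivity to its electrical conductivity through a universal constant, the Lorenz number. A short reference sketch (standard textbook physics, not code from the study):

```python
import math

K_B = 1.380649e-23        # Boltzmann constant, J/K
E_CHARGE = 1.602176634e-19  # elementary charge, C

# Sommerfeld value of the Lorenz number: L0 = (pi^2 / 3) * (k_B / e)^2
L0 = (math.pi ** 2 / 3) * (K_B / E_CHARGE) ** 2
print(f"L0 = {L0:.3e} W*Ohm/K^2")  # ~ 2.44e-8

def wf_thermal_conductivity(sigma, temp_k, lorenz=L0):
    """Wiedemann-Franz prediction: kappa_e = L * sigma * T
    (units follow those of sigma). The Dirac-fluid result reported
    in the paper is a strong departure from this relation."""
    return lorenz * sigma * temp_k
```

In an ordinary metal, measuring the electrical conductivity pins down the electronic heat conduction via this relation; the Harvard/Raytheon result is notable precisely because graphene’s electron fluid violates it.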

Here’s a link to and a citation for the paper,

Observation of the Dirac fluid and the breakdown of the Wiedemann-Franz law in graphene by Jesse Crossno, Jing K. Shi, Ke Wang, Xiaomeng Liu, Achim Harzheim, Andrew Lucas, Subir Sachdev, Philip Kim, Takashi Taniguchi, Kenji Watanabe, Thomas A. Ohki, and Kin Chung Fong. Science, 11 Feb 2016. DOI: 10.1126/science.aad0343

This paper is behind a paywall.

Here’s an image illustrating the research,

Caption: In a new paper published in Science, researchers at the Harvard and Raytheon BBN Technology have advanced our understanding of graphene’s basic properties, observing for the first time electrons in a metal behaving like a fluid. Credit: Peter Allen/Harvard SEAS