Monthly Archives: May 2015

Synthesizing nerve tissues with 3D printers and cellulose nanocrystals (CNC)

There are lots of stories about bioprinting and tissue engineering here and I think it’s time (again) for one that has some good, detailed descriptions and, bonus, features cellulose nanocrystals (CNC) and graphene. From a May 13, 2015 news item on Azonano,

The printer looks like a toaster oven with the front and sides removed. Its metal frame is built up around a stainless steel circle lit by an ultraviolet light. Stainless steel hydraulics and thin black tubes line the back edge, which lead to an inner, topside box made of red plastic.

In front, the metal is etched with the red Bio Bot logo. All together, the gray metal frame is small enough to fit on top of an old-fashioned school desk, but nothing about this 3D printer is old school. In fact, the tissue-printing machine is more like a sci-fi future in the flesh—and it has very real medical applications.

Researchers at Michigan Technological University hope to use this newly acquired 3D bioprinter to make synthesized nerve tissue. The key is developing the right “bioink” or printable tissue. The nanotechnology-inspired material could help regenerate damaged nerves for patients with spinal cord injuries, says Tolou Shokuhfar, an assistant professor of mechanical engineering and biomedical engineering at Michigan Tech.

Shokuhfar directs the In-Situ Nanomedicine and Nanoelectronics Laboratory at Michigan Tech, and she is an adjunct assistant professor in the Bioengineering Department and the College of Dentistry at the University of Illinois at Chicago.

In the bioprinting research, Shokuhfar collaborates with Reza Shahbazian-Yassar, the Richard and Elizabeth Henes Associate Professor in the Department of Mechanical Engineering-Engineering Mechanics at Michigan Tech. Shahbazian-Yassar’s highly interdisciplinary background on cellulose nanocrystals as biomaterials, funded by the National Science Foundation’s (NSF) Biomaterials Program, helped inspire the lab’s new 3D printing research. “Cellulose nanocrystals with extremely good mechanical properties are highly desirable for bioprinting of scaffolds that can be used for live tissues,” says Shahbazian-Yassar. [emphases mine]

A May 11, 2015 Michigan Technological University (MTU) news release by Allison Mills, which originated the news item, explains the ‘why’ of the research,

“We wanted to target a big issue,” Shokuhfar says, explaining that nerve regeneration is a particularly difficult biomedical engineering conundrum. “We are born with all the nerve cells we’ll ever have, and damaged nerves don’t heal very well.”

Other facilities are trying to address this issue as well. Many feature large, room-sized machines that have built-in cell culture hoods, incubators and refrigeration. The precision of this equipment allows them to print full organs. But innovation is more nimble at smaller scales.

“We can pursue nerve regeneration research with a simpler printer set-up,” says Shayan Shafiee, a PhD student working with Shokuhfar. He gestures to the small gray box across the lab bench.

He opens the red box under the top side of the printer’s box. Inside the plastic casing, a large syringe holds a red jelly-like fluid. Shafiee replenishes the needle-tipped printer, pulls up his laptop and, with a hydraulic whoosh, he starts to print a tissue scaffold.

The news release expands on the theme,

At his lab bench in the nanotechnology lab at Michigan Tech, Shafiee holds up a petri dish. Inside is what looks like a red gummy candy, about the size of a half-dollar.

Here’s a video from MTU illustrating the printing process,

Back to the news release, which notes graphene could be instrumental in this research,

“This is based on fractal geometry,” Shafiee explains, pointing out the small crenulations and holes pockmarking the jelly. “These are similar to our vertebrae—the idea is to let a nerve pass through the holes.”

Making the tissue compatible with nerve cells begins long before the printer starts up. Shafiee says the first step is to synthesize a biocompatible polymer that is syrupy—but not too thick—that can be printed. That means Shafiee and Shokuhfar have to create their own materials to print with; there is no Amazon.com or even a specialty shop for bioprinting nerves.

Nerves don’t just need a biocompatible tissue to act as a carrier for the cells. Nerve function is all about electric pulses. This is where Shokuhfar’s nanotechnology research comes in: Last year, she was awarded a CAREER grant from NSF for her work using graphene in biomaterials research. [emphasis mine] “Graphene is a wonder material,” she says. “And it has very good electrical conductivity properties.”

The team is extending the application of this material for nerve cell printing. “Our work always comes back to the question, is it printable or not?” Shafiee says, adding that a successful material—a biocompatible, graphene-bound polymer—may just melt, mush or flat out fail under the pressure of printing. After all, imagine building up a substance more delicate than a soufflé using only the point of a needle. And in the nanotechnology world, a needlepoint is big, even clumsy.

Shafiee and Shokuhfar see these issues as mechanical obstacles that can be overcome.

“It’s like other 3D printers, you need a design to work from,” Shafiee says, adding that he will tweak and hone the methodology for printing nerve cells throughout his dissertation work. He is also hopeful that the material will have use beyond nerve regeneration.

This looks like a news release designed to publicize work funded at MTU by the US National Science Foundation (NSF), which is why there is no mention of published work.

One final comment regarding cellulose nanocrystals (CNC). They have also been called nanocrystalline cellulose (NCC), which you will still see but it seems CNC is emerging as the generic term. NCC has been trademarked by CelluForce, a Canadian company researching and producing CNC (or if you prefer, NCC) from forest products.

US National Institute of Standards and Technology (NIST) and its whispering gallery for graphene electrons

I like this old introduction to research that invoked whispering galleries well enough to reuse it here. From a Feb. 8, 2012 post about whispering galleries for light,

Whispering galleries are always popular with all ages. I know that because I can never get enough time in them as I jostle with seniors, children, young adults, etc. For most humans, the magic of having someone across from you on the other side of the room sound as if they’re beside you whispering in your ear is ever fresh.

According to a May 12, 2015 news item on Nanowerk, the US National Institute of Standards and Technology’s (NIST) whispering gallery is not likely to cause any jostling for space as it exists at the nanoscale,

An international research group led by scientists at the U.S. Commerce Department’s National Institute of Standards and Technology (NIST) has developed a technique for creating nanoscale whispering galleries for electrons in graphene. The development opens the way to building devices that focus and amplify electrons just as lenses focus light and resonators (like the body of a guitar) amplify sound.

The NIST has provided a rather intriguing illustration of this work,

Caption: An international research group led by scientists at NIST has developed a technique for creating nanoscale whispering galleries for electrons in graphene. The researchers used the voltage from a scanning tunneling microscope (right) to push graphene electrons out of a nanoscale area to create the whispering gallery (represented by the protuberances on the left), which is like a circular wall of mirrors to the electron.
credit: Jon Wyrick, CNST/NIST

A May 8, 2015 NIST news release, which originated the news item, gives a delightful introduction to whispering galleries and more details about this research (Note: Links have been removed),

In some structures, such as the dome in St. Paul’s Cathedral in London, a person standing near a curved wall can hear the faintest sound made along any other part of that wall. This phenomenon, called a whispering gallery, occurs because sound waves will travel along a curved surface much farther than they will along a flat one. Using this same principle, scientists have built whispering galleries for light waves as well, and whispering galleries are found in applications ranging from sensing, spectroscopy and communications to the generation of laser frequency combs.

“The cool thing is that we made a nanometer scale electronic analogue of a classical wave effect,” said NIST researcher Joe Stroscio. “These whispering galleries are unlike anything you see in any other electron based system, and that’s really exciting.”

Ever since graphene, a single layer of carbon atoms arranged in a honeycomb lattice, was first created in 2004, the material has impressed researchers with its strength, ability to conduct electricity and heat and many interesting optical, magnetic and chemical properties.

However, early studies of the behavior of electrons in graphene were hampered by defects in the material. As the manufacture of clean and near-perfect graphene becomes more routine, scientists are beginning to uncover its full potential.

When moving electrons encounter a potential barrier in conventional semiconductors, it takes an increase in energy for the electrons to continue flowing. As a result, they are often reflected, just as one would expect from a ball-like particle.

However, because electrons can sometimes behave like a wave, there is a calculable chance that they will ignore the barrier altogether, a phenomenon called tunneling. Due to the light-like properties of graphene electrons, they can pass through unimpeded—no matter how high the barrier—if they hit the barrier head on. This tendency to tunnel makes it hard to steer electrons in graphene.
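
For readers who like numbers, here is a quick Python sketch of that angle dependence. It uses the textbook cos^2(theta) approximation for transmission through a sharp graphene p-n junction (the Klein tunnelling limit), which is my own illustrative shorthand and not the NIST team’s model: electrons hitting head-on sail straight through, while electrons arriving at a glancing angle are mostly reflected, which is exactly what makes a circular ‘wall of mirrors’ possible.

```python
import numpy as np

# Hedged illustration only: the textbook sharp p-n junction approximation for
# graphene, T(theta) ~ cos^2(theta). This is not the NIST paper's model; it
# just shows why head-on electrons tunnel straight through while oblique ones
# are reflected and can be trapped along a curved barrier.
angles_deg = np.array([0, 15, 30, 45, 60, 75, 89])
transmission = np.cos(np.radians(angles_deg)) ** 2

for angle, t in zip(angles_deg, transmission):
    print(f"incidence {angle:2d} deg -> transmission ~ {t:.2f}, reflection ~ {1 - t:.2f}")
```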

Enter the graphene electron whispering gallery.

To create a whispering gallery in graphene, the team first enriched the graphene with electrons from a conductive plate mounted below it. With the graphene now crackling with electrons, the research team used the voltage from a scanning tunneling microscope (STM) to push some of them out of a nanoscale-sized area. This created the whispering gallery, which is like a circular wall of mirrors to the electron.

“An electron that hits the step head-on can tunnel straight through it,” said NIST researcher Nikolai Zhitenev. “But if electrons hit it at an angle, their waves can be reflected and travel along the sides of the curved walls of the barrier until they begin to interfere with one another, creating a nanoscale electronic whispering gallery mode.”

The team can control the size and strength, i.e., the leakiness, of the electronic whispering gallery by varying the STM tip’s voltage. The probe not only creates whispering gallery modes, but can detect them as well.

NIST researcher Yue Zhao fabricated the high mobility device and performed the measurements with her colleagues Fabian Natterer and Jon Wyrick. A team of theoretical physicists from the Massachusetts Institute of Technology developed the theory describing whispering gallery modes in graphene.

Here’s a link to and a citation for the paper,

Creating and probing electron whispering-gallery modes in graphene by Yue Zhao, Jonathan Wyrick, Fabian D. Natterer, Joaquin F. Rodriguez-Nieva, Cyprian Lewandowski, Kenji Watanabe, Takashi Taniguchi, Leonid S. Levitov, Nikolai B. Zhitenev, & Joseph A. Stroscio. Science 8 May 2015: Vol. 348 no. 6235 pp. 672-675. DOI: 10.1126/science.aaa7469

This paper is behind a paywall.

Memristor, memristor, you are popular

Regular readers know I have a long-standing interest in memristors and artificial brains. I have three memristor-related pieces of research, all published in the last month or so, for this post.

First, there’s some research into nano memory at RMIT University, Australia, and the University of California at Santa Barbara (UC Santa Barbara). From a May 12, 2015 news item on ScienceDaily,

RMIT University researchers have mimicked the way the human brain processes information with the development of an electronic long-term memory cell.

Researchers at the MicroNano Research Facility (MNRF) have built one of the world’s first electronic multi-state memory cells, which mirrors the brain’s ability to simultaneously process and store multiple strands of information.

The development brings them closer to imitating key electronic aspects of the human brain — a vital step towards creating a bionic brain — which could help unlock successful treatments for common neurological conditions such as Alzheimer’s and Parkinson’s diseases.

A May 11, 2015 RMIT University news release, which originated the news item, reveals more about the researchers’ excitement and about the research,

“This is the closest we have come to creating a brain-like system with memory that learns and stores analog information and is quick at retrieving this stored information,” Dr Sharath Sriram said.

“The human brain is an extremely complex analog computer… its evolution is based on its previous experiences, and up until now this functionality has not been able to be adequately reproduced with digital technology.”

The ability to create highly dense and ultra-fast analog memory cells paves the way for imitating highly sophisticated biological neural networks, he said.

The research builds on RMIT’s previous discovery where ultra-fast nano-scale memories were developed using a functional oxide material in the form of an ultra-thin film – 10,000 times thinner than a human hair.

Dr Hussein Nili, lead author of the study, said: “This new discovery is significant as it allows the multi-state cell to store and process information in the very same way that the brain does.

“Think of an old camera which could only take pictures in black and white. The same analogy applies here: rather than just black and white memories, we now have memories in full color with shade, light and texture. It is a major step.”

While these new devices are able to store much more information than conventional digital memories (which store just 0s and 1s), it is their brain-like ability to remember and retain previous information that is exciting.
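
To put the ‘more than 0s and 1s’ claim in concrete terms, here is a back-of-envelope Python snippet. The level counts are my own illustrative assumptions rather than figures from the RMIT/UCSB paper; the point is simply that every doubling of distinguishable resistance levels buys one extra bit per cell.

```python
import math

# Hedged arithmetic: bits per cell as a function of distinguishable resistance
# levels. The level counts are illustrative assumptions, not measured values
# from the RMIT/UCSB devices.
for levels in (2, 4, 16, 64):   # 2 levels = a conventional binary cell
    print(f"{levels:2d} resistance levels -> {math.log2(levels):.0f} bits per cell")
```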

“We have now introduced controlled faults or defects in the oxide material along with the addition of metallic atoms, which unleashes the full potential of the ‘memristive’ effect – where the memory element’s behaviour is dependent on its past experiences,” Dr Nili said.

Nano-scale memories are precursors to the storage components of the complex artificial intelligence network needed to develop a bionic brain.

Dr Nili said the research had myriad practical applications including the potential for scientists to replicate the human brain outside of the body.

“If you could replicate a brain outside the body, it would minimise ethical issues involved in treating and experimenting on the brain which can lead to better understanding of neurological conditions,” Dr Nili said.

The research, supported by the Australian Research Council, was conducted in collaboration with the University of California Santa Barbara.

Here’s a link to and a citation for this memristive nano device,

Donor-Induced Performance Tuning of Amorphous SrTiO3 Memristive Nanodevices: Multistate Resistive Switching and Mechanical Tunability by  Hussein Nili, Sumeet Walia, Ahmad Esmaielzadeh Kandjani, Rajesh Ramanathan, Philipp Gutruf, Taimur Ahmed, Sivacarendran Balendhran, Vipul Bansal, Dmitri B. Strukov, Omid Kavehei, Madhu Bhaskaran, and Sharath Sriram. Advanced Functional Materials DOI: 10.1002/adfm.201501019 Article first published online: 14 APR 2015


This paper is behind a paywall.

The second published piece of memristor-related research comes from a UC Santa Barbara and  Stony Brook University (New York state) team but is being publicized by UC Santa Barbara. From a May 11, 2015 news item on Nanowerk (Note: A link has been removed),

In what marks a significant step forward for artificial intelligence, researchers at UC Santa Barbara have demonstrated the functionality of a simple artificial neural circuit (Nature, “Training and operation of an integrated neuromorphic network based on metal-oxide memristors”). For the first time, a circuit of about 100 artificial synapses was proved to perform a simple version of a typical human task: image classification.

A May 11, 2015 UC Santa Barbara news release (also on EurekAlert) by Sonia Fernandez, which originated the news item, situates this development within the ‘artificial brain’ effort while describing it in more detail (Note: A link has been removed),

“It’s a small, but important step,” said Dmitri Strukov, a professor of electrical and computer engineering. With time and further progress, the circuitry may eventually be expanded and scaled to approach something like the human brain’s, which has 10^15 (one quadrillion) synaptic connections.

For all its errors and potential for faultiness, the human brain remains a model of computational power and efficiency for engineers like Strukov and his colleagues, Mirko Prezioso, Farnood Merrikh-Bayat, Brian Hoskins and Gina Adam. That’s because the brain can accomplish in a fraction of a second certain functions that computers would require far more time and energy to perform.

… As you read this, your brain is making countless split-second decisions about the letters and symbols you see, classifying their shapes and relative positions to each other and deriving different levels of meaning through many channels of context, in as little time as it takes you to scan over this print. Change the font, or even the orientation of the letters, and it’s likely you would still be able to read this and derive the same meaning.

In the researchers’ demonstration, the circuit implementing the rudimentary artificial neural network was able to successfully classify three letters (“z”, “v” and “n”) by their images, each letter stylized in different ways or saturated with “noise”. In a process similar to how we humans pick our friends out from a crowd, or find the right key from a ring of similar keys, the simple neural circuitry was able to correctly classify the simple images.
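
If you want a feel for how small a network can handle this kind of task, here is a minimal software analogue in Python. To be clear, this is my own sketch: the 3x3 pixel patterns, the noise level and the delta-rule training are all illustrative assumptions, and the real UCSB circuit stores its weights as memristor conductances in hardware rather than as floating-point numbers.

```python
import numpy as np

# Hedged sketch: a tiny software perceptron doing the same kind of task as the
# UCSB memristor crossbar, classifying noisy images of "z", "v" and "n".
# The 3x3 patterns, noise level and training rule are illustrative assumptions;
# the hardware network encodes its weights as memristor conductances instead.
patterns = {
    "z": [1, 1, 1, 0, 1, 0, 1, 1, 1],
    "v": [1, 0, 1, 1, 0, 1, 0, 1, 0],
    "n": [1, 0, 1, 1, 1, 1, 1, 0, 1],
}
labels = list(patterns)
X = np.array([patterns[c] for c in labels], dtype=float)
Y = np.eye(3)                                   # one-hot targets

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(9, 3))          # 9 pixels x 3 outputs = 27 "synapses"

for _ in range(200):                            # delta-rule training on noisy copies
    noisy = X + rng.normal(scale=0.2, size=X.shape)
    out = noisy @ W
    W += 0.05 * noisy.T @ (Y - out)

test = X + rng.normal(scale=0.2, size=X.shape)
predicted = [labels[i] for i in (test @ W).argmax(axis=1)]
print("expected:", labels, "classified as:", predicted)
```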

“While the circuit was very small compared to practical networks, it is big enough to prove the concept of practicality,” said Merrikh-Bayat. According to Gina Adam, as interest grows in the technology, so will research momentum.

“And, as more solutions to the technological challenges are proposed the technology will be able to make it to the market sooner,” she said.

Key to this technology is the memristor (a combination of “memory” and “resistor”), an electronic component whose resistance changes depending on the direction of the flow of the electrical charge. Unlike conventional transistors, which rely on the drift and diffusion of electrons and their holes through semiconducting material, memristor operation is based on ionic movement, similar to the way human neural cells generate neural electrical signals.

“The memory state is stored as a specific concentration profile of defects that can be moved back and forth within the memristor,” said Strukov. The ionic memory mechanism brings several advantages over purely electron-based memories, which makes it very attractive for artificial neural network implementation, he added.

“For example, many different configurations of ionic profiles result in a continuum of memory states and hence analog memory functionality,” he said. “Ions are also much heavier than electrons and do not tunnel easily, which permits aggressive scaling of memristors without sacrificing analog properties.”
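
Since memristors come up here so often, here is a small Python sketch of the behaviour Strukov is describing, using the classic HP Labs linear ion-drift model (Strukov et al., Nature 2008) as a stand-in. The parameters and the slow sinusoidal drive are my own illustrative assumptions, not values from either the UCSB or the RMIT devices; the point is just that the resistance at any moment depends on the charge that has already flowed through the element.

```python
import numpy as np

# Hedged sketch: the HP Labs linear ion-drift memristor model, used only as a
# generic illustration of resistance that depends on charge history. All
# parameter values and the drive waveform are illustrative assumptions.
Ron, Roff = 100.0, 16e3        # low / high resistance limits (ohms)
D, mu = 10e-9, 1e-14           # film thickness (m), dopant mobility (m^2 V^-1 s^-1)
x, dt = 0.1, 1e-4              # normalised doped-region width, time step (s)

t = np.arange(0.0, 2.0, dt)
v = 2.0 * np.sin(2 * np.pi * 1.0 * t)    # slow 1 Hz, 2 V sinusoidal drive
memristance = []

for vk in v:
    M = Ron * x + Roff * (1.0 - x)       # instantaneous memristance
    i = vk / M
    x += mu * Ron / D**2 * i * dt        # state variable drifts with the charge passed
    x = min(max(x, 0.0), 1.0)
    memristance.append(M)

print(f"memristance ranged from {min(memristance):.0f} to {max(memristance):.0f} ohms")
```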

This is where analog memory trumps digital memory: In order to create the same human brain-type functionality with conventional technology, the resulting device would have to be enormous — loaded with multitudes of transistors that would require far more energy.

“Classical computers will always find an ineluctable limit to efficient brain-like computation in their very architecture,” said lead researcher Prezioso. “This memristor-based technology relies on a completely different way inspired by biological brain to carry on computation.”

To be able to approach functionality of the human brain, however, many more memristors would be required to build more complex neural networks to do the same kinds of things we can do with barely any effort and energy, such as identify different versions of the same thing or infer the presence or identity of an object not based on the object itself but on other things in a scene.

Potential applications already exist for this emerging technology, such as medical imaging, the improvement of navigation systems or even for searches based on images rather than on text. The energy-efficient compact circuitry the researchers are striving to create would also go a long way toward creating the kind of high-performance computers and memory storage devices users will continue to seek long after the proliferation of digital transistors predicted by Moore’s Law becomes too unwieldy for conventional electronics.

Here’s a link to and a citation for the paper,

Training and operation of an integrated neuromorphic network based on metal-oxide memristors by M. Prezioso, F. Merrikh-Bayat, B. D. Hoskins, G. C. Adam, K. K. Likharev, & D. B. Strukov. Nature 521, 61–64 (07 May 2015) doi:10.1038/nature14441

This paper is behind a paywall but a free preview is available through ReadCube Access.

The third and last piece of research, which is from Rice University, hasn’t received any publicity yet, which is unusual given Rice’s very active communications/media department. Here’s a link to and a citation for their memristor paper,

2D materials: Memristor goes two-dimensional by Jiangtan Yuan & Jun Lou. Nature Nanotechnology 10, 389–390 (2015) doi:10.1038/nnano.2015.94 Published online 07 May 2015

This paper is behind a paywall but a free preview is available through ReadCube Access.

Dexter Johnson has written up the RMIT research (his May 14, 2015 post on the Nanoclast blog on the IEEE [Institute of Electrical and Electronics Engineers] website). He linked it to research from Mark Hersam’s team at Northwestern University (my April 10, 2015 posting) on creating a three-terminal memristor enabling its use in complex electronics systems. Dexter strongly hints in his headline that these developments could lead to bionic brains.

For those who’d like more memristor information, this June 26, 2014 posting which brings together some developments at the University of Michigan and information about developments in the industrial sector is my suggestion for a starting point. Also, you may want to check out my material on HP Labs, especially prominent in the story due to the company’s 2008 ‘discovery’ of the memristor, described on a page in my Nanotech Mysteries wiki, and the controversy triggered by the company’s terminology (there’s more about the controversy in my April 7, 2010 interview with Forrest H Bennett III).

Microbattery from the University of Illinois

Caption: This is an image of the holographically patterned microbattery.
Credit: University of Illinois

Hard to believe that’s a battery but the researchers at the University of Illinois assure us it is so, according to a May 11, 2015 news item on Nanowerk,

By combining 3D holographic lithography and 2D photolithography, researchers from the University of Illinois at Urbana-Champaign have demonstrated a high-performance 3D microbattery suitable for large-scale on-chip integration with microelectronic devices.

“This 3D microbattery has exceptional performance and scalability, and we think it will be of importance for many applications,” explained Paul Braun, a professor of materials science and engineering at Illinois. “Micro-scale devices typically utilize power supplied off-chip because of difficulties in miniaturizing energy storage technologies. A miniaturized high-energy and high-power on-chip battery would be highly desirable for applications including autonomous microscale actuators, distributed wireless sensors and transmitters, monitors, and portable and implantable medical devices.”

A May 11, 2015 University of Illinois news release on EurekAlert, which originated the news item, provides some insight into and detail about the research,

“Due to the complexity of 3D electrodes, it is generally difficult to realize such batteries, let alone the possibility of on-chip integration and scaling. In this project, we developed an effective method to make high-performance 3D lithium-ion microbatteries using processes that are highly compatible with the fabrication of microelectronics,” stated Hailong Ning, a graduate student in the Department of Materials Science and Engineering and first author of the article, “Holographic Patterning of High Performance on-chip 3D Lithium-ion Microbatteries,” appearing in Proceedings of the National Academy of Sciences.

“We utilized 3D holographic lithography to define the interior structure of electrodes and 2D photolithography to create the desired electrode shape,” Ning added. “This work merges important concepts in fabrication, characterization, and modeling, showing that the energy and power of the microbattery are strongly related to the structural parameters of the electrodes such as size, shape, surface area, porosity, and tortuosity. A significant strength of this new method is that these parameters can be easily controlled during lithography steps, which offers unique flexibility for designing next-generation on-chip energy storage devices.”

Enabled by a 3D holographic patterning technique, in which multiple optical beams interfere inside the photoresist to create the desired 3D structure, the battery possesses well-defined, periodically structured porous electrodes that facilitate the fast transport of electrons and ions inside the battery, offering supercapacitor-like power.
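
For anyone curious about what ‘multiple optical beams interfere inside the photoresist’ means in practice, here is a small Python sketch of interference lithography. The wavelength, beam angles and exposure threshold are my own illustrative assumptions and have nothing to do with the Illinois group’s actual optical setup; the sketch only shows how a handful of overlapping plane waves produces a periodic intensity pattern that can be transferred into a resist.

```python
import numpy as np

# Hedged illustration of interference (holographic) lithography: a few coherent
# plane waves overlap, their summed field gives a periodic intensity pattern,
# and resist regions above an exposure threshold survive development.
# Wavelength, beam angles and threshold are illustrative assumptions only.
wavelength = 500e-9                       # assumed exposure wavelength (m)
k0 = 2 * np.pi / wavelength

angles = np.radians([15, -15, 45, -45])   # a 2D cut through a four-beam geometry
kvecs = [k0 * np.array([np.sin(a), np.cos(a)]) for a in angles]

xs = np.linspace(0, 2e-6, 200)            # 2 micrometre x 2 micrometre window
X, Z = np.meshgrid(xs, xs)

field = sum(np.exp(1j * (k[0] * X + k[1] * Z)) for k in kvecs)
intensity = np.abs(field) ** 2
exposed = intensity > 0.5 * intensity.max()   # crude exposure threshold

print(f"fraction of the resist window above threshold: {exposed.mean():.2f}")
```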

“Although accurate control on the interfering optical beams is required to construct 3D holographic lithography, recent advances have significantly simplified the required optics, enabling creation of structures via a single incident beam and standard photoresist processing. This makes it highly scalable and compatible with microfabrication,” stated John Rogers, a professor of materials science and engineering, who has worked with Braun and his team to develop the technology.

“Micro-engineered battery architectures, combined with high energy material such as tin, offer exciting new battery features including high energy capacity and good cycle lives, which provide the ability to power practical devices,” stated William King, a professor of mechanical science and engineering, who is a co-author of this work.

Here’s a link to and a citation for the paper,

Holographic patterning of high-performance on-chip 3D lithium-ion microbatteries by Hailong Ning, James H. Pikul, Runyu Zhang, Xuejiao Li, Sheng Xu, Junjie Wang, John A. Rogers, William P. King, and Paul V. Braun. PNAS doi: 10.1073/pnas.1423889112

This paper is behind a paywall.

Measuring a singular spin of a biological molecule

I gather there are some Swiss scientists excited about showing that the nuclear spin of a single biological molecule can be detected at room temperature. From a May 11, 2015 news item on Nanowerk (Note: A link has been removed),

Physicists of the University of Basel and the Swiss Nanoscience Institute were able to show for the first time that the nuclear spins of single molecules can be detected with the help of magnetic particles at room temperature.

In Nature Nanotechnology (“High-efficiency resonant amplification of weak magnetic fields for single spin magnetometry at room temperature”), the researchers describe a novel experimental setup with which the tiny magnetic fields of the nuclear spins of single biomolecules – undetectable so far – could be registered for the first time. The proposed concept would improve medical diagnostics as well as analyses of biological and chemical samples in a decisive step forward.

A May 11, 2015 University of Basel press release, which originated the news item, explains why the researchers are excited about a ‘room temperature’ approach to measuring a nuclear spin,

The measurement of nuclear spins is routine by now in medical diagnostics (MRI). However, the currently existing devices need billions of atoms for the analysis and thus are not useful for many small-scale applications. Over many decades, scientists worldwide have thus engaged in an intense search for alternative methods, which would improve the sensitivity of the measurement techniques.

With the help of various types of sensors (SQUID- and Hall-sensors) and with magnetic resonance force microscopes, it has become possible to detect spins of single electrons and achieve structural resolution at the nanoscale. However, the detection of single nuclear spins of complex biological samples – the holy grail in the field – has not been possible so far.

Diamond crystals with tiny defects

The researchers from Basel now investigate the application of sensors made out of diamonds that host tiny defects in their crystal structure. In the crystal lattice of the diamond, a carbon atom is replaced by a nitrogen atom with a vacant site next to it. These so-called nitrogen-vacancy (NV) centers generate spins, which are ideally suited for detection of magnetic fields. Researchers in many labs have already shown experimentally that, at room temperature, such NV centers make resolution of single molecules possible. However, this requires atomistically close distances between sensor and sample, which is not possible for biological material.

A tiny ferromagnetic particle, placed between sample and NV center, can solve this problem. Indeed, if the nuclear spin of the sample is driven at a specific resonance frequency, the resonance of the ferromagnetic particle changes. With the help of an NV center that is in close proximity of the magnetic particle, the scientists can then detect this modified resonance.
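
The phrase ‘driven at a specific resonance frequency’ is just the familiar nuclear magnetic resonance condition. Here is a back-of-envelope Python snippet using the standard proton gyromagnetic ratio; the example field strengths are my own assumptions and have nothing to do with the Basel group’s actual experimental parameters.

```python
# Hedged back-of-envelope: the Larmor (resonance) frequency at which a nuclear
# spin is driven, f = (gamma / 2*pi) * B. Uses the standard proton value; the
# example fields are assumptions, not parameters from the Basel paper.
gamma_proton_hz_per_t = 42.577e6          # proton gyromagnetic ratio / 2*pi

for field_tesla in (0.01, 0.1, 1.0):      # assumed static magnetic fields
    freq_mhz = gamma_proton_hz_per_t * field_tesla / 1e6
    print(f"B = {field_tesla:5.2f} T -> proton resonance ~ {freq_mhz:5.1f} MHz")
```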

Measuring technology breakthrough?

The theoretical analysis and experimental techniques of the researchers in the teams of Prof. Daniel Loss and Prof. Patrick Maletinsky have shown that the use of such ferromagnetic particles can lead to a ten-thousand-fold amplification of the magnetic field of nuclear spins. “I am confident that our concept will soon be implemented in real systems and will lead to a breakthrough in metrology” [science of measurement], comments Daniel Loss on the recent publication, to which the first author, Dr. Luka Trifunovic, a postdoc in the Loss team, made essential contributions and which was performed in collaboration with colleagues from the JARA Institute for Quantum Information (Aachen, Germany) and Harvard University (Cambridge, USA).

Here’s a link to and a citation for the paper,

High-efficiency resonant amplification of weak magnetic fields for single spin magnetometry at room temperature by  Luka Trifunovic, Fabio L. Pedrocchi, Silas Hoffman, Patrick Maletinsky, Amir Yacoby, & Daniel Loss. Nature Nanotechnology (2015) doi:10.1038/nnano.2015.74 Published online 11 May 2015

This paper is behind a paywall.

The Russians diagnose graphene’s quality and spatially image its reactivity

Most of the marvelous things scientists talk about with regard to graphene require a relatively defect-free (perfect) material. From a May 8, 2015 news item on ScienceDaily,

Graphene and related 2D materials are anticipated to become the compounds of the century. It is not surprising — graphene is extremely thin and strong, as well as possesses outstanding electrical and thermal characteristics. The impact of material with such unique properties may be really impressive. Scientists [foresee] the imminent appearance of novel biomedical applications, new generation of smart materials, highly efficient light conversion and photocatalysis reinforced by graphene. However, the stumbling block is that many unique properties and capabilities are related to only perfect graphene with controlled number of defects. [emphasis mine] However, in reality ideal defect-free graphene surface is difficult to prepare and defects may have various sizes and shapes. In addition, dynamic behaviour and fluctuations make the defects difficult to locate. The process of scanning of large areas of graphene sheets in order to find out defect locations and to estimate the quality of the material is a time-consuming task. Let alone a lack of simple direct methods to capture and visualize defects on the carbon surface.

A May 8, 2015 Institute of Organic Chemistry (Russian Academy of Sciences) news release on EurekAlert, which originated the news item, offers more detail about the new technique for determining graphene quality and imaging carbon reactivity centres,

[A] [j]oint research project carried out by Ananikov and co-workers revealed [a] specific contrast agent — soluble palladium complex — that selectively attaches to defect areas on the surface of carbon materials. Pd attachment leads to formation of nanoparti[cl]es, which can be easily detected using a routine electron microscope. The more reactive the carbon center is, the stronger is the binding of contrast agent in the imaging procedure. Thus, reactivity centers and defect sites on a carbon surface were mapped in three-dimensional space with high resolution and excellent contrast using a handy nanoscale imaging procedure. The developed procedure distinguished carbon defects not only due to difference in their morphology, but also due to varying chemical reactivity. Therefore, this imaging approach enables the chemical reactivity to be visualized with spatial resolution.

Mapping carbon reactivity centers with “Pd markers” gave unique insight into the reactivity of the graphene layers. As revealed in the study, more than 2000 reactive centers can be located per 1 μm2 of the surface area of regular carbon material. The study pointed out the spatial complexity of the carbon material at the nanoscale. Mapping of surface defect density showed substantial gradients and variations across the surface area, which can possess a kind of organized structures of defects.
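
It’s worth pausing on that 2000-per-square-micrometre figure. A quick bit of Python arithmetic (mine, not the authors’) converts it into a surface density and an average spacing, which helps explain why mapping the defects one by one is such a slog.

```python
import math

# Hedged arithmetic on the single figure quoted above: roughly 2000 reactive
# centres per square micrometre of carbon surface. Everything else is derived.
density_per_um2 = 2000
density_per_cm2 = density_per_um2 * 1e8              # 1 cm^2 = 1e8 um^2
mean_spacing_nm = math.sqrt(1e6 / density_per_um2)   # 1 um^2 = 1e6 nm^2

print(f"~{density_per_cm2:.0e} reactive centres per cm^2, "
      f"average spacing ~ {mean_spacing_nm:.0f} nm")
```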

Medical application of imaging (tomography) for diagnostics, including the usage of contrast agents for more accuracy and easier observation, has proven its utility for many years. The present study highlights a new possibility in tomography applications to run diagnostics of materials at atomic scale.

Here’s a link to and a citation for the paper,

Spatial imaging of carbon reactivity centers in Pd/C catalytic systems by E. O. Pentsak, A. S. Kashin, M. V. Polynski, K. O. Kvashnina, P. Glatzel, and V. P. Ananikov.  Chem. Sci., 2015, Advance Article DOI: 10.1039/C5SC00802F
First published online 08 May 2015

This paper is open access.

Magnetospinning with an inexpensive magnet

The inexpensive magnet mentioned in the headline isn’t followed up until the penultimate paragraph of a May 11, 2015 Nanowerk spotlight article by Michael Berger, but it is worth the wait,

“Our method for spinning of continuous micro- and nanofibers uses a permanent revolving magnet,” Alexander Tokarev, Ph.D., a Research Associate in the Nanostructured Materials Laboratory at the University of Georgia, tells Nanowerk. “This fabrication technique utilizes magnetic forces and hydrodynamic features of stretched threads to produce fine nanofibers.”

“The new method provides excellent control over the fiber diameter and is compatible with a range of polymeric materials and polymer composite materials including biopolymers,” notes Tokarev. “Our research showcases this new technique and demonstrates its advantages to the scientific community.”

Electrospinning is the most popular method to produce nanofibers in labs now. Owing to its simplicity and low costs, a magnetospinning set-up could be installed in any non-specialized laboratory for broader use of magnetospun nanofibers in different methods and technologies. The total cost of a laboratory electrospinning system is above $10,000. In contrast, no special equipment is needed for magnetospinning. It is possible to build a magnetospinning set-up, such as the University of Georgia team utilizes, by just using a $30 rotating motor and a $5 permanent magnet. [emphasis mine]

Berger’s article references a recent paper published by the team,

Magnetospinning of Nano- and Microfibers by Alexander Tokarev, Oleksandr Trotsenko, Ian M. Griffiths, Howard A. Stone, and Sergiy Minko. Advanced Materials, first published 8 May 2015. DOI: 10.1002/adma.201500374

This paper is behind a paywall.

* The headline originally stated that a ‘fridge’ magnet was used. Researcher Alexander Tokarev kindly dropped by to correct this misunderstanding on my part and the headline was changed to read ‘inexpensive magnet’ on May 14, 2015 at approximately 1400 hours PDT.

CRISPR genome editing tools and human genetic engineering issues

This post is going to feature a human genetic engineering roundup of sorts.

First, the field of genetic engineering encompasses more than the human genome, as this paper (open access until June 5, 2015) notes in the context of a discussion about a specific CRISPR gene editing tool,

CRISPR-Cas9 Based Genome Engineering: Opportunities in Agri-Food-Nutrition and Healthcare by Rajendran Subin Raj Cheri Kunnumal, Yau Yuan-Yeu, Pandey Dinesh, and Kumar Anil. OMICS: A Journal of Integrative Biology. May 2015, 19(5): 261-275. doi:10.1089/omi.2015.0023 Published Online Ahead of Print: April 14, 2015

Here’s more about the paper from a May 7, 2015 Mary Ann Liebert publisher news release on EurekAlert,

Researchers have customized and refined a technique derived from the immune system of bacteria to develop the CRISPR-Cas9 genome engineering system, which enables targeted modifications to the genes of virtually any organism. The discovery and development of CRISPR-Cas9 technology, its wide range of potential applications in the agriculture/food industry and in modern medicine, and emerging regulatory issues are explored in a Review article published in OMICS: A Journal of Integrative Biology, …

“CRISPR-Cas9 Based Genome Engineering: Opportunities in Agri-Food-Nutrition and Healthcare” provides a detailed description of the CRISPR system and its applications in post-genomics biology. Subin Raj Cheri Kunnumal Rajendran, Dinesh Pandey, and Anil Kumar, G.B. Pant University of Agriculture and Technology (Uttarakhand, India) and Yuan-Yeu Yau, Northeastern State University (Broken Arrow, OK) describe the advantages of the RNA-guided Cas9 endonuclease-based technology, including the activity, specificity, and target range of the enzyme. The authors discuss the rapidly expanding uses of the CRISPR system in both basic biological research and product development, such as for crop improvement and the discovery of novel therapeutic agents. The regulatory implications of applying CRISPR-based genome editing to agricultural products are an evolving issue awaiting guidance by international regulatory agencies.

“CRISPR-Cas9 technology has triggered a revolution in genome engineering within living systems,” says OMICS Editor-in-Chief Vural Özdemir, MD, PhD, DABCP. “This article explains the varied applications and potentials of this technology from agriculture to nutrition to medicine.”

Intellectual property (patents)

The CRISPR technology has spawned a number of intellectual property (patent) issues, as a Dec. 21, 2014 post by Glyn Moody on Techdirt stated,

Although not many outside the world of the biological sciences have heard of it yet, the CRISPR gene editing technique may turn out to be one of the most important discoveries of recent years — if patent battles don’t ruin it. Technology Review describes it as:

… an invention that may be the most important new genetic engineering technique since the beginning of the biotechnology age in the 1970s. The CRISPR system, dubbed a “search and replace function” for DNA, lets scientists easily disable genes or change their function by replacing DNA letters. During the last few months, scientists have shown that it’s possible to use CRISPR to rid mice of muscular dystrophy, cure them of a rare liver disease, make human cells immune to HIV, and genetically modify monkeys.

Unfortunately, rivalry between scientists claiming the credit for key parts of CRISPR threatens to spill over into patent litigation:

[A researcher at the MIT-Harvard Broad Institute, Feng] Zhang cofounded Editas Medicine, and this week the startup announced that it had licensed his patent from the Broad Institute. But Editas doesn’t have CRISPR sewn up. That’s because [Jennifer] Doudna, a structural biologist at the University of California, Berkeley, was a cofounder of Editas, too. And since Zhang’s patent came out, she’s broken off with the company, and her intellectual property — in the form of her own pending patent — has been licensed to Intellia, a competing startup unveiled only last month. Making matters still more complicated, [another CRISPR researcher, Emmanuelle] Charpentier sold her own rights in the same patent application to CRISPR Therapeutics.

Things are moving quickly on the patent front, not least because the Broad Institute paid extra to speed up its application, conscious of the high stakes at play here:

Along with the patent came more than 1,000 pages of documents. According to Zhang, Doudna’s predictions in her own earlier patent application that her discovery would work in humans was “mere conjecture” and that, instead, he was the first to show it, in a separate and “surprising” act of invention.

The patent documents have caused consternation. The scientific literature shows that several scientists managed to get CRISPR to work in human cells. In fact, its easy reproducibility in different organisms is the technology’s most exciting hallmark. That would suggest that, in patent terms, it was “obvious” that CRISPR would work in human cells, and that Zhang’s invention might not be worthy of its own patent.

….

Ethical and moral issues

The CRISPR technology has reignited a discussion about the ethical and moral issues of human genetic engineering, some of which are reviewed in an April 7, 2015 posting about a moratorium by Sheila Jasanoff, J. Benjamin Hurlbut and Krishanu Saha for the Guardian science blogs (Note: A link has been removed),

On April 3, 2015, a group of prominent biologists and ethicists writing in Science called for a moratorium on germline gene engineering: modifications to the human genome that will be passed on to future generations. The moratorium would apply to a technology called CRISPR/Cas9, which enables the removal of undesirable genes, insertion of desirable ones, and the broad recoding of nearly any DNA sequence.

Such modifications could affect every cell in an adult human being, including germ cells, and therefore be passed down through the generations. Many organisms across the range of biological complexity have already been edited in this way to generate designer bacteria, plants and primates. There is little reason to believe the same could not be done with human eggs, sperm and embryos. Now that the technology to engineer human germlines is here, the advocates for a moratorium declared, it is time to chart a prudent path forward. They recommend four actions: a hold on clinical applications; creation of expert forums; transparent research; and a globally representative group to recommend policy approaches.

The authors go on to review precedents and reasons for the moratorium while suggesting we need better ways for citizens to engage with and debate these issues,

An effective moratorium must be grounded in the principle that the power to modify the human genome demands serious engagement not only from scientists and ethicists but from all citizens. We need a more complex architecture for public deliberation, built on the recognition that we, as citizens, have a duty to participate in shaping our biotechnological futures, just as governments have a duty to empower us to participate in that process. Decisions such as whether or not to edit human genes should not be left to elite and invisible experts, whether in universities, ad hoc commissions, or parliamentary advisory committees. Nor should public deliberation be temporally limited by the span of a moratorium or narrowed to topics that experts deem reasonable to debate.

I recommend reading the post in its entirety as there are nuances that are best appreciated in the entirety of the piece.

Shortly after this essay was published, Chinese scientists announced they had genetically modified (nonviable) human embryos. From an April 22, 2015 article by David Cyranoski and Sara Reardon in Nature where the research and some of the ethical issues are discussed,

In a world first, Chinese scientists have reported editing the genomes of human embryos. The results are published in the online journal Protein & Cell and confirm widespread rumours that such experiments had been conducted — rumours that sparked a high-profile debate last month about the ethical implications of such work.

In the paper, researchers led by Junjiu Huang, a gene-function researcher at Sun Yat-sen University in Guangzhou, tried to head off such concerns by using ‘non-viable’ embryos, which cannot result in a live birth, that were obtained from local fertility clinics. The team attempted to modify the gene responsible for β-thalassaemia, a potentially fatal blood disorder, using a gene-editing technique known as CRISPR/Cas9. The researchers say that their results reveal serious obstacles to using the method in medical applications.

“I believe this is the first report of CRISPR/Cas9 applied to human pre-implantation embryos and as such the study is a landmark, as well as a cautionary tale,” says George Daley, a stem-cell biologist at Harvard Medical School in Boston, Massachusetts. “Their study should be a stern warning to any practitioner who thinks the technology is ready for testing to eradicate disease genes.”

….

Huang says that the paper was rejected by Nature and Science, in part because of ethical objections; both journals declined to comment on the claim. (Nature’s news team is editorially independent of its research editorial team.)

He adds that critics of the paper have noted that the low efficiencies and high number of off-target mutations could be specific to the abnormal embryos used in the study. Huang acknowledges the critique, but because there are no examples of gene editing in normal embryos he says that there is no way to know if the technique operates differently in them.

Still, he maintains that the embryos allow for a more meaningful model — and one closer to a normal human embryo — than an animal model or one using adult human cells. “We wanted to show our data to the world so people know what really happened with this model, rather than just talking about what would happen without data,” he says.

This, too, is a good and thoughtful read.

There was an official response in the US to the publication of this research, from an April 29, 2015 post by David Bruggeman on his Pasco Phronesis blog (Note: Links have been removed),

In light of Chinese researchers reporting their efforts to edit the genes of ‘non-viable’ human embryos, the National Institutes of Health (NIH) Director Francis Collins issued a statement (H/T Carl Zimmer).

“NIH will not fund any use of gene-editing technologies in human embryos. The concept of altering the human germline in embryos for clinical purposes has been debated over many years from many different perspectives, and has been viewed almost universally as a line that should not be crossed. Advances in technology have given us an elegant new way of carrying out genome editing, but the strong arguments against engaging in this activity remain. These include the serious and unquantifiable safety issues, ethical issues presented by altering the germline in a way that affects the next generation without their consent, and a current lack of compelling medical applications justifying the use of CRISPR/Cas9 in embryos.” …

More than CRISPR

As well, following on the April 22, 2015 Nature article about the controversial research, the Guardian published an April 26, 2015 post by Filippa Lentzos, Koos van der Bruggen and Kathryn Nixdorff which makes the case that CRISPR techniques are not the only worrisome genetic engineering technology,

The genome-editing technique CRISPR-Cas9 is the latest in a series of technologies to hit the headlines. This week Chinese scientists used the technology to genetically modify human embryos – the news coming less than a month after a prominent group of scientists had called for a moratorium on the technology. The use of ‘gene drives’ to alter the genetic composition of whole populations of insects and other life forms has also raised significant concern.

But the technology posing the greatest, most immediate threat to humanity comes from ‘gain-of-function’ (GOF) experiments. This technology adds new properties to biological agents such as viruses, allowing them to jump to new species or making them more transmissible. While these are not new concepts, there is grave concern about a subset of experiments on influenza and SARS viruses which could metamorphose them into pandemic pathogens with catastrophic potential.

In October 2014 the US government stepped in, imposing a federal funding pause on the most dangerous GOF experiments and announcing a year-long deliberative process. Yet, this process has not been without its teething-problems. Foremost is the de facto lack of transparency and open discussion. Genuine engagement is essential in the GOF debate where the stakes for public health and safety are unusually high, and the benefits seem marginal at best, or non-existent at worst. …

Particularly worrisome about the GOF process is that it is exceedingly US-centric and lacks engagement with the international community. Microbes know no borders. The rest of the world has a huge stake in the regulation and oversight of GOF experiments.

Canadian perspective?

I became somewhat curious about the Canadian perspective on all this genome engineering discussion and found only one Canadian blog piece, which focuses on agricultural issues. The April 30, 2015 posting by Lisa Willemse on Genome Alberta’s Livestock blog has a twist in the final paragraph,

The spectre of undesirable inherited traits as a result of DNA disruption via genome editing in human germline has placed the technique – and the ethical debate – on the front page of newspapers around the globe. Calls for a moratorium on further research until both the ethical implications can be worked out and the procedure better refined and understood, will undoubtedly temper research activities in many labs for months and years to come.

On the surface, it’s hard to see how any of this will advance similar research in livestock or crops – at least initially.

Groups already wary of so-called “frankenfoods” may step up efforts to prevent genome-edited food products from hitting supermarket shelves. In the EU, where a stringent ban on genetically-modified (GM) foods is already in place, there are concerns that genome-edited foods will be captured under this rubric, holding back many perceived benefits. This includes pork and beef from animals with disease resistance, lower methane emissions and improved feed-to-food ratios, milk from higher-yield or hornless cattle, as well as food and feed crops with better, higher quality yields or weed resistance.

Still, at the heart of the human germline editing is the notion of a permanent genetic change that can be passed on to offspring, leading to concerns of designer babies and other advantages afforded only to those who can pay. This is far less of a concern in genome-editing involving crops and livestock, where the overriding aim is to increase food supply for the world’s population at lower cost. Given this, and that research for human medical benefits has always relied on safety testing and data accumulation through experimentation in non-human animals, it’s more likely that any moratorium in human studies will place increased pressure to demonstrate long-term safety of such techniques on those who are conducting the work in other species.

Willemse’s last paragraph offers a strong contrast to the Guardian and Nature pieces.

Finally, there’s a May 8, 2015 posting (which seems to be an automated summary of an article in the New Scientist) on a blog maintained by the Canadian Raelian Movement. These are people who believe that alien scientists landed on earth and created all the forms of life on this planet. You can find more on their About page. In case it needs to be said, I do not subscribe to this belief system, but I do find it interesting in and of itself and because it is one of the few Canadian sites I could find offering an opinion on the matter, even if that opinion comes in the form of a borrowed piece from the New Scientist.

Animal-based (some of it ‘fishy’) sunscreen from Oregon State University

In the Northern Hemisphere countries it’s time to consider one’s sunscreen options. While this Oregon State University research into animal-based sunscreens is intriguing, market-ready options likely won’t be available for quite some time. (There is a second piece of related research, more ‘fishy’ in nature [pun], featured later in this post.) From a May 12, 2015 Oregon State University news release,

Researchers have discovered why many animal species can spend their whole lives outdoors with no apparent concern about high levels of solar exposure: they make their own sunscreen.

The findings, published today in the journal eLife by scientists from Oregon State University, found that many fish, amphibians, reptiles, and birds can naturally produce a compound called gadusol, which among other biologic activities provides protection from the ultraviolet, or sun-burning component of sunlight.

The researchers also believe that this ability may have been obtained through some prehistoric, natural genetic engineering.

Here’s an amusing image to illustrate the researchers’ point,

Gadusol is a compound found in some animals that gives natural sun protection.
Courtesy: Oregon State University

The news release goes on to describe gadusol and its believed evolutionary pathway,

The gene that provides the capability to produce gadusol is remarkably similar to one found in algae, which may have transferred it to vertebrate animals – and because it’s so valuable, it’s been retained and passed along for hundreds of millions of years of animal evolution.

“Humans and mammals don’t have the ability to make this compound, but we’ve found that many other animal species do,” said Taifo Mahmud, a professor in the OSU College of Pharmacy, and lead author on the research.

The genetic pathway that allows gadusol production is found in animals ranging from rainbow trout to the American alligator, green sea turtle and a farmyard chicken.

“The ability to make gadusol, which was first discovered in fish eggs, clearly has some evolutionary value to be found in so many species,” Mahmud said. “We know it provides UV-B protection, it makes a pretty good sunscreen. But there may also be roles it plays as an antioxidant, in stress response, embryonic development and other functions.”

In their study, the OSU researchers also found a way to naturally produce gadusol in high volumes using yeast. With continued research, it may be possible to develop gadusol as an ingredient for different types of sunscreen products, cosmetics or pharmaceutical products for humans.

A conceptual possibility, Mahmud said, is that ingestion of gadusol could provide humans a systemic sunscreen, as opposed to a cream or compound that has to be rubbed onto the skin.

The existence of gadusol had been known of in some bacteria, algae and other life forms, but it was believed that vertebrate animals could only obtain it from their diet. The ability to directly synthesize what is essentially a sunscreen may play an important role in animal evolution, and more work is needed to understand the importance of this compound in animal physiology and ecology, the researchers said.
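Neither release puts numbers on the protection gadusol offers, but the basic idea, a dissolved compound absorbing UV-B before it reaches underlying tissue, can be sketched with the Beer-Lambert law. The extinction coefficient, concentration, and path length in this little Python sketch are assumed example values chosen only to show the shape of the calculation; they are not figures from either study.

# Illustrative Beer-Lambert estimate of how a dissolved UV-absorbing compound
# such as gadusol cuts down UV-B. All numbers below are assumed example values.

def transmitted_fraction(extinction_M_cm, concentration_M, path_cm):
    """Fraction of incident light transmitted: T = 10**(-epsilon * c * l)."""
    absorbance = extinction_M_cm * concentration_M * path_cm
    return 10 ** (-absorbance)

extinction = 20_000    # M^-1 cm^-1, assumed order of magnitude for a UV-B absorber
concentration = 0.001  # 1 mM, assumed
path_length = 0.01     # 100 micrometres, assumed thickness of the absorbing layer

t = transmitted_fraction(extinction, concentration, path_length)
print(f"Transmitted UV-B fraction: {t:.0%}")   # about 63% with these assumed values

With these made-up inputs roughly a third of the incident UV-B is absorbed; higher concentrations or stronger absorbers would block correspondingly more.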

Here’s a link to and a citation for the paper,

De novo synthesis of a sunscreen compound in vertebrates by Andrew R Osborn, Khaled H Almabruk, Garrett Holzwarth, Shumpei Asamizu, Jane LaDu, Kelsey M Kean, P Andrew Karplus, Robert L Tanguay, Alan T Bakalinsky, and Taifo Mahmud. eLife 2015;4:e05919 DOI: http://dx.doi.org/10.7554/eLife.05919 Published May 12, 2015

This is an open access paper.

The second piece of related research, also published on May 12, 2015, comes from a pair of scientists at Harvard University. From a May 12, 2015 eLife news release on EurekAlert,

Scientists from Oregon State University [two authors are listed for the ‘zebrafish’ paper and both are from Harvard University] have discovered that fish can produce their own sunscreen. They have copied the method used by fish for potential use in humans.

In the study published in the journal eLife, scientists found that zebrafish are able to produce a chemical called gadusol that protects against UV radiation. They successfully reproduced the method that zebrafish use by expressing the relevant genes in yeast. The findings open the door to large-scale production of gadusol for sunscreen and as an antioxidant in pharmaceuticals.

Gadusol was originally identified in cod roe and has since been discovered in the eyes of the mantis shrimp, sea urchin eggs, sponges, and in the dormant eggs and newly hatched larvae of brine shrimps. It was previously thought that fish can only acquire the chemical through their diet or through a symbiotic relationship with bacteria.

Marine organisms in the upper ocean and on reefs are subject to intense and often unrelenting sunlight. Gadusol and related compounds are of great scientific interest for their ability to protect against DNA damage from UV rays. There is evidence that amphibians, reptiles, and birds can also produce gadusol, while the genetic machinery is lacking in humans and other mammals.

The team were investigating compounds similar to gadusol that are used to treat diabetes and fungal infections. It was believed that the biosynthetic enzyme common to all of them, EEVS, was only present in bacteria. The scientists were surprised to discover that fish and other vertebrates contain similar genes to those that code for EEVS.

Curious about their function in animals, they expressed the zebrafish gene in E. coli and analysis suggested that fish combine EEVS with another protein, whose production may be induced by light, to produce gadusol. To check that this combination is really sufficient, the scientists transferred the genes to yeast and set them to work to see what they would create. This confirmed the production of gadusol. Its successful production in yeast provides a viable route to commercialisation.

As well as providing UV protection, gadusol may also play a role in stress responses, in embryonic development, and as an antioxidant.

Here’s a link to and a citation for the second paper from this loosely affiliated team of Oregon State University and Harvard University researchers,

Biochemistry: Shedding light on sunscreen biosynthesis in zebrafish by Carolyn A Brotherton and Emily P Balskus. eLife 2015;4:e07961 DOI: http://dx.doi.org/10.7554/eLife.07961 Published May 12, 2015

This paper, too, is open access.

One final bit and this is about the journal, eLife, from their news release on EurekAlert,

About eLife

eLife is a unique collaboration between the funders and practitioners of research to improve the way important research is selected, presented, and shared. eLife publishes outstanding works across the life sciences and biomedicine — from basic biological research to applied, translational, and clinical studies. eLife is supported by the Howard Hughes Medical Institute, the Max Planck Society, and the Wellcome Trust. Learn more at elifesciences.org.

It seems this journal is a joint US (Howard Hughes Medical Institute), German (Max Planck Society), and UK (Wellcome Trust) effort.

Sealing graphene’s defects to make a better filtration device

Making a graphene filter that allows water to pass through while screening out salt and/or noxious materials has been more challenging than one might think. According to a May 7, 2015 news item on Nanowerk, graphene filters can be ‘leaky’,

For faster, longer-lasting water filters, some scientists are looking to graphene – thin, strong sheets of carbon – to serve as ultrathin membranes, filtering out contaminants to quickly purify high volumes of water.

Graphene’s unique properties make it a potentially ideal membrane for water filtration or desalination. But there’s been one main drawback to its wider use: Making membranes in one-atom-thick layers of graphene is a meticulous process that can tear the thin material — creating defects through which contaminants can leak.

Now engineers at MIT [Massachusetts Institute of Technology], Oak Ridge National Laboratory, and King Fahd University of Petroleum and Minerals (KFUPM) have devised a process to repair these leaks, filling cracks and plugging holes using a combination of chemical deposition and polymerization techniques. The team then used a process it developed previously to create tiny, uniform pores in the material, small enough to allow only water to pass through.

A May 8, 2015 MIT news release (also on EurekAlert), which originated the news item, expands on the theme,

Combining these two techniques, the researchers were able to engineer a relatively large defect-free graphene membrane — about the size of a penny. The membrane’s size is significant: To be exploited as a filtration membrane, graphene would have to be manufactured at a scale of centimeters, or larger.

In experiments, the researchers pumped water through a graphene membrane treated with both defect-sealing and pore-producing processes, and found that water flowed through at rates comparable to current desalination membranes. The graphene was able to filter out most large-molecule contaminants, such as magnesium sulfate and dextran.

Rohit Karnik, an associate professor of mechanical engineering at MIT, says the group’s results, published in the journal Nano Letters, represent the first success in plugging graphene’s leaks.

“We’ve been able to seal defects, at least on the lab scale, to realize molecular filtration across a macroscopic area of graphene, which has not been possible before,” Karnik says. “If we have better process control, maybe in the future we don’t even need defect sealing. But I think it’s very unlikely that we’ll ever have perfect graphene — there will always be some need to control leakages. These two [techniques] are examples which enable filtration.”

Sean O’Hern, a former graduate research assistant at MIT, is the paper’s first author. Other contributors include MIT graduate student Doojoon Jang, former graduate student Suman Bose, and Professor Jing Kong.

A delicate transfer

“The current types of membranes that can produce freshwater from saltwater are fairly thick, on the order of 200 nanometers,” O’Hern says. “The benefit of a graphene membrane is, instead of being hundreds of nanometers thick, we’re on the order of three angstroms — 600 times thinner than existing membranes. This enables you to have a higher flow rate over the same area.”
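As a quick check on the arithmetic in that quote, here is a minimal Python sketch. The inverse-thickness flow scaling at the end is an illustrative simplification, not a claim from the paper.

# Back-of-the-envelope check on the thickness comparison quoted above.

conventional_thickness_nm = 200   # thickness quoted for current desalination membranes
graphene_thickness_nm = 0.3       # three angstroms, as quoted

ratio = conventional_thickness_nm / graphene_thickness_nm
print(f"Graphene is roughly {ratio:.0f} times thinner")   # ~667; the quote rounds to 600

# If permeance scaled simply as 1/thickness at equal pore density (an assumption),
# the potential flow-rate gain would be of the same order.
print(f"Idealized flow-rate gain: ~{ratio:.0f}x")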

O’Hern and Karnik have been investigating graphene’s potential as a filtration membrane for the past several years. In 2009, the group began fabricating membranes from graphene grown on copper — a metal that supports the growth of graphene across relatively large areas. However, copper is impermeable, requiring the group to transfer the graphene to a porous substrate following fabrication.

However, O’Hern noticed that this transfer process would create tears in graphene. What’s more, he observed intrinsic defects created during the growth process, resulting perhaps from impurities in the original material.

Plugging graphene’s leaks

To plug graphene’s leaks, the team came up with a technique to first tackle the smaller intrinsic defects, then the larger transfer-induced defects. For the intrinsic defects, the researchers used a process called “atomic layer deposition,” placing the graphene membrane in a vacuum chamber, then pulsing in a hafnium-containing chemical that does not normally interact with graphene. However, if the chemical comes in contact with a small opening in graphene, it will tend to stick to that opening, attracted by the area’s higher surface energy.

The team applied several rounds of atomic layer deposition, finding that the deposited hafnium oxide successfully filled in graphene’s nanometer-scale intrinsic defects. However, O’Hern realized that using the same process to fill in much larger holes and tears — on the order of hundreds of nanometers — would require too much time.

Instead, he and his colleagues came up with a second technique to fill in larger defects, using a process called “interfacial polymerization” that is often employed in membrane synthesis. After they filled in graphene’s intrinsic defects, the researchers submerged the membrane at the interface of two solutions: a water bath and an organic solvent that, like oil, does not mix with water.

In the two solutions, the researchers dissolved two different molecules that can react to form nylon. Once O’Hern placed the graphene membrane at the interface of the two solutions, he observed that nylon plugs formed only in tears and holes — regions where the two molecules could come in contact because of tears in the otherwise impermeable graphene — effectively sealing the remaining defects.

Using a technique they developed last year, the researchers then etched tiny, uniform holes in graphene — small enough to let water molecules through, but not larger contaminants. In experiments, the group tested the membrane with water containing several different molecules, including salt, and found that the membrane rejected up to 90 percent of larger molecules. However, it let salt through at a faster rate than water.
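To make the sequence easier to follow, here is a minimal Python sketch of the logic as I read it from the release: small intrinsic defects are plugged by atomic layer deposition, larger transfer-induced tears by interfacial polymerization, and the deliberately etched pores then act as a size cutoff. The size thresholds and example molecule sizes are assumptions for illustration, not values from the Nano Letters paper.

# Illustrative sketch of the two-step defect sealing plus size-exclusion filtering
# described above. Thresholds and sizes are assumed values, not data from the paper.

INTRINSIC_DEFECT_MAX_NM = 10.0   # assumed upper size for growth (intrinsic) defects
ETCHED_PORE_NM = 0.5             # assumed diameter of the deliberately etched pores

def sealing_step(defect_size_nm):
    """Pick the sealing step for a defect of the given size."""
    if defect_size_nm <= INTRINSIC_DEFECT_MAX_NM:
        return "atomic layer deposition (hafnium oxide plug)"
    return "interfacial polymerization (nylon plug)"

def passes_etched_pore(molecule_size_nm):
    """Simple size-exclusion check for the deliberately etched pores."""
    return molecule_size_nm <= ETCHED_PORE_NM

for defect_nm in (1.5, 8.0, 250.0):   # assumed example defect sizes in nm
    print(f"{defect_nm:>6} nm defect -> {sealing_step(defect_nm)}")

for name, size_nm in (("water", 0.3), ("salt ion", 0.4), ("dextran", 5.0)):
    verdict = "passes" if passes_etched_pore(size_nm) else "rejected"
    print(f"{name} ({size_nm} nm) -> {verdict}")

In this toy version water and salt slip through while the large dextran molecule is rejected, which matches the behaviour reported in the experiments.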

The preliminary tests suggest that graphene may be a viable alternative to existing filtration membranes, although Karnik says techniques to seal its defects and control its permeability will need further improvements.

“Water desalination and nanofiltration are big applications where, if things work out and this technology withstands the different demands of real-world tests, it would have a large impact,” Karnik says. “But one could also imagine applications for fine chemical- or biological-sample processing, where these membranes could be useful. And this is the first report of a centimeter-scale graphene membrane that does any kind of molecular filtration. That’s exciting.”

De-en Jiang, an assistant professor of chemistry at the University of California at Riverside, sees the defect-sealing technique as “a great advance toward making graphene filtration a reality.”

“The two-step technique is very smart: sealing the defects while preserving the desired pores for filtration,” says Jiang, who did not contribute to the research. “This would make the scale-up much easier. One can produce a large graphene membrane first, not worrying about the defects, which can be sealed later.”

I have featured graphene and water desalination work from these researchers at MIT before, in a Feb. 27, 2014 posting. Interestingly, there was no mention of problems with defects in the news release highlighting that previous work.

Here’s a link to and a citation for the latest paper,

Nanofiltration across Defect-Sealed Nanoporous Monolayer Graphene by Sean C. O’Hern, Doojoon Jang, Suman Bose, Juan-Carlos Idrobo, Yi Song, Tahar Laoui, Jing Kong, and Rohit Karnik. Nano Lett., Article ASAP DOI: 10.1021/acs.nanolett.5b00456 Publication Date (Web): April 27, 2015

Copyright © 2015 American Chemical Society

This paper is behind a paywall.