Tag Archives: NSF

Announcing the ‘memtransistor’

Yet another advance toward ‘brainlike’ computing (how many times have I written this or a variation thereof in the last 10 years? See: Dexter Johnson’s take on the situation at the end of this post): Northwestern University announced their latest memristor research in a February 21, 2018 news item on Nanowerk,

Computer algorithms might be performing brain-like functions, such as facial recognition and language translation, but the computers themselves have yet to operate like brains.

“Computers have separate processing and memory storage units, whereas the brain uses neurons to perform both functions,” said Northwestern University’s Mark C. Hersam. “Neural networks can achieve complicated computation with significantly lower energy consumption compared to a digital computer.”

A February 21, 2018 Northwestern University news release (also on EurekAlert), which originated the news item, provides more information about the latest work from this team,

In recent years, researchers have searched for ways to make computers more neuromorphic, or brain-like, in order to perform increasingly complicated tasks with high efficiency. Now Hersam, a Walter P. Murphy Professor of Materials Science and Engineering in Northwestern’s McCormick School of Engineering, and his team are bringing the world closer to realizing this goal.

The research team has developed a novel device called a “memtransistor,” which operates much like a neuron by performing both memory and information processing. With combined characteristics of a memristor and transistor, the memtransistor also encompasses multiple terminals that operate more similarly to a neural network.

Supported by the National Institute of Standards and Technology and the National Science Foundation, the research was published online today, February 22 [2018], in Nature. Vinod K. Sangwan and Hong-Sub Lee, postdoctoral fellows advised by Hersam, served as the paper’s co-first authors.

The memtransistor builds upon work published in 2015, in which Hersam, Sangwan, and their collaborators used single-layer molybdenum disulfide (MoS2) to create a three-terminal, gate-tunable memristor for fast, reliable digital memory storage. Memristors, short for “memory resistors,” are resistors in a circuit that “remember” the voltage previously applied to them. Typical memristors are two-terminal electronic devices, which can only control one voltage channel. By transforming the memristor into a three-terminal device, Hersam paved the way for memristors to be used in more complex electronic circuits and systems, such as neuromorphic computing.
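For readers who want a feel for what “remembering” means here, a toy model helps. The sketch below is my own illustration, not the Northwestern device: it uses the generic linear ion-drift memristor model from the literature (the model HP Labs popularized in 2008) with made-up parameter values, and simply shows that the device’s resistance at any moment depends on the history of the voltage applied to it.

```python
import numpy as np

# Toy linear ion-drift memristor (illustrative parameters only).
# R_ON/R_OFF: resistance when fully doped/undoped; w is the doped fraction (0..1).
R_ON, R_OFF = 100.0, 16000.0    # ohms
MU, D = 1e-14, 1e-8             # ion mobility (m^2/(V*s)), film thickness (m)
K = MU * R_ON / D**2            # state-update coefficient

def simulate(voltages, dt=1e-3, w0=0.1):
    """Return the resistance history for an applied voltage waveform."""
    w, history = w0, []
    for v in voltages:
        r = R_ON * w + R_OFF * (1.0 - w)        # memristance set by the state w
        i = v / r                               # instantaneous current (Ohm's law)
        w = min(max(w + K * i * dt, 0.0), 1.0)  # the charge that flows moves the state
        history.append(r)
    return np.array(history)

t = np.linspace(0.0, 2.0 * np.pi, 2000)
r = simulate(np.sin(t))
# The applied voltage is 0 V at both the start and the end of the sweep,
# yet the resistance differs: the device "remembers" the charge it has seen.
print(f"start: {r[0]:.0f} ohms, end: {r[-1]:.0f} ohms")
```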

To develop the memtransistor, Hersam’s team again used atomically thin MoS2 with well-defined grain boundaries, which influence the flow of current. Similar to the way fibers are arranged in wood, atoms are arranged into ordered domains – called “grains” – within a material. When a large voltage is applied, the grain boundaries facilitate atomic motion, causing a change in resistance.

“Because molybdenum disulfide is atomically thin, it is easily influenced by applied electric fields,” Hersam explained. “This property allows us to make a transistor. The memristor characteristics come from the fact that the defects in the material are relatively mobile, especially in the presence of grain boundaries.”

But unlike his previous memristor, which used individual, small flakes of MoS2, Hersam’s memtransistor makes use of a continuous film of polycrystalline MoS2 that comprises a large number of smaller flakes. This enabled the research team to scale up the device from one flake to many devices across an entire wafer.

“When the length of the device is larger than the individual grain size, you are guaranteed to have grain boundaries in every device across the wafer,” Hersam said. “Thus, we see reproducible, gate-tunable memristive responses across large arrays of devices.”

After fabricating memtransistors uniformly across an entire wafer, Hersam’s team added more electrical contacts. Typical transistors and Hersam’s previously developed memristor each have three terminals. In their new paper, however, the team realized a seven-terminal device, in which one terminal controls the current among the other six terminals.

“This is even more similar to neurons in the brain,” Hersam said, “because in the brain, we don’t usually have one neuron connected to only one other neuron. Instead, one neuron is connected to multiple other neurons to form a network. Our device structure allows multiple contacts, which is similar to the multiple synapses in neurons.”

Next, Hersam and his team are working to make the memtransistor faster and smaller. Hersam also plans to continue scaling up the device for manufacturing purposes.

“We believe that the memtransistor can be a foundational circuit element for new forms of neuromorphic computing,” he said. “However, making dozens of devices, as we have done in our paper, is different than making a billion, which is done with conventional transistor technology today. Thus far, we do not see any fundamental barriers that will prevent further scale up of our approach.”

The researchers have made this illustration available,

Caption: This is the memtransistor symbol overlaid on an artistic rendering of a hypothetical circuit layout in the shape of a brain. Credit: Hersam Research Group

Here’s a link to and a citation for the paper,

Multi-terminal memtransistors from polycrystalline monolayer molybdenum disulfide by Vinod K. Sangwan, Hong-Sub Lee, Hadallia Bergeron, Itamar Balla, Megan E. Beck, Kan-Sheng Chen, & Mark C. Hersam. Nature volume 554, pages 500–504 (22 February 2018) doi:10.1038/nature25747 Published online: 21 February 2018

This paper is behind a paywall.

The team’s earlier work referenced in the news release was featured here in an April 10, 2015 posting.

Dexter Johnson

From a Feb. 23, 2018 posting by Dexter Johnson on the Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website),

While this all seems promising, one of the big shortcomings in neuromorphic computing has been that it doesn’t mimic the brain in a very important way. In the brain, for every neuron there are a thousand synapses—the junctions across which electrical signals pass between the neurons of the brain. This poses a problem because a transistor only has a single output terminal, hardly an accommodating architecture for multiplying signals.

Now researchers at Northwestern University, led by Mark Hersam, have developed a new device that combines memristors—two-terminal non-volatile memory devices based on resistance switching—with transistors to create what Hersam and his colleagues have dubbed a “memtransistor” that performs both memory storage and information processing.

This most recent research builds on work that Hersam and his team conducted back in 2015 in which the researchers developed a three-terminal, gate-tunable memristor that operated like a kind of synapse.

While this work was recognized as mimicking the low-power computing of the human brain, critics didn’t really believe that it was acting like a neuron since it could only transmit a signal from one artificial neuron to another. This was far short of a human brain that is capable of making tens of thousands of such connections.

“Traditional memristors are two-terminal devices, whereas our memtransistors combine the non-volatility of a two-terminal memristor with the gate-tunability of a three-terminal transistor,” said Hersam to IEEE Spectrum. “Our device design accommodates additional terminals, which mimic the multiple synapses in neurons.”

Hersam believes that the unique attributes of these multi-terminal memtransistors are likely to present a range of new opportunities for non-volatile memory and neuromorphic computing.

If you have the time and the interest, Dexter’s post provides more context,

Santiago Ramón y Cajal and the butterflies of the soul

The Cajal exhibit of drawings was here in Vancouver (Canada) this last fall (2017) and I still carry the memory of that glorious experience (see my Sept. 11, 2017 posting for more about the show and associated events). It seems Cajal’s drawings drew a similar response in New York City, from a January 18, 2018 article by Roberta Smith for the New York Times,

It’s not often that you look at an exhibition with the help of the very apparatus that is its subject. But so it is with “The Beautiful Brain: The Drawings of Santiago Ramón y Cajal” at the Grey Art Gallery at New York University, one of the most unusual, ravishing exhibitions of the season.

The show finished its run on March 31, 2018 and is now on its way to the Massachusetts Institute of Technology (MIT) in Cambridge, Massachusetts for its opening on May 3, 2018. It looks like they have an exciting lineup of events to go along with the exhibit (from MIT’s The Beautiful Brain: The Drawings of Santiago Ramón y Cajal exhibit and event page),

SUMMER PROGRAMS

ONGOING

Spotlight Tours
Explorations led by local and Spanish scientists, artists, and entrepreneurs who will share their unique perspectives on particular aspects of the exhibition. (2:00 pm on select Tuesdays and Saturdays)

Tue, May 8 – Mark Harnett, Fred and Carole Middleton Career Development Professor at MIT and McGovern Institute Investigator
Sat, May 26 – Marion Boulicault, MIT Graduate Student and Neuroethics Fellow in the Center for Sensorimotor Neural Engineering
Tue, June 5 – Kelsey Allen, Graduate researcher, MIT Center for Brains, Minds, and Machines
Sat, Jun 23 – Francisco Martin-Martinez, Research Scientist in MIT’s Laboratory for Atomistic & Molecular Mechanics and President of the Spanish Foundation for Science and Technology
Sat, Jul 21 – Alex Gomez-Marin, Principal Investigator of the Behavior of Organisms Laboratory in the Instituto de Neurociencias, Spain
Tue, Jul 31 – Julie Pryor, Director of Communications at the McGovern Institute for Brain Research at MIT
Tue, Aug 28 – Satrajit Ghosh, Principal Research Scientist at the McGovern Institute for Brain Research at MIT, Assistant Professor in the Department of Otolaryngology at Harvard Medical School, and faculty member in the Speech and Hearing Biosciences and Technology program in the Harvard Division of Medical Sciences

Idea Hub
Drop in and explore expansion microscopy in our maker-space.

Visualizing Science Workshop
Experiential learning with micro-scale biological images. (pre-registration required)

Gallery Demonstrations
Researchers share the latest on neural anatomy, signal transmission, and modern imaging techniques.

EVENTS

Teen Science Café: Mindful Matters
MIT researchers studying the brain share their mind-blowing findings.

Neuron Paint Night
Create a painting of cerebral cortex neurons and learn about the EyeWire citizen science game.

Cerebral Cinema Series
Hear from researchers and then compare real science to depictions on the big screen.

Brainy Trivia
Test your brain power in a night of science trivia and short, snappy research talks.

Come back to see our exciting lineup for the fall!

If you don’t have a chance to see the show or if you’d like a preview, I encourage you to read Smith’s article as it has embedded several Cajal drawings and rendered them exceptionally well.

For those who like a little contemporary (and related) science with their art, there’s a March 30, 2018 Harvard Medical School (HMS) news release by Kevin Jang (also on EurekAlert), Note: All links save one have been removed,

Drawing of the cells of the chick cerebellum by Santiago Ramón y Cajal, from “Estructura de los centros nerviosos de las aves,” Madrid, circa 1905

Modern neuroscience, for all its complexity, can trace its roots directly to a series of pen-and-paper sketches rendered by Nobel laureate Santiago Ramón y Cajal in the late 19th and early 20th centuries.

His observations and drawings exposed the previously hidden composition of the brain, revealing neuronal cell bodies and delicate projections that connect individual neurons together into intricate networks.

As he explored the nervous systems of various organisms under his microscope, a natural question arose: What makes a human brain different from the brain of any other species?

At least part of the answer, Ramón y Cajal hypothesized, lay in a specific class of neuron—one found in a dazzling variety of shapes and patterns of connectivity, and present in higher proportions in the human brain than in the brains of other species. He dubbed them the “butterflies of the soul.”

Known as interneurons, these cells play critical roles in transmitting information between sensory and motor neurons, and, when defective, have been linked to diseases such as schizophrenia, autism and intellectual disability.

Despite more than a century of study, however, it remains unclear why interneurons are so diverse and what specific functions the different subtypes carry out.

Now, in a study published in the March 22 [2018] issue of Nature, researchers from Harvard Medical School, New York Genome Center, New York University and the Broad Institute of MIT and Harvard have detailed for the first time how interneurons emerge and diversify in the brain.

Using single-cell analysis—a technology that allows scientists to track cellular behavior one cell at a time—the team traced the lineage of interneurons from their earliest precursor states to their mature forms in mice. The researchers identified key genetic programs that determine the fate of developing interneurons, as well as when these programs are switched on or off.

The findings serve as a guide for efforts to shed light on interneuron function and may help inform new treatment strategies for disorders involving their dysfunction, the authors said.

“We knew more than 100 years ago that this huge diversity of morphologically interesting cells existed in the brain, but their specific individual roles in brain function are still largely unclear,” said co-senior author Gordon Fishell, HMS professor of neurobiology and a faculty member at the Stanley Center for Psychiatric Research at the Broad.

“Our study provides a road map for understanding how and when distinct interneuron subtypes develop, giving us unprecedented insight into the biology of these cells,” he said. “We can now investigate interneuron properties as they emerge, unlock how these important cells function and perhaps even intervene when they fail to develop correctly in neuropsychiatric disease.”

A hippocampal interneuron. Image: Biosciences Imaging Gp, Soton, Wellcome Trust via Creative Commons

Origins and Fates

In collaboration with co-senior author Rahul Satija, core faculty member of the New York Genome Center, Fishell and colleagues analyzed brain regions in developing mice known to contain precursor cells that give rise to interneurons.

Using Drop-seq, a single-cell sequencing technique created by researchers at HMS and the Broad, the team profiled gene expression in thousands of individual cells at multiple time points.

This approach overcomes a major limitation in past research, which could analyze only the average activity of mixtures of many different cells.

In the current study, the team found that the precursor state of all interneurons had similar gene expression patterns despite originating in three separate brain regions and giving rise to 14 or more interneuron subtypes—a number still under debate as researchers learn more about these cells.

“Mature interneuron subtypes exhibit incredible diversity. Their morphology and patterns of connectivity and activity are so different from each other, but our results show that the first steps in their maturation are remarkably similar,” said Satija, who is also an assistant professor of biology at New York University.

“They share a common developmental trajectory at the earliest stages, but the seeds of what will cause them to diverge later—a handful of genes—are present from the beginning,” Satija said.

As they profiled cells at later stages in development, the team observed the initial emergence of four interneuron “cardinal” classes, which give rise to distinct fates. Cells were committed to these fates even in the early embryo. By developing a novel computational strategy to link precursors with adult subtypes, the researchers identified individual genes that were switched on and off when cells began to diversify.

For example, they found that the gene Mef2c—mutations of which are linked to Alzheimer’s disease, schizophrenia and neurodevelopmental disorders in humans—is an early embryonic marker for a specific interneuron subtype known as Pvalb neurons. When they deleted Mef2c in animal models, Pvalb neurons failed to develop.

These early genes likely orchestrate the execution of subsequent genetic subroutines, such as ones that guide interneuron subtypes as they migrate to different locations in the brain and ones that help form unique connection patterns with other neural cell types, the authors said.

The identification of these genes and their temporal activity now provide researchers with specific targets to investigate the precise functions of interneurons, as well as how neurons diversify in general, according to the authors.

“One of the goals of this project was to address an incredibly fascinating developmental biology question, which is how individual progenitor cells decide between different neuronal fates,” Satija said. “In addition to these early markers of interneuron divergence, we found numerous additional genes that increase in expression, many dramatically, at later time points.”

The association of some of these genes with neuropsychiatric diseases promises to provide a better understanding of these disorders and the development of therapeutic strategies to treat them, a particularly important notion given the paucity of new treatments, the authors said.

Over the past 50 years, there have been no fundamentally new classes of neuropsychiatric drugs, only newer versions of old drugs, the researchers pointed out.

“Our repertoire is no better than it was in the 1970s,” Fishell said.

“Neuropsychiatric diseases likely reflect the dysfunction of very specific cell types. Our study puts forward a clear picture of what cells to look at as we work to shed light on the mechanisms that underlie these disorders,” Fishell said. “What we will find remains to be seen, but we have new, strong hypotheses that we can now test.”

As a resource for the research community, the study data and software are open-source and freely accessible online.

A gallery of the drawings of Santiago Ramón y Cajal is currently on display in New York City, and will open at the MIT Museum in Cambridge, Massachusetts, in May 2018.

Christian Mayer, Christoph Hafemeister and Rachel Bandler served as co-lead authors on the study.

This work was supported by the National Institutes of Health (R01 NS074972, R01 NS081297, MH071679-12, DP2-HG-009623, F30MH114462, T32GM007308, F31NS103398), the European Molecular Biology Organization, the National Science Foundation and the Simons Foundation.

Here’s a link to and a citation for the paper,

Developmental diversification of cortical inhibitory interneurons by Christian Mayer, Christoph Hafemeister, Rachel C. Bandler, Robert Machold, Renata Batista-Brito, Xavier Jaglin, Kathryn Allaway, Andrew Butler, Gord Fishell, & Rahul Satija. Nature volume 555, pages 457–462 (22 March 2018) doi:10.1038/nature25999 Published online: 05 March 2018

This paper is behind a paywall.

New path to viable memristor/neuristor?

I first stumbled onto memristors and the possibility of brain-like computing sometime in 2008 (around the time that R. Stanley Williams and his team at HP Labs first published the results of their research linking Dr. Leon Chua’s memristor theory to their attempts to shrink computer chips). In the almost 10 years since, scientists have worked hard to utilize memristors in the field of neuromorphic (brain-like) engineering/computing.

A January 22, 2018 news item on phys.org describes the latest work,

When it comes to processing power, the human brain just can’t be beat.

Packed within the squishy, football-sized organ are somewhere around 100 billion neurons. At any given moment, a single neuron can relay instructions to thousands of other neurons via synapses—the spaces between neurons, across which neurotransmitters are exchanged. There are more than 100 trillion synapses that mediate neuron signaling in the brain, strengthening some connections while pruning others, in a process that enables the brain to recognize patterns, remember facts, and carry out other learning tasks, at lightning speeds.

Researchers in the emerging field of “neuromorphic computing” have attempted to design computer chips that work like the human brain. Instead of carrying out computations based on binary, on/off signaling, like digital chips do today, the elements of a “brain on a chip” would work in an analog fashion, exchanging a gradient of signals, or “weights,” much like neurons that activate in various ways depending on the type and number of ions that flow across a synapse.

In this way, small neuromorphic chips could, like the brain, efficiently process millions of streams of parallel computations that are currently only possible with large banks of supercomputers. But one significant hangup on the way to such portable artificial intelligence has been the neural synapse, which has been particularly tricky to reproduce in hardware.

Now engineers at MIT [Massachusetts Institute of Technology] have designed an artificial synapse in such a way that they can precisely control the strength of an electric current flowing across it, similar to the way ions flow between neurons. The team has built a small chip with artificial synapses, made from silicon germanium. In simulations, the researchers found that the chip and its synapses could be used to recognize samples of handwriting, with 95 percent accuracy.

A January 22, 2018 MIT news release by Jennifer Chu (also on EurekAlert), which originated the news item, provides more detail about the research,

The design, published today [January 22, 2018] in the journal Nature Materials, is a major step toward building portable, low-power neuromorphic chips for use in pattern recognition and other learning tasks.

The research was led by Jeehwan Kim, the Class of 1947 Career Development Assistant Professor in the departments of Mechanical Engineering and Materials Science and Engineering, and a principal investigator in MIT’s Research Laboratory of Electronics and Microsystems Technology Laboratories. His co-authors are Shinhyun Choi (first author), Scott Tan (co-first author), Zefan Li, Yunjo Kim, Chanyeol Choi, and Hanwool Yeon of MIT, along with Pai-Yu Chen and Shimeng Yu of Arizona State University.

Too many paths

Most neuromorphic chip designs attempt to emulate the synaptic connection between neurons using two conductive layers separated by a “switching medium,” or synapse-like space. When a voltage is applied, ions should move in the switching medium to create conductive filaments, similarly to how the “weight” of a synapse changes.

But it’s been difficult to control the flow of ions in existing designs. Kim says that’s because most switching mediums, made of amorphous materials, have unlimited possible paths through which ions can travel — a bit like Pachinko, a mechanical arcade game that funnels small steel balls down through a series of pins and levers, which act to either divert or direct the balls out of the machine.

Like Pachinko, existing switching mediums contain multiple paths that make it difficult to predict where ions will make it through. Kim says that can create unwanted nonuniformity in a synapse’s performance.

“Once you apply some voltage to represent some data with your artificial neuron, you have to erase and be able to write it again in the exact same way,” Kim says. “But in an amorphous solid, when you write again, the ions go in different directions because there are lots of defects. This stream is changing, and it’s hard to control. That’s the biggest problem — nonuniformity of the artificial synapse.”

A perfect mismatch

Instead of using amorphous materials as an artificial synapse, Kim and his colleagues looked to single-crystalline silicon, a defect-free semiconducting material made from atoms arranged in a continuously ordered alignment. The team sought to create a precise, one-dimensional line defect, or dislocation, through the silicon, through which ions could predictably flow.

To do so, the researchers started with a wafer of silicon, resembling, at microscopic resolution, a chicken-wire pattern. They then grew a similar pattern of silicon germanium — a material also used commonly in transistors — on top of the silicon wafer. Silicon germanium’s lattice is slightly larger than that of silicon, and Kim found that together, the two perfectly mismatched materials can form a funnel-like dislocation, creating a single path through which ions can flow.

The researchers fabricated a neuromorphic chip consisting of artificial synapses made from silicon germanium, each synapse measuring about 25 nanometers across. They applied voltage to each synapse and found that all synapses exhibited more or less the same current, or flow of ions, with about a 4 percent variation between synapses — a much more uniform performance compared with synapses made from amorphous material.

They also tested a single synapse over multiple trials, applying the same voltage over 700 cycles, and found the synapse exhibited the same current, with just 1 percent variation from cycle to cycle.

“This is the most uniform device we could achieve, which is the key to demonstrating artificial neural networks,” Kim says.

Writing, recognized

As a final test, Kim’s team explored how its device would perform if it were to carry out actual learning tasks — specifically, recognizing samples of handwriting, which researchers consider to be a first practical test for neuromorphic chips. Such chips would consist of “input/hidden/output neurons,” each connected to other “neurons” via filament-based artificial synapses.

Scientists believe such stacks of neural nets can be made to “learn.” For instance, when fed an input that is a handwritten ‘1,’ with an output that labels it as ‘1,’ certain output neurons will be activated by input neurons and weights from an artificial synapse. When more examples of handwritten ‘1s’ are fed into the same chip, the same output neurons may be activated when they sense similar features between different samples of the same letter, thus “learning” in a fashion similar to what the brain does.

Kim and his colleagues ran a computer simulation of an artificial neural network consisting of three sheets of neural layers connected via two layers of artificial synapses, the properties of which they based on measurements from their actual neuromorphic chip. They fed into their simulation tens of thousands of samples from a handwriting-recognition dataset commonly used by neuromorphic designers, and found that their neural network hardware recognized handwritten samples 95 percent of the time, compared to the 97 percent accuracy of existing software algorithms.
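The release doesn’t include the simulation code, but the idea of testing a trained network under device non-idealities is easy to sketch. The following is my own toy illustration, not the authors’ model: it trains a small input/hidden/output network in software (two weight matrices standing in for the chip’s two layers of artificial synapses), then perturbs the weights with the roughly 4 percent device-to-device and 1 percent cycle-to-cycle variations quoted above and re-measures accuracy.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Train an "ideal" software network on a small handwritten-digit dataset.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X / 16.0, y, random_state=0)
net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
net.fit(X_train, y_train)
print("ideal accuracy:", net.score(X_test, y_test))

# Emulate hardware synapses: each weight carries a fixed ~4% device-to-device
# error, plus a fresh ~1% cycle-to-cycle error every time the array is "read".
rng = np.random.default_rng(0)
ideal = [w.copy() for w in net.coefs_]
device_error = [w * rng.normal(0.0, 0.04, w.shape) for w in ideal]
for cycle in range(3):
    for w, w0, dev in zip(net.coefs_, ideal, device_error):
        w[:] = w0 + dev + w0 * rng.normal(0.0, 0.01, w0.shape)  # in-place update
    print(f"cycle {cycle} accuracy with variation:", net.score(X_test, y_test))
```

With variations this small the accuracy barely moves, which is the point of the uniformity result: a synapse array that drifts by a few percent still computes essentially the same network.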

The team is in the process of fabricating a working neuromorphic chip that can carry out handwriting-recognition tasks, not in simulation but in reality. Looking beyond handwriting, Kim says the team’s artificial synapse design will enable much smaller, portable neural network devices that can perform complex computations that currently are only possible with large supercomputers.

“Ultimately we want a chip as big as a fingernail to replace one big supercomputer,” Kim says. “This opens a stepping stone to produce real artificial hardware.”

This research was supported in part by the National Science Foundation.

Here’s a link to and a citation for the paper,

SiGe epitaxial memory for neuromorphic computing with reproducible high performance based on engineered dislocations by Shinhyun Choi, Scott H. Tan, Zefan Li, Yunjo Kim, Chanyeol Choi, Pai-Yu Chen, Hanwool Yeon, Shimeng Yu, & Jeehwan Kim. Nature Materials (2018) doi:10.1038/s41563-017-0001-5 Published online: 22 January 2018

This paper is behind a paywall.

For the curious I have included a number of links to recent ‘memristor’ postings here,

January 22, 2018: Memristors at Masdar

January 3, 2018: Mott memristor

August 24, 2017: Neuristors and brainlike computing

June 28, 2017: Dr. Wei Lu and bio-inspired ‘memristor’ chips

May 2, 2017: Predicting how a memristor functions

December 30, 2016: Changing synaptic connectivity with a memristor

December 5, 2016: The memristor as computing device

November 1, 2016: The memristor as the ‘missing link’ in bioelectronic medicine?

You can find more by using ‘memristor’ as the search term in the blog search function or on the search engine of your choice.

University of Washington (state) is accelerating nanoscale research with Institute for Nano-Engineered Systems

A December 5, 2017 news item on Nanowerk announced a new research institute at the University of Washington (state),

The University of Washington [UW] has launched a new institute aimed at accelerating research at the nanoscale: the Institute for Nano-Engineered Systems, or NanoES. Housed in a new, multimillion-dollar facility on the UW’s Seattle campus, the institute will pursue impactful advancements in a variety of disciplines — including energy, materials science, computation and medicine. Yet these advancements will be at a technological scale a thousand times smaller than the width of a human hair.

The institute was launched at a reception Dec. 4 [2017] at its headquarters in the $87.8-million Nano Engineering and Sciences Building. During the event, speakers including UW officials and NanoES partners celebrated the NanoES mission to capitalize on the university’s strong record of research at the nanoscale and engage partners in industry at the onset of new projects.

A December 5, 2017 UW news release, which originated the news item, somewhat clarifies the declarations in the two excerpted paragraphs in the above,

The vision of the NanoES, which is part of the UW’s College of Engineering, is to act as a magnet for researchers in nanoscale science and engineering, with a focus on enabling industry partnership and entrepreneurship at the earliest stages of research projects. According to Karl Böhringer, director of the NanoES and a UW professor of electrical engineering and bioengineering, this unique approach will hasten the development of solutions to the field’s most pressing challenges: the manufacturing of scalable, high-yield nano-engineered systems for applications in information processing, energy, health and interconnected life.

“The University of Washington is well known for its expertise in nanoscale materials, processing, physics and biology — as well as its cutting-edge nanofabrication, characterization and testing facilities,” said Böhringer, who stepped down as director of the UW-based Washington Nanofabrication Facility to lead the NanoES. “NanoES will build on these strengths, bringing together people, tools and opportunities to develop nanoscale devices and systems.”

The centerpiece of the NanoES is its headquarters, the Nano Engineering and Sciences Building. The building houses 90,300 square feet of research and learning space, and was funded largely by the College of Engineering and Sound Transit. It contains an active learning classroom, a teaching laboratory and a 3,000-square-foot common area designed expressly to promote the sharing and exchanging of ideas. The remainder includes “incubator-style” office space and more than 40,000 square feet of flexible multipurpose laboratory and instrumentation space. The building’s location and design elements are intended to limit vibrations and electromagnetic interference so it can house sensitive experiments.

NanoES will house research in nanotechnology fields that hold promise for high impact, such as:

  • Augmented humanity, which includes technology to both aid and replace human capability in a way that joins user and machine as one – and foresees portable, wearable, implantable and networked technology for applications such as personalized medical care, among others.
  • Integrated photonics, which ranges from single-photon sensors for health care diagnostic tests to large-scale, integrated networks of photonic devices.
  • Scalable nanomanufacturing, which aims to develop low-cost, high-volume manufacturing processes. These would translate device prototypes constructed in research laboratories into system- and network-level nanomanufacturing methods for applications ranging from the 3-D printing of cell and tissue scaffolds to ultrathin solar cells.

Cutting the ribbon for the NanoES on Dec. 4. Left to right: Karl Böhringer, director of the NanoES and a UW professor of electrical engineering and bioengineering; Nena Golubovic, physical sciences director for IP Group; Mike Bragg, dean of the UW College of Engineering; Jevne Micheau-Cunningham, deputy director of the NanoES. Kathryn Sauber/University of Washington

Collaborations with other UW-based institutions will provide additional resources for the NanoES. Endeavors in scalable nanomanufacturing, for example, will rely on the roll-to-roll processing facility at the UW Clean Energy Institute‘s Washington Clean Energy Testbeds or on advanced surface characterization capabilities at the Molecular Analysis Facility. In addition, the Washington Nanofabrication Facility recently completed a three-year, $37 million upgrade to raise it to an ISO Class 5 nanofabrication facility.

UW faculty and outside collaborators will build new research programs in the Nano Engineering and Sciences Building. Eric Klavins, a UW professor of electrical engineering, recently moved part of his synthetic biology research team to the building, adjacent to his collaborators in the Molecular Engineering & Sciences Institute and the Institute for Protein Design.

“We are extremely excited about the interdisciplinary and collaborative potential of the new space,” said Klavins.

The NanoES also has already produced its first spin-out company, Tunoptix, which was co-founded by Böhringer and recently received startup funding from IP Group, a U.K.-based venture capital firm.

“IP Group is very excited to work with the University of Washington,” said Nena Golubovic, physical sciences director for IP Group. “We are looking forward to the new collaborations and developments in science and technology that will grow from this new partnership.”

Nena Golubovic, physical sciences director for IP Group, delivering remarks at the Dec. 4 opening of NanoES. Kathryn Sauber/University of Washington

“We are eager to work with our partners at the IP Group to bring our technology to the market, and we appreciate their vision and investment in the NanoES Integrated Photonics Initiative,” said Tunoptix entrepreneurial lead Mike Robinson. “NanoES was the ideal environment in which to start our company.”

The NanoES leaders hope to forge similar partnerships with researchers, investors and industry leaders to develop technologies for portable, wearable, implantable and networked nanotechnologies for personalized medical care, a more efficient interconnected life and interconnected mobility. In addition to expertise, personnel and state-of-the-art research space and equipment, the NanoES will provide training, research support and key connections to capital and corporate partners.

“We believe this unique approach is the best way to drive innovations from idea to fabrication to scale-up and testing,” said Böhringer. “Some of the most promising solutions to these huge challenges are rooted in nanotechnology.”

The NanoES is supported by funds from the College of Engineering and the National Science Foundation, as well as capital investments from investors and industry partners.

You can find out more about NanoES here.

NanoFARM: food, agriculture, and nanoparticles

The research focus for the NanoFARM consortium is on pesticides, according to an October 19, 2017 news item on Nanowerk,

The growing, worldwide food production problem may have a tiny solution—nanoparticles, which are being explored as both fertilizers and fungicides for crops.

NanoFARM – a research consortium formed by Carnegie Mellon University [US], the University of Kentucky [US], the University of Vienna [Austria], and Aveiro University [Portugal] – is studying the effects of nanoparticles on agriculture. The four universities received grants from their countries’ respective national science agencies to discover how these tiny particles – some just 4 nanometers in diameter – can revolutionize how farmers grow their food.

An October ??, 2017 Carnegie Mellon University news release by Adam Dove, which originated the news item, fills in a few more details,

“What we’re doing is getting a fundamental understanding of nanoparticle-to-plant interactions to enable future applications,” says Civil and Environmental Engineering (CEE) Professor Greg Lowry, the principal investigator for the nanoFARM project. “With pesticides, less than 5% goes into the crop—the rest just goes into the environment and does harmful things. What we’re trying to do is minimize that waste and corresponding environmental damage by doing a better job of targeting the delivery.”

The teams are looking at related questions: How much nanomaterial is needed to help crops when it comes to driving away pests and delivering nutrients, and how much could potentially hurt plants or surrounding ecosystems?

Applied pesticides and fertilizers are vulnerable to washing away—especially if there’s a rainstorm soon after application. But nanoparticles are not so easily washed off, making them extremely efficient for delivering micronutrients like zinc or copper to crops.

“If you put in zinc oxide nanoparticles instead, it might take days or weeks to dissolve, providing a slow, long-term delivery system,” says Gao, a researcher on the nanoFARM team who studies the rate at which nanoparticles dissolve. His most recent finding is that nanoparticles of copper oxide take 20 to 30 days to dissolve in soil, meaning that they can deliver nutrients to plants at a steady rate over that time period.
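As a back-of-the-envelope illustration (mine, not the consortium’s model), a simple first-order dissolution law, with the time constant chosen so a particle is nearly gone after about 25 days, shows how the release is spread over weeks rather than delivered in one rain-soluble dose:

```python
import numpy as np

# First-order dissolution sketch: m(t) = m0 * exp(-t / tau).
# tau is chosen so ~95% of a copper oxide particle dissolves in ~25 days,
# in line with the 20-30 day window quoted above (illustrative numbers only).
m0_mg, t95_days = 1.0, 25.0
tau = t95_days / np.log(20.0)   # exp(-t95/tau) = 0.05  ->  tau ≈ 8.3 days

for day in (0, 5, 10, 20, 30):
    remaining = m0_mg * np.exp(-day / tau)
    print(f"day {day:2d}: {m0_mg - remaining:.2f} mg released, {remaining:.2f} mg left")
```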

“In many developing countries, a huge number of people are starving,” says Gao. “This kind of technology can help provide food and save energy.”

But Gao’s research is only one piece of the NanoFARM puzzle. Lowry recently traveled to Australia with Ph.D. student Eleanor Spielman-Sun to explore how differently charged nanoparticles were absorbed into wheat plants.

They learned that negatively charged particles were able to move into the veins of a plant—making them a good fit for a farmer who wanted to apply a fungicide. Neutrally charged particles went into the tissue of the leaves, which would be beneficial for growers who wanted to fortify a food with nutritional value.

Lowry said they are still a long way from signing off on a finished product for all crops—right now they are concentrating on tomato and wheat plants. But with the help of their university partners, they are slowly creating new nano-enabled agrochemicals for more efficient and environmentally friendly agriculture.

For more information, you can find the NanoFARM website here.

3-D integration of nanotechnologies on a single computer chip

By integrating nanomaterials, researchers have developed a new technique for building a 3D computer chip capable of handling today’s huge amounts of data. Weirdly, the first two paragraphs of a July 5, 2017 news item on Nanowerk do not convey the main point (Note: A link has been removed),

As embedded intelligence is finding its way into ever more areas of our lives, fields ranging from autonomous driving to personalized medicine are generating huge amounts of data. But just as the flood of data is reaching massive proportions, the ability of computer chips to process it into useful information is stalling.

Now, researchers at Stanford University and MIT have built a new chip to overcome this hurdle. The results are published today in the journal Nature (“Three-dimensional integration of nanotechnologies for computing and data storage on a single chip”), by lead author Max Shulaker, an assistant professor of electrical engineering and computer science at MIT. Shulaker began the work as a PhD student alongside H.-S. Philip Wong and his advisor Subhasish Mitra, professors of electrical engineering and computer science at Stanford. The team also included professors Roger Howe and Krishna Saraswat, also from Stanford.

This image helps to convey the main points,

Instead of relying on silicon-based devices, a new chip uses carbon nanotubes and resistive random-access memory (RRAM) cells. The two are built vertically over one another, making a new, dense 3-D computer architecture with interleaving layers of logic and memory. Courtesy MIT

As I have been quite impressed with their science writing, it was a bit surprising to find that the Massachusetts Institute of Technology (MIT) had issued this news release (news item) as it didn’t follow the ‘rules’, i.e., cover as many of the journalistic questions (Who, What, Where, When, Why, and, sometimes, How) as possible in the first sentence/paragraph. This is written more in the style of a magazine article and so the details take a while to emerge, from a July 5, 2017 MIT news release, which originated the news item,

Computers today comprise different chips cobbled together. There is a chip for computing and a separate chip for data storage, and the connections between the two are limited. As applications analyze increasingly massive volumes of data, the limited rate at which data can be moved between different chips is creating a critical communication “bottleneck.” And with limited real estate on the chip, there is not enough room to place them side-by-side, even as they have been miniaturized (a phenomenon known as Moore’s Law).

To make matters worse, the underlying devices, transistors made from silicon, are no longer improving at the historic rate that they have for decades.

The new prototype chip is a radical change from today’s chips. It uses multiple nanotechnologies, together with a new computer architecture, to reverse both of these trends.

Instead of relying on silicon-based devices, the chip uses carbon nanotubes, which are sheets of 2-D graphene formed into nanocylinders, and resistive random-access memory (RRAM) cells, a type of nonvolatile memory that operates by changing the resistance of a solid dielectric material. The researchers integrated over 1 million RRAM cells and 2 million carbon nanotube field-effect transistors, making the most complex nanoelectronic system ever made with emerging nanotechnologies.

The RRAM and carbon nanotubes are built vertically over one another, making a new, dense 3-D computer architecture with interleaving layers of logic and memory. By inserting ultradense wires between these layers, this 3-D architecture promises to address the communication bottleneck.

However, such an architecture is not possible with existing silicon-based technology, according to the paper’s lead author, Max Shulaker, who is a core member of MIT’s Microsystems Technology Laboratories. “Circuits today are 2-D, since building conventional silicon transistors involves extremely high temperatures of over 1,000 degrees Celsius,” says Shulaker. “If you then build a second layer of silicon circuits on top, that high temperature will damage the bottom layer of circuits.”

The key in this work is that carbon nanotube circuits and RRAM memory can be fabricated at much lower temperatures, below 200 °C. “This means they can be built up in layers without harming the circuits beneath,” Shulaker says.

This provides several simultaneous benefits for future computing systems. “The devices are better: Logic made from carbon nanotubes can be an order of magnitude more energy-efficient compared to today’s logic made from silicon, and similarly, RRAM can be denser, faster, and more energy-efficient compared to DRAM,” Wong says, referring to a conventional memory known as dynamic random-access memory.

“In addition to improved devices, 3-D integration can address another key consideration in systems: the interconnects within and between chips,” Saraswat adds.

“The new 3-D computer architecture provides dense and fine-grained integration of computing and data storage, drastically overcoming the bottleneck from moving data between chips,” Mitra says. “As a result, the chip is able to store massive amounts of data and perform on-chip processing to transform a data deluge into useful information.”

To demonstrate the potential of the technology, the researchers took advantage of the ability of carbon nanotubes to also act as sensors. On the top layer of the chip they placed over 1 million carbon nanotube-based sensors, which they used to detect and classify ambient gases.

Due to the layering of sensing, data storage, and computing, the chip was able to measure each of the sensors in parallel, and then write directly into its memory, generating huge bandwidth, Shulaker says.

Three-dimensional integration is the most promising approach to continue the technology scaling path set forth by Moore’s Law, allowing an increasing number of devices to be integrated per unit volume, according to Jan Rabaey, a professor of electrical engineering and computer science at the University of California at Berkeley, who was not involved in the research.

“It leads to a fundamentally different perspective on computing architectures, enabling an intimate interweaving of memory and logic,” Rabaey says. “These structures may be particularly suited for alternative learning-based computational paradigms such as brain-inspired systems and deep neural nets, and the approach presented by the authors is definitely a great first step in that direction.”

“One big advantage of our demonstration is that it is compatible with today’s silicon infrastructure, both in terms of fabrication and design,” says Howe.

“The fact that this strategy is both CMOS [complementary metal-oxide-semiconductor] compatible and viable for a variety of applications suggests that it is a significant step in the continued advancement of Moore’s Law,” says Ken Hansen, president and CEO of the Semiconductor Research Corporation, which supported the research. “To sustain the promise of Moore’s Law economics, innovative heterogeneous approaches are required as dimensional scaling is no longer sufficient. This pioneering work embodies that philosophy.”

The team is working to improve the underlying nanotechnologies, while exploring the new 3-D computer architecture. For Shulaker, the next step is working with Massachusetts-based semiconductor company Analog Devices to develop new versions of the system that take advantage of its ability to carry out sensing and data processing on the same chip.

So, for example, the devices could be used to detect signs of disease by sensing particular compounds in a patient’s breath, says Shulaker.

“The technology could not only improve traditional computing, but it also opens up a whole new range of applications that we can target,” he says. “My students are now investigating how we can produce chips that do more than just computing.”

“This demonstration of the 3-D integration of sensors, memory, and logic is an exceptionally innovative development that leverages current CMOS technology with the new capabilities of carbon nanotube field–effect transistors,” says Sam Fuller, CTO emeritus of Analog Devices, who was not involved in the research. “This has the potential to be the platform for many revolutionary applications in the future.”

This work was funded by the Defense Advanced Research Projects Agency [DARPA], the National Science Foundation, Semiconductor Research Corporation, STARnet SONIC, and member companies of the Stanford SystemX Alliance.

Here’s a link to and a citation for the paper,

Three-dimensional integration of nanotechnologies for computing and data storage on a single chip by Max M. Shulaker, Gage Hills, Rebecca S. Park, Roger T. Howe, Krishna Saraswat, H.-S. Philip Wong, & Subhasish Mitra. Nature 547, 74–78 (06 July 2017) doi:10.1038/nature22994 Published online 05 July 2017

This paper is behind a paywall.

Using only sunlight to desalinate water

The researchers seem to believe that this new desalination technique could be a game changer. From a June 20, 2017 news item on Azonano,

An off-grid technology that uses only the energy from sunlight to transform salt water into fresh drinking water has been developed as an outcome of federally funded research.

The desalination system uses a combination of light-harvesting nanophotonics and membrane distillation technology and is considered to be the first major innovation from the Center for Nanotechnology Enabled Water Treatment (NEWT), which is a multi-institutional engineering research center located at Rice University.

NEWT’s “nanophotonics-enabled solar membrane distillation” technology (NESMD) integrates tried-and-true water treatment methods with cutting-edge nanotechnology capable of transforming sunlight to heat. …

A June 19, 2017 Rice University news release, which originated the news item, expands on the theme,

More than 18,000 desalination plants operate in 150 countries, but NEWT’s desalination technology is unlike any other used today.

“Direct solar desalination could be a game changer for some of the estimated 1 billion people who lack access to clean drinking water,” said Rice scientist and water treatment expert Qilin Li, a corresponding author on the study. “This off-grid technology is capable of providing sufficient clean water for family use in a compact footprint, and it can be scaled up to provide water for larger communities.”

The oldest method for making freshwater from salt water is distillation. Salt water is boiled, and the steam is captured and run through a condensing coil. Distillation has been used for centuries, but it requires complex infrastructure and is energy inefficient due to the amount of heat required to boil water and produce steam. More than half the cost of operating a water distillation plant is for energy.

An emerging technology for desalination is membrane distillation, where hot salt water is flowed across one side of a porous membrane and cold freshwater is flowed across the other. Water vapor is naturally drawn through the membrane from the hot to the cold side, and because the seawater need not be boiled, the energy requirements are less than they would be for traditional distillation. However, the energy costs are still significant because heat is continuously lost from the hot side of the membrane to the cold.

“Unlike traditional membrane distillation, NESMD benefits from increasing efficiency with scale,” said Rice’s Naomi Halas, a corresponding author on the paper and the leader of NEWT’s nanophotonics research efforts. “It requires minimal pumping energy for optimal distillate conversion, and there are a number of ways we can further optimize the technology to make it more productive and efficient.”

NEWT’s new technology builds upon research in Halas’ lab to create engineered nanoparticles that harvest as much as 80 percent of sunlight to generate steam. By adding low-cost, commercially available nanoparticles to a porous membrane, NEWT has essentially turned the membrane itself into a one-sided heating element that alone heats the water to drive membrane distillation.

“The integration of photothermal heating capabilities within a water purification membrane for direct, solar-driven desalination opens new opportunities in water purification,” said Yale University’s Menachem “Meny” Elimelech, a co-author of the new study and NEWT’s lead researcher for membrane processes.

In the PNAS study, researchers offered proof-of-concept results based on tests with an NESMD chamber about the size of three postage stamps and just a few millimeters thick. The distillation membrane in the chamber contained a specially designed top layer of carbon black nanoparticles infused into a porous polymer. The light-capturing nanoparticles heated the entire surface of the membrane when exposed to sunlight. A half-millimeter-thick layer of salt water flowed atop the carbon-black layer, and a cool freshwater stream flowed below.

Li, the leader of NEWT’s advanced treatment test beds at Rice, said the water production rate increased greatly by concentrating the sunlight. “The intensity got up to 17.5 kilowatts per meter squared when a lens was used to concentrate sunlight by 25 times, and the water production increased to about 6 liters per meter squared per hour.”

Li said NEWT’s research team has already made a much larger system with a panel that is about 70 centimeters by 25 centimeters. Ultimately, she said, NEWT hopes to produce a modular system where users could order as many panels as they needed based on their daily water demands.

“You could assemble these together, just as you would the panels in a solar farm,” she said. “Depending on the water production rate you need, you could calculate how much membrane area you would need. For example, if you need 20 liters per hour, and the panels produce 6 liters per hour per square meter, you would order a little over 3 square meters of panels.”
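Li’s sizing rule is straightforward arithmetic, and the figures quoted above are self-consistent. Here is a minimal sketch (my own, using only the numbers in the release):

```python
# Membrane area from Li's rule of thumb: area = demand / flux.
demand_l_per_hr = 20.0     # household demand from the example above
flux_l_per_m2_hr = 6.0     # production under 25x concentrated sunlight
print(f"area needed: {demand_l_per_hr / flux_l_per_m2_hr:.2f} m^2")  # ~3.33 m^2

# Sanity check on the quoted intensity: concentrating a typical clear-sky
# solar flux of ~0.7 kW/m^2 by 25 times gives the 17.5 kW/m^2 reported.
print(f"intensity: {25 * 0.7:.1f} kW/m^2")
```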

Established by the National Science Foundation in 2015, NEWT aims to develop compact, mobile, off-grid water-treatment systems that can provide clean water to millions of people who lack it and make U.S. energy production more sustainable and cost-effective. NEWT, which is expected to leverage more than $40 million in federal and industrial support over the next decade, is the first NSF Engineering Research Center (ERC) in Houston and only the third in Texas since NSF began the ERC program in 1985. NEWT focuses on applications for humanitarian emergency response, rural water systems and wastewater treatment and reuse at remote sites, including both onshore and offshore drilling platforms for oil and gas exploration.

There is a video but it is focused on the NEWT center rather than any specific water technologies,

For anyone interested in the technology, here’s a link to and a citation for the researchers’ paper,

Nanophotonics-enabled solar membrane distillation for off-grid water purification by Pratiksha D. Dongare, Alessandro Alabastri, Seth Pedersen, Katherine R. Zodrow, Nathaniel J. Hogan, Oara Neumann, Jinjian Wu, Tianxiao Wang, Akshay Deshmukh, Menachem Elimelech, Qilin Li, Peter Nordlander, and Naomi J. Halas. PNAS [Proceedings of the National Academy of Sciences] doi: 10.1073/pnas.1701835114 Published online June 19, 2017

This paper appears to be open access.

4D printing, what is that?

According to an April 12, 2017 news item on ScienceDaily, shapeshifting in response to environmental stimuli is the fourth dimension (I have a link to a posting about 4D printing with another fourth dimension),

A team of researchers from Georgia Institute of Technology and two other institutions has developed a new 3-D printing method to create objects that can permanently transform into a range of different shapes in response to heat.

The team, which included researchers from the Singapore University of Technology and Design (SUTD) and Xi’an Jiaotong University in China, created the objects by printing layers of shape memory polymers with each layer designed to respond differently when exposed to heat.

“This new approach significantly simplifies and increases the potential of 4-D printing by incorporating the mechanical programming post-processing step directly into the 3-D printing process,” said Jerry Qi, a professor in the George W. Woodruff School of Mechanical Engineering at Georgia Tech. “This allows high-resolution 3-D printed components to be designed by computer simulation, 3-D printed, and then directly and rapidly transformed into new permanent configurations by simply heating.”

The research was reported April 12 [2017] in the journal Science Advances, a publication of the American Association for the Advancement of Science. The work is funded by the U.S. Air Force Office of Scientific Research, the U.S. National Science Foundation and the Singapore National Research Foundation through the SUTD DManD Centre.

An April 12, 2017 Singapore University of Technology and Design (SUTD) press release on EurekAlert provides more detail,

4D printing is an emerging technology that allows a 3D-printed component to transform its structure when exposed to heat, light, humidity, or other environmental stimuli. This technology extends the shape creation process beyond 3D printing, resulting in additional design flexibility that can lead to new types of products that can adjust their functionality in response to the environment in a pre-programmed manner. However, 4D printing generally involves complex and time-consuming post-processing steps to mechanically programme the component. Furthermore, the materials are often limited to soft polymers, which limits their applicability in structural scenarios.

A group of researchers from the SUTD, Georgia Institute of Technology, Xi’an Jiaotong University and Zhejiang University has introduced an approach that significantly simplifies and increases the potential of 4D printing by incorporating the mechanical programming post-processing step directly into the 3D printing process. This allows high-resolution 3D-printed components to be designed by computer simulation, 3D printed, and then directly and rapidly transformed into new permanent configurations by using heat. This approach can help save printing time and materials used by up to 90%, while completely eliminating the time-consuming mechanical programming process from the design and manufacturing workflow.

“Our approach involves printing composite materials where at room temperature one material is soft but can be programmed to contain internal stress, and the other material is stiff,” said Dr. Zhen Ding of SUTD. “We use computational simulations to design composite components where the stiff material has a shape and size that prevents the release of the programmed internal stress from the soft material after 3D printing. Upon heating, the stiff material softens and allows the soft material to release its stress. This results in a change – often dramatic – in the product shape.” This new shape is fixed when the product is cooled, with good mechanical stiffness. The research demonstrated many interesting shape-changing parts, including a lattice that can expand by almost 8 times when heated.

This new shape becomes permanent and the composite material will not return to its original 3D-printed shape upon further heating or cooling. “This is because of the shape memory effect,” said Prof. H. Jerry Qi of Georgia Tech. “In the two-material composite design, the stiff material exhibits shape memory, which helps lock the transformed shape into a permanent one. Additionally, the printed structure also exhibits the shape memory effect, i.e. it can then be programmed into further arbitrary shapes that can always be recovered to the new permanent shape, but not to its 3D-printed shape.”

Said SUTD’s Prof. Martin Dunn, “The key advance of this work is a 4D printing method that is dramatically simplified and allows the creation of high-resolution, complex 3D reprogrammable products; it promises to enable myriad applications across biomedical devices, 3D electronics, and consumer products. It even opens the door to a new paradigm in product design, where components are designed from the outset to inhabit multiple configurations during service.”
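For readers who like a toy model, here's a deliberately simplified sketch in Python of the mechanism Ding describes above: a soft phase stores a programmed strain, and a stiff phase blocks its release until heating softens it. All of the numbers (transition temperature, moduli, programmed strain) are invented for illustration; this is not the team's simulation code,

# Toy model of direct 4D printing: a soft phase stores a programmed
# strain; a stiff phase blocks its release until heat softens it.
# All numbers are illustrative, not from the paper.

TRANSITION_TEMP_C = 60.0     # hypothetical softening temperature
STIFF_MODULUS_MPA = 1000.0   # stiff phase below the transition
SOFTENED_MODULUS_MPA = 1.0   # stiff phase above the transition

def stiff_modulus(temp_c):
    """The stiff phase's modulus drops sharply above its transition."""
    return STIFF_MODULUS_MPA if temp_c < TRANSITION_TEMP_C else SOFTENED_MODULUS_MPA

def released_strain(temp_c, programmed_strain=0.5, soft_modulus_mpa=10.0):
    """Fraction of the programmed strain the soft phase can release:
    near zero while the stiff phase dominates, near total once it softens."""
    ratio = soft_modulus_mpa / (soft_modulus_mpa + stiff_modulus(temp_c))
    return programmed_strain * ratio

for t in (20, 40, 80):
    print(f"{t} C: shape change = {released_strain(t):.3f}")
# Below 60 C almost nothing moves; above it, most of the strain releases.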

Here’s a video,


A research team led by the Singapore University of Technology and Design’s (SUTD) Associate Provost of Research, Professor Martin Dunn, has come up with a new and simplified 4D printing method that uses a 3D printer to rapidly create 3D objects, which can permanently transform into a range of different shapes in response to heat.

Here’s a link to and a citation for the paper,

Direct 4D printing via active composite materials by Zhen Ding, Chao Yuan, Xirui Peng, Tiejun Wang, H. Jerry Qi, and Martin L. Dunn. Science Advances 12 Apr 2017: Vol. 3, No. 4, e1602890 DOI: 10.1126/sciadv.1602890

This paper is open access.

Here is a link to a post about another 4th dimension, time,

4D printing: a hydrogel orchid (Jan. 28, 2016)

Biodegradable nanoparticles to program immune cells for cancer treatments

The Fred Hutchinson Cancer Research Center in Seattle, Washington, has announced a proposed cancer treatment using nanoparticle-programmed T cells according to an April 12, 2017 news release (received via email; also on EurekAlert), Note: A link has been removed,

Researchers at Fred Hutchinson Cancer Research Center have developed biodegradable nanoparticles that can be used to genetically program immune cells to recognize and destroy cancer cells — while the immune cells are still inside the body.

In a proof-of-principle study to be published April 17 [2017] in Nature Nanotechnology, the team showed that nanoparticle-programmed immune cells, known as T cells, can rapidly clear or slow the progression of leukemia in a mouse model.

“Our technology is the first that we know of to quickly program tumor-recognizing capabilities into T cells without extracting them for laboratory manipulation,” said Fred Hutch’s Dr. Matthias Stephan, the study’s senior author. “The reprogrammed cells begin to work within 24 to 48 hours and continue to produce these receptors for weeks. This suggests that our technology has the potential to allow the immune system to quickly mount a strong enough response to destroy cancerous cells before the disease becomes fatal.”

Cellular immunotherapies have shown promise in clinical trials, but challenges remain to making them more widely available and to being able to deploy them quickly. At present, it typically takes a couple of weeks to prepare these treatments: the T cells must be removed from the patient, genetically engineered, and grown in special cell-processing facilities before they are infused back into the patient. These new nanoparticles could eliminate the need for such expensive and time-consuming steps.

Although his T-cell programming method is still several steps away from the clinic, Stephan imagines a future in which nanoparticles transform cell-based immunotherapies — whether for cancer or infectious disease — into an easily administered, off-the-shelf treatment that’s available anywhere.

“I’ve never had cancer, but if I did get a cancer diagnosis I would want to start treatment right away,” Stephan said. “I want to make cellular immunotherapy a treatment option the day of diagnosis and have it able to be done in an outpatient setting near where people live.”

The body as a genetic engineering lab

Stephan created his T-cell homing nanoparticles as a way to bring the power of cellular cancer immunotherapy to more people.

In his method, the laborious, time-consuming T-cell programming steps all take place within the body, creating a potential army of “serial killers” within days.

As reported in the new study, Stephan and his team developed biodegradable nanoparticles that turned T cells into CAR T cells, a particular type of cellular immunotherapy that has delivered promising results against leukemia in clinical trials.

The researchers designed the nanoparticles to carry genes that encode chimeric antigen receptors, or CARs, which target and eliminate cancer. They also tagged the nanoparticles with molecules that make them stick like burrs to T cells, which engulf the nanoparticles. The cell’s internal traffic system then directs the nanoparticle to the nucleus, where it dissolves.

The study provides proof-of-principle that the nanoparticles can educate the immune system to target cancer cells. Stephan and his team designed the new CAR genes to integrate into chromosomes housed in the nucleus, making it possible for T cells to begin decoding the new genes and producing CARs within just one or two days.
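As a rough way to visualize that one-to-two-day timescale, here's a toy first-order expression curve in Python (my own sketch; the 24-hour time constant is an assumption for illustration, not a number from the paper),

import math

def car_expression_fraction(hours, time_constant_hr=24.0):
    """Toy first-order rise toward steady-state CAR expression after
    the nanoparticle delivers its genes; the time constant is illustrative."""
    return 1.0 - math.exp(-hours / time_constant_hr)

for h in (12, 24, 48, 72):
    print(f"{h} h: {car_expression_fraction(h):.0%} of steady-state expression")
# Roughly 63% at 24 hours and 86% at 48 hours -- consistent in spirit,
# though not in measured data, with "begin to work within 24 to 48 hours."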

Once the team determined that their CAR-carrying nanoparticles reprogrammed a noticeable percentage of T cells, they tested their efficacy. Using a preclinical mouse model of leukemia, Stephan and his colleagues compared their nanoparticle-programming strategy against chemotherapy followed by an infusion of T cells programmed in the lab to express CARs, which mimics current CAR-T-cell therapy.

The nanoparticle-programmed CAR-T cells held their own against the infused CAR-T cells. Treatment with either approach improved survival to 58 days on average, up from a median survival of about two weeks.
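Taking 14 days for "about two weeks," those figures amount to roughly a fourfold improvement. A quick check (note the caveat in the comment),

untreated_median_days = 14   # "about two weeks"
treated_days = 58            # average survival with either CAR-T approach
print(f"~{treated_days / untreated_median_days:.1f}x longer survival")
# ~4.1x, though this compares an average to a median, so it is only a rough gauge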

The study was funded by Fred Hutch’s Immunotherapy Initiative, the Leukemia & Lymphoma Society, the Phi Beta Psi Sorority, the National Science Foundation and the National Cancer Institute.

Next steps and other applications

Stephan’s nanoparticles still have to clear several hurdles before they get close to human trials. He’s pursuing new strategies to make the gene-delivery-and-expression system safe in people and working with companies that have the capacity to produce clinical-grade nanoparticles. Additionally, Stephan has turned his sights to treating solid tumors and is collaborating to this end with several research groups at Fred Hutch.

And, he said, immunotherapy may be just the beginning. In theory, nanoparticles could be modified to serve the needs of patients whose immune systems need a boost, but who cannot wait for several months for a conventional vaccine to kick in.

“We hope that this can be used for infectious diseases like hepatitis or HIV,” Stephan said. This method may be a way to “provide patients with receptors they don’t have in their own body,” he explained. “You just need a tiny number of programmed T cells to protect against a virus.”

Here’s a link to and a citation for the paper,

In situ programming of leukaemia-specific T cells using synthetic DNA nanocarriers by Tyrel T. Smith, Sirkka B. Stephan, Howell F. Moffett, Laura E. McKnight, Weihang Ji, Diana Reiman, Emmy Bonagofski, Martin E. Wohlfahrt, Smitha P. S. Pillai, & Matthias T. Stephan. Nature Nanotechnology (2017) DOI: 10.1038/nnano.2017.57 Published online 17 April 2017

This paper is behind a paywall.