Tag Archives: Massachusetts Institute of Technology

Robots and a new perspective on disability

I’ve long wondered how disabilities would be viewed in a future where technology could render them largely irrelevant (h/t May 4, 2017 news item on phys.org). A May 4, 2017 essay by Thusha (Gnanthusharan) Rajendran of Heriot-Watt University on TheConversation.com provides a perspective on the possibilities (Note: Links have been removed),

When dealing with the otherness of disability, the Victorians in their shame built huge out-of-sight asylums, and their legacy of “them” and “us” continues to this day. Two hundred years later, technologies offer us an alternative view. The digital age is shattering barriers, and what used to be the norm is now being challenged.

What if we could change the environment, rather than the person? What if a virtual assistant could help a visually impaired person with their online shopping? And what if a robot “buddy” could help a person with autism navigate the nuances of workplace politics? These are just some of the questions that are being asked and which need answers as the digital age challenges our perceptions of normality.

The treatment of people with developmental conditions has a chequered history. In towns and cities across Britain, you will still see large Victorian buildings that were once places to “look after” people with disabilities, that is, remove them from society. Things became worse still during the time of the Nazis with an idealisation of the perfect and rejection of Darwin’s idea of natural diversity.

Today we face similar challenges about differences versus abnormalities. Arguably, current diagnostic systems do not help, because they diagnose the person and not “the system”. So, a child has challenging behaviour, rather than being in distress; the person with autism has a communication disorder rather than simply not being understood.

Natural-born cyborgs

In contrast, the digital world is all about systems. The field of human-computer interaction is about how things work between humans and computers or robots. Philosopher Andy Clark argues that humans have always been natural-born cyborgs – that is, we have always used technology (in its broadest sense) to improve ourselves.

The most obvious example is language itself. In the digital age we can become truly digitally enhanced. How many of us Google something rather than remembering it? How do you feel when you have no access to wi-fi? How much do we favour texting, tweeting and Facebook over face-to-face conversations? How much do we love and need our smartphones?

In the new field of social robotics, my colleagues and I are developing a robot buddy to help adults with autism to understand, for example, if their boss is pleased or displeased with their work. For many adults with autism, it is not the work itself that stops them from having successful careers, it is the social environment surrounding work. From the stress-inducing interview to workplace politics, the modern world of work is a social minefield. It is not easy, at times, for us neurotypicals, but for a person with autism it is a world full of contradictions and implied meaning.

Rajendran goes on to highlight efforts with autistic individuals; he also includes this video of his December 14, 2016 TEDx Heriot-Watt University talk, which largely focuses on his work with robots and autism (Note: This runs approximately 15 mins.),

The talk reminded me of a Feb. 6, 2017 posting (scroll down about 33% of the way) where I discussed a recent book about science communication and its failure to recognize the importance of pop culture in that endeavour. As an example, I used a then recent announcement from MIT (Massachusetts Institute of Technology) about their emotion detection wireless application and the almost simultaneous appearance of that application in a Feb. 2, 2017 episode of The Big Bang Theory (a popular US television comedy) featuring a character who could be seen as autistic making use of the emotion detection device.

In any event, the work described in the MIT news release is very similar to Rajendran’s, although the communication is delivered to the public through entirely different channels: a TEDx talk and TheConversation.com (channels aimed at academics and those with academic interests) versus a pop culture television comedy with broad appeal.

An explanation of neural networks from the Massachusetts Institute of Technology (MIT)

I always enjoy the MIT ‘explainers’ and have been a little sad that I haven’t stumbled across one in a while. Until now, that is. Here’s an April 14, 2017 neural network ‘explainer’ (in its entirety) by Larry Hardesty (?),

In the past 10 years, the best-performing artificial-intelligence systems — such as the speech recognizers on smartphones or Google’s latest automatic translator — have resulted from a technique called “deep learning.”

Deep learning is in fact a new name for an approach to artificial intelligence called neural networks, which have been going in and out of fashion for more than 70 years. Neural networks were first proposed in 1944 by Warren McCullough and Walter Pitts, two University of Chicago researchers who moved to MIT in 1952 as founding members of what’s sometimes called the first cognitive science department.

Neural nets were a major area of research in both neuroscience and computer science until 1969, when, according to computer science lore, they were killed off by the MIT mathematicians Marvin Minsky and Seymour Papert, who a year later would become co-directors of the new MIT Artificial Intelligence Laboratory.

The technique then enjoyed a resurgence in the 1980s, fell into eclipse again in the first decade of the new century, and has returned like gangbusters in the second, fueled largely by the increased processing power of graphics chips.

“There’s this idea that ideas in science are a bit like epidemics of viruses,” says Tomaso Poggio, the Eugene McDermott Professor of Brain and Cognitive Sciences at MIT, an investigator at MIT’s McGovern Institute for Brain Research, and director of MIT’s Center for Brains, Minds, and Machines. “There are apparently five or six basic strains of flu viruses, and apparently each one comes back with a period of around 25 years. People get infected, and they develop an immune response, and so they don’t get infected for the next 25 years. And then there is a new generation that is ready to be infected by the same strain of virus. In science, people fall in love with an idea, get excited about it, hammer it to death, and then get immunized — they get tired of it. So ideas should have the same kind of periodicity!”

Weighty matters

Neural nets are a means of doing machine learning, in which a computer learns to perform some task by analyzing training examples. Usually, the examples have been hand-labeled in advance. An object recognition system, for instance, might be fed thousands of labeled images of cars, houses, coffee cups, and so on, and it would find visual patterns in the images that consistently correlate with particular labels.

Modeled loosely on the human brain, a neural net consists of thousands or even millions of simple processing nodes that are densely interconnected. Most of today’s neural nets are organized into layers of nodes, and they’re “feed-forward,” meaning that data moves through them in only one direction. An individual node might be connected to several nodes in the layer beneath it, from which it receives data, and several nodes in the layer above it, to which it sends data.

To each of its incoming connections, a node will assign a number known as a “weight.” When the network is active, the node receives a different data item — a different number — over each of its connections and multiplies it by the associated weight. It then adds the resulting products together, yielding a single number. If that number is below a threshold value, the node passes no data to the next layer. If the number exceeds the threshold value, the node “fires,” which in today’s neural nets generally means sending the number — the sum of the weighted inputs — along all its outgoing connections.
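To make the node computation concrete, here is a minimal sketch in Python (my illustration, not code from the MIT article): weight each incoming value, sum the products, and pass the result on only if it clears the threshold.

```python
# A minimal sketch (illustrative, not from the MIT article) of one node:
# weight each incoming value, sum the products, and "fire" -- that is,
# pass the weighted sum along -- only if it exceeds the threshold.

def node_output(inputs, weights, threshold):
    total = sum(x * w for x, w in zip(inputs, weights))
    return total if total > threshold else 0.0  # 0.0 stands in for "no data"

# Example: a node with three incoming connections.
print(node_output([0.5, 0.1, 0.9], [0.4, -0.2, 0.7], threshold=0.5))  # ~0.81
```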

When a neural net is being trained, all of its weights and thresholds are initially set to random values. Training data is fed to the bottom layer — the input layer — and it passes through the succeeding layers, getting multiplied and added together in complex ways, until it finally arrives, radically transformed, at the output layer. During training, the weights and thresholds are continually adjusted until training data with the same labels consistently yield similar outputs.

Minds and machines

The neural nets described by McCullough and Pitts in 1944 had thresholds and weights, but they weren’t arranged into layers, and the researchers didn’t specify any training mechanism. What McCullough and Pitts showed was that a neural net could, in principle, compute any function that a digital computer could. The result was more neuroscience than computer science: The point was to suggest that the human brain could be thought of as a computing device.

Neural nets continue to be a valuable tool for neuroscientific research. For instance, particular network layouts or rules for adjusting weights and thresholds have reproduced observed features of human neuroanatomy and cognition, an indication that they capture something about how the brain processes information.

The first trainable neural network, the Perceptron, was demonstrated by the Cornell University psychologist Frank Rosenblatt in 1957. The Perceptron’s design was much like that of the modern neural net, except that it had only one layer with adjustable weights and thresholds, sandwiched between input and output layers.
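To illustrate both the Perceptron and the training procedure described above (random starting weights, nudged until the outputs match the labels), here is a toy single-layer perceptron trained on logical AND. The update rule is the classic Rosenblatt one; the code is my sketch, not anything from the article.

```python
import random

# Toy single-layer perceptron (a sketch, not code from the article):
# one layer of adjustable weights plus a bias/threshold, trained with
# Rosenblatt's update rule on a linearly separable task (logical AND).

def predict(x, w, b):
    return 1 if sum(xi * wi for xi, wi in zip(x, w)) + b > 0 else 0

data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]  # AND gate
w = [random.uniform(-1, 1) for _ in range(2)]                # random start
b = random.uniform(-1, 1)

for _ in range(100):                        # passes over the training data
    for x, target in data:
        error = target - predict(x, w, b)   # -1, 0, or +1
        w = [wi + 0.1 * error * xi for wi, xi in zip(w, x)]
        b += 0.1 * error

print([predict(x, w, b) for x, _ in data])  # converges to [0, 0, 0, 1]
```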

Perceptrons were an active area of research in both psychology and the fledgling discipline of computer science until 1969, when Minsky and Papert published a book titled “Perceptrons,” which demonstrated that executing certain fairly common computations on Perceptrons would be impractically time consuming.

“Of course, all of these limitations kind of disappear if you take machinery that is a little more complicated — like, two layers,” Poggio says. But at the time, the book had a chilling effect on neural-net research.
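The best-known of those limitations is that no single layer of weights can compute XOR; with one hidden layer, as Poggio notes, the problem disappears. Here is a sketch with hand-picked weights (my illustration, not from the article):

```python
# XOR with two layers of hand-picked threshold units (illustrative only).
# No single-layer perceptron can compute this; two layers make it trivial.

def step(x):
    return 1 if x > 0 else 0

def xor(a, b):
    h1 = step(a + b - 0.5)      # hidden unit 1: a OR b
    h2 = step(a + b - 1.5)      # hidden unit 2: a AND b
    return step(h1 - h2 - 0.5)  # output: OR but not AND

print([xor(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [0, 1, 1, 0]
```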

“You have to put these things in historical context,” Poggio says. “They were arguing for programming — for languages like Lisp. Not many years before, people were still using analog computers. It was not clear at all at the time that programming was the way to go. I think they went a little bit overboard, but as usual, it’s not black and white. If you think of this as this competition between analog computing and digital computing, they fought for what at the time was the right thing.”

Periodicity

By the 1980s, however, researchers had developed algorithms for modifying neural nets’ weights and thresholds that were efficient enough for networks with more than one layer, removing many of the limitations identified by Minsky and Papert. The field enjoyed a renaissance.

But intellectually, there’s something unsatisfying about neural nets. Enough training may revise a network’s settings to the point that it can usefully classify data, but what do those settings mean? What image features is an object recognizer looking at, and how does it piece them together into the distinctive visual signatures of cars, houses, and coffee cups? Looking at the weights of individual connections won’t answer that question.

In recent years, computer scientists have begun to come up with ingenious methods for deducing the analytic strategies adopted by neural nets. But in the 1980s, the networks’ strategies were indecipherable. So around the turn of the century, neural networks were supplanted by support vector machines, an alternative approach to machine learning that’s based on some very clean and elegant mathematics.

The recent resurgence in neural networks — the deep-learning revolution — comes courtesy of the computer-game industry. The complex imagery and rapid pace of today’s video games require hardware that can keep up, and the result has been the graphics processing unit (GPU), which packs thousands of relatively simple processing cores on a single chip. It didn’t take long for researchers to realize that the architecture of a GPU is remarkably like that of a neural net.

Modern GPUs enabled the one-layer networks of the 1960s and the two- to three-layer networks of the 1980s to blossom into the 10-, 15-, even 50-layer networks of today. That’s what the “deep” in “deep learning” refers to — the depth of the network’s layers. And currently, deep learning is responsible for the best-performing systems in almost every area of artificial-intelligence research.

Under the hood

The networks’ opacity is still unsettling to theorists, but there’s headway on that front, too. In addition to directing the Center for Brains, Minds, and Machines (CBMM), Poggio leads the center’s research program in Theoretical Frameworks for Intelligence. Recently, Poggio and his CBMM colleagues have released a three-part theoretical study of neural networks.

The first part, which was published last month in the International Journal of Automation and Computing, addresses the range of computations that deep-learning networks can execute and when deep networks offer advantages over shallower ones. Parts two and three, which have been released as CBMM technical reports, address the problems of global optimization, or guaranteeing that a network has found the settings that best accord with its training data, and overfitting, or cases in which the network becomes so attuned to the specifics of its training data that it fails to generalize to other instances of the same categories.

There are still plenty of theoretical questions to be answered, but CBMM researchers’ work could help ensure that neural networks finally break the generational cycle that has brought them in and out of favor for seven decades.

This image from MIT illustrates a ‘modern’ neural network,

Most applications of deep learning use “convolutional” neural networks, in which the nodes of each layer are clustered, the clusters overlap, and each cluster feeds data to multiple nodes (orange and green) of the next layer. Image: Jose-Luis Olivares/MIT

h/t phys.org April 17, 2017

One final note: I wish the folks at MIT had an ‘explainer’ archive. I’m not sure how to find any more ‘explainers’ on MIT’s website.

Worm-inspired gel material and soft robots

The Nereis virens worm inspired new research out of the MIT Laboratory for Atomistic and Molecular Mechanics. Its jaw is made of soft organic material, but is as strong as harder materials such as human dentin. Photo: Alexander Semenov/Wikimedia Commons

What an amazing worm! Here’s more about robots inspired by the Nereis virens worm in a March 20, 2017 news item on Nanowerk,

A new material that naturally adapts to changing environments was inspired by the strength, stability, and mechanical performance of the jaw of a marine worm. The protein material, which was designed and modeled by researchers from the Laboratory for Atomistic and Molecular Mechanics (LAMM) in the Department of Civil and Environmental Engineering (CEE) [at the Massachusetts Institute of Technology {MIT}], and synthesized in collaboration with the Air Force Research Lab (AFRL) at Wright-Patterson Air Force Base, Ohio, expands and contracts based on changing pH levels and ion concentrations. It was developed by studying how the jaw of Nereis virens, a sand worm, forms and adapts in different environments.

The resulting pH- and ion-sensitive material is able to respond and react to its environment. Understanding this naturally-occurring process can be particularly helpful for active control of the motion or deformation of actuators for soft robotics and sensors without using an external power supply or complex electronic controlling devices. It could also be used to build autonomous structures.

A March 20, 2017 MIT news release, which originated the news item, provides more detail,

“The ability of dramatically altering the material properties, by changing its hierarchical structure starting at the chemical level, offers exciting new opportunities to tune the material, and to build upon the natural material design towards new engineering applications,” wrote Markus J. Buehler, the McAfee Professor of Engineering, head of CEE, and senior author of the paper.

The research, recently published in ACS Nano, shows that depending on the ions and pH levels in the environment, the protein material expands and contracts into different geometric patterns. When the conditions change again, the material reverts to its original shape. This makes it particularly useful for smart composite materials with tunable mechanics and self-powered robotics that use pH value and ion condition to change the material stiffness or generate functional deformations.

Finding inspiration in the strong, stable jaw of a marine worm

In order to create bio-inspired materials that can be used for soft robotics, sensors, and other uses — such as that inspired by the Nereis — engineers and scientists at LAMM and AFRL needed to first understand how these materials form in the Nereis worm, and how they ultimately behave in various environments. This understanding involved the development of a model that encompasses all different length scales from the atomic level, and is able to predict the material behavior. This model helps to fully understand the Nereis worm and its exceptional strength.

“Working with AFRL gave us the opportunity to pair our atomistic simulations with experiments,” said CEE research scientist Francisco Martin-Martinez. AFRL experimentally synthesized a hydrogel, a gel-like material made mostly of water, which is composed of recombinant Nvjp-1 protein responsible for the structural stability and impressive mechanical performance of the Nereis jaw. The hydrogel was used to test how the protein shrinks and changes behavior based on pH and ions in the environment.

The Nereis jaw is mostly made of organic matter, meaning it is a soft protein material with a consistency similar to gelatin. In spite of this, its hardness, which has been reported to range between 0.4 and 0.8 gigapascals (GPa), is similar to that of harder materials like human dentin. “It’s quite remarkable that this soft protein material, with a consistency akin to Jell-O, can be as strong as calcified minerals that are found in human dentin and harder materials such as bones,” Buehler said.

At MIT, the researchers looked at the makeup of the Nereis jaw on a molecular scale to see what makes the jaw so strong and adaptive. At this scale, the metal-coordinated crosslinks, the presence of metal in its molecular structure, provide a molecular network that makes the material stronger and at the same time make the molecular bond more dynamic, and ultimately able to respond to changing conditions. At the macroscopic scale, these dynamic metal-protein bonds result in an expansion/contraction behavior.

Combining the protein structural studies from AFRL with the molecular understanding from LAMM, Buehler, Martin-Martinez, CEE Research Scientist Zhao Qin, and former PhD student Chia-Ching Chou ’15, created a multiscale model that is able to predict the mechanical behavior of materials that contain this protein in various environments. “These atomistic simulations help us to visualize the atomic arrangements and molecular conformations that underlay the mechanical performance of these materials,” Martin-Martinez said.

Specifically, using this model the research team was able to design, test, and visualize how different molecular networks change and adapt to various pH levels, taking into account the biological and mechanical properties.

By looking at the molecular and biological makeup of the Nereis virens and using the predictive model of the mechanical behavior of the resulting protein material, the LAMM researchers were able to more fully understand the protein material at different scales and provide a comprehensive understanding of how such protein materials form and behave in differing pH settings. This understanding guides new material designs for soft robots and sensors.

Identifying the link between environmental properties and movement in the material

The predictive model explained how the pH-sensitive materials change shape and behavior, which the researchers used for designing new pH-changing geometric structures. Depending on the original geometric shape tested in the protein material and the properties surrounding it, the LAMM researchers found that the material either spirals or takes a Cypraea shell-like shape when the pH levels are changed. These are only some examples of the potential that this new material could have for developing soft robots, sensors, and autonomous structures.

Using the predictive model, the research team found that the material not only changes form, but also reverts to its original shape when the pH levels change. At the molecular level, histidine amino acids present in the protein bind strongly to the ions in the environment. This very local chemical reaction between amino acids and metal ions has an effect on the overall conformation of the protein at a larger scale. When environmental conditions change, the histidine-metal interactions change accordingly, which affects the protein conformation and in turn the material response.

“Changing the pH or changing the ions is like flipping a switch. You switch it on or off, depending on what environment you select, and the hydrogel expands or contracts,” said Martin-Martinez.

LAMM found that at the molecular level, the structure of the protein material is strengthened when the environment contains zinc ions and certain pH levels. This creates more stable metal-coordinated crosslinks in the material’s molecular structure, which makes the molecules more dynamic and flexible.

This insight into the material’s design and its flexibility is extremely useful for environments with changing pH levels. Its response of changing shape with changing acidity levels could be used for soft robotics. “Most soft robotics require a power supply to drive the motion and to be controlled by complex electronic devices. Our work toward designing of multifunctional material may provide another pathway to directly control the material property and deformation without electronic devices,” said Qin.

By studying and modeling the molecular makeup and the behavior of the primary protein responsible for the mechanical properties ideal for Nereis jaw performance, the LAMM researchers are able to link environmental properties to movement in the material and have a more comprehensive understanding of the strength of the Nereis jaw.

Here’s a link to and a citation for the paper,

Ion Effect and Metal-Coordinated Cross-Linking for Multiscale Design of Nereis Jaw Inspired Mechanomutable Materials by Chia-Ching Chou, Francisco J. Martin-Martinez, Zhao Qin, Patrick B. Dennis, Maneesh K. Gupta, Rajesh R. Naik, and Markus J. Buehler. ACS Nano, 2017, 11 (2), pp 1858–1868 DOI: 10.1021/acsnano.6b07878 Publication Date (Web): February 6, 2017

Copyright © 2017 American Chemical Society

This paper is behind a paywall.

Tree-on-a-chip

It’s usually organ-on-a-chip or lab-on-a-chip or human-on-a-chip; this is my first tree-on-a-chip.

Engineers have designed a microfluidic device they call a “tree-on-a-chip,” which mimics the pumping mechanism of trees and other plants. Courtesy: MIT

From a March 20, 2017 news item on phys.org,

Trees and other plants, from towering redwoods to diminutive daisies, are nature’s hydraulic pumps. They are constantly pulling water up from their roots to the topmost leaves, and pumping sugars produced by their leaves back down to the roots. This constant stream of nutrients is shuttled through a system of tissues called xylem and phloem, which are packed together in woody, parallel conduits.

Now engineers at MIT [Massachusetts Institute of Technology] and their collaborators have designed a microfluidic device they call a “tree-on-a-chip,” which mimics the pumping mechanism of trees and plants. Like its natural counterparts, the chip operates passively, requiring no moving parts or external pumps. It is able to pump water and sugars through the chip at a steady flow rate for several days. The results are published this week in Nature Plants.

A March 20, 2017 MIT news release by Jennifer Chu, which originated the news item, describes the work in more detail,

Anette “Peko” Hosoi, professor and associate department head for operations in MIT’s Department of Mechanical Engineering, says the chip’s passive pumping may be leveraged as a simple hydraulic actuator for small robots. Engineers have found it difficult and expensive to make tiny, movable parts and pumps to power complex movements in small robots. The team’s new pumping mechanism may enable robots whose motions are propelled by inexpensive, sugar-powered pumps.

“The goal of this work is cheap complexity, like one sees in nature,” Hosoi says. “It’s easy to add another leaf or xylem channel in a tree. In small robotics, everything is hard, from manufacturing, to integration, to actuation. If we could make the building blocks that enable cheap complexity, that would be super exciting. I think these [microfluidic pumps] are a step in that direction.”

Hosoi’s co-authors on the paper are lead author Jean Comtet, a former graduate student in MIT’s Department of Mechanical Engineering; Kaare Jensen of the Technical University of Denmark; and Robert Turgeon and Abraham Stroock, both of Cornell University.

A hydraulic lift

The group’s tree-inspired work grew out of a project on hydraulic robots powered by pumping fluids. Hosoi was interested in designing hydraulic robots at the small scale that could perform actions similar to much bigger robots like Boston Dynamics’ Big Dog, a four-legged, Saint Bernard-sized robot that runs and jumps over rough terrain, powered by hydraulic actuators.

“For small systems, it’s often expensive to manufacture tiny moving pieces,” Hosoi says. “So we thought, ‘What if we could make a small-scale hydraulic system that could generate large pressures, with no moving parts?’ And then we asked, ‘Does anything do this in nature?’ It turns out that trees do.”

The general understanding among biologists has been that water, propelled by surface tension, travels up a tree’s channels of xylem, then diffuses through a semipermeable membrane and down into channels of phloem that contain sugar and other nutrients.

The more sugar there is in the phloem, the more water flows from xylem to phloem to balance out the sugar-to-water gradient, in a passive process known as osmosis. The resulting water flow flushes nutrients down to the roots. Trees and plants are thought to maintain this pumping process as more water is drawn up from their roots.

“This simple model of xylem and phloem has been well-known for decades,” Hosoi says. “From a qualitative point of view, this makes sense. But when you actually run the numbers, you realize this simple model does not allow for steady flow.”

In fact, engineers have previously attempted to design tree-inspired microfluidic pumps, fabricating parts that mimic xylem and phloem. But they found that these designs quickly stopped pumping within minutes.

It was Hosoi’s student Comtet who identified a third essential part to a tree’s pumping system: its leaves, which produce sugars through photosynthesis. Comtet’s model includes this additional source of sugars that diffuse from the leaves into a plant’s phloem, increasing the sugar-to-water gradient, which in turn maintains a constant osmotic pressure, circulating water and nutrients continuously throughout a tree.
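The difference the sugar source makes can be seen in a toy calculation (my sketch in arbitrary units, not the team’s model): treat the osmotic flow as proportional to the phloem’s sugar concentration, and let the flow itself wash sugar out. Without replenishment the flow decays; with a constant source it settles into a steady state.

```python
# Toy osmotic-pump model (arbitrary units; my sketch, not the team's model).
# Flow q is proportional to phloem sugar concentration c (osmosis), and the
# flow carries sugar away; a constant source term (the "leaf") can balance
# that washout and sustain steady pumping.

def final_flow(source_rate, c=1.0, k=0.5, dt=0.01, steps=20000):
    for _ in range(steps):
        q = k * c                        # osmotic flow ~ sugar concentration
        c += (source_rate - q * c) * dt  # sugar added minus sugar washed out
    return k * c

print("no sugar source:  ", round(final_flow(0.0), 3))  # decays toward zero
print("with sugar source:", round(final_flow(0.2), 3))  # steady nonzero flow
```

The numbers are arbitrary, but the qualitative behavior matches the account above: the sugar source is what keeps the pump running.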

Running on sugar

With Comtet’s hypothesis in mind, Hosoi and her team designed their tree-on-a-chip, a microfluidic pump that mimics a tree’s xylem, phloem, and most importantly, its sugar-producing leaves.

To make the chip, the researchers sandwiched together two plastic slides, through which they drilled small channels to represent xylem and phloem. They filled the xylem channel with water, and the phloem channel with water and sugar, then separated the two slides with a semipermeable material to mimic the membrane between xylem and phloem. They placed another membrane over the slide containing the phloem channel, and set a sugar cube on top to represent the additional source of sugar diffusing from a tree’s leaves into the phloem. They hooked the chip up to a tube, which fed water from a tank into the chip.

With this simple setup, the chip was able to passively pump water from the tank through the chip and out into a beaker, at a constant flow rate for several days, as opposed to previous designs that only pumped for several minutes.

“As soon as we put this sugar source in, we had it running for days at a steady state,” Hosoi says. “That’s exactly what we need. We want a device we can actually put in a robot.”

Hosoi envisions that the tree-on-a-chip pump may be built into a small robot to produce hydraulically powered motions, without requiring active pumps or parts.

“If you design your robot in a smart way, you could absolutely stick a sugar cube on it and let it go,” Hosoi says.

This research was supported, in part, by the Defense Advanced Research Projects Agency [DARPA].

This research’s funding connection to DARPA reminded me that MIT has an Institute of Soldier Nanotechnologies.

Getting back to the tree-on-a-chip, here’s a link to and a citation for the paper,

Passive phloem loading and long-distance transport in a synthetic tree-on-a-chip by Jean Comtet, Kaare H. Jensen, Robert Turgeon, Abraham D. Stroock & A. E. Hosoi. Nature Plants 3, Article number: 17032 (2017)  doi:10.1038/nplants.2017.32 Published online: 20 March 2017

This paper is behind a paywall.

Formation of a time (temporal) crystal

It’s a crystal arranged in time according to a March 8, 2017 University of Texas at Austin news release (also on EurekAlert) (Note: Links have been removed),

Salt, snowflakes and diamonds are all crystals, meaning their atoms are arranged in 3-D patterns that repeat. Today scientists are reporting in the journal Nature on the creation of a phase of matter, dubbed a time crystal, in which atoms move in a pattern that repeats in time rather than in space.

The atoms in a time crystal never settle down into what’s known as thermal equilibrium, a state in which they all have the same amount of heat. It’s one of the first examples of a broad new class of matter, called nonequilibrium phases, that have been predicted but until now have remained out of reach. Like explorers stepping onto an uncharted continent, physicists are eager to explore this exotic new realm.

“This opens the door to a whole new world of nonequilibrium phases,” says Andrew Potter, an assistant professor of physics at The University of Texas at Austin. “We’ve taken these theoretical ideas that we’ve been poking around for the last couple of years and actually built it in the laboratory. Hopefully, this is just the first example of these, with many more to come.”

Some of these nonequilibrium phases of matter may prove useful for storing or transferring information in quantum computers.

Potter is part of the team led by researchers at the University of Maryland who successfully created the first time crystal from ions, or electrically charged atoms, of the element ytterbium. By applying just the right electrical field, the researchers levitated 10 of these ions above a surface like a magician’s assistant. Next, they whacked the atoms with a laser pulse, causing them to flip head over heels. Then they hit them again and again in a regular rhythm. That set up a pattern of flips that repeated in time.

Crucially, Potter noted, the pattern of atom flips repeated only half as fast as the laser pulses. This would be like pounding on a bunch of piano keys twice a second and notes coming out only once a second. This weird quantum behavior was a signature that he and his colleagues predicted, and helped confirm that the result was indeed a time crystal.
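That half-speed signature is easy to illustrate with a toy sketch (mine, not the experiment’s physics): if each laser pulse flips every spin, the system returns to its starting configuration only after two pulses, so its pattern repeats at half the drive frequency.

```python
# Toy illustration of period doubling (not the experiment's physics):
# each drive pulse flips the spin, so the spin pattern repeats every
# TWO pulses -- a response at half the drive frequency.

spin = +1
for pulse in range(1, 9):
    spin = -spin  # the laser pulse flips the spin head over heels
    print(f"pulse {pulse}: spin = {spin:+d}")
# Drive period: 1 pulse. Response period: 2 pulses.
```

What makes the real system a time crystal rather than a trivial alternation is rigidity: the half-frequency pattern stays locked in even when the pulses are slightly imperfect, which the toy above does not capture.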

The team also consists of researchers at the National Institute of Standards and Technology, the University of California, Berkeley and Harvard University, in addition to the University of Maryland and UT Austin.

Frank Wilczek, a Nobel Prize-winning physicist at the Massachusetts Institute of Technology, was teaching a class about crystals in 2012 when he wondered whether a phase of matter could be created such that its atoms move in a pattern that repeats in time, rather than just in space.

Potter and his colleague Norman Yao at UC Berkeley created a recipe for building such a time crystal and developed ways to confirm that, once you had built such a crystal, it was in fact the real deal. That theoretical work was announced publicly last August and then published in January in the journal Physical Review Letters.

A team led by Chris Monroe of the University of Maryland in College Park built a time crystal, and Potter and Yao helped confirm that it indeed had the properties they predicted. The team announced that breakthrough—constructing a working time crystal—last September and is publishing the full, peer-reviewed description today in Nature.

A team led by Mikhail Lukin at Harvard University created a second time crystal a month after the first team, in that case from a diamond.

Here’s a link to and a citation for the paper,

Observation of a discrete time crystal by J. Zhang, P. W. Hess, A. Kyprianidis, P. Becker, A. Lee, J. Smith, G. Pagano, I.-D. Potirniche, A. C. Potter, A. Vishwanath, N. Y. Yao, & C. Monroe. Nature 543, 217–220 (09 March 2017) doi:10.1038/nature21413 Published online 08 March 2017

This paper is behind a paywall.

3D printing with cellulose

The scientists seem quite excited about their work with 3D printing and cellulose. From a March 3, 2017 MIT (Massachusetts Institute of Technology) news release (also on EurekAlert),

For centuries, cellulose has formed the basis of the world’s most abundantly printed-on material: paper. Now, thanks to new research at MIT, it may also become an abundant material to print with — potentially providing a renewable, biodegradable alternative to the polymers currently used in 3-D printing materials.

“Cellulose is the most abundant organic polymer in the world,” says MIT postdoc Sebastian Pattinson, lead author of a paper describing the new system in the journal Advanced Materials Technologies. The paper is co-authored by associate professor of mechanical engineering A. John Hart, the Mitsui Career Development Professor in Contemporary Technology.

Cellulose, Pattinson explains, is “the most important component in giving wood its mechanical properties. And because it’s so inexpensive, it’s biorenewable, biodegradable, and also very chemically versatile, it’s used in a lot of products. Cellulose and its derivatives are used in pharmaceuticals, medical devices, as food additives, building materials, clothing — all sorts of different areas. And a lot of these kinds of products would benefit from the kind of customization that additive manufacturing [3-D printing] enables.”

Meanwhile, 3-D printing technology is rapidly growing. Among other benefits, it “allows you to individually customize each product you make,” Pattinson says.

Using cellulose as a material for additive manufacturing is not a new idea, and many researchers have attempted this but faced major obstacles. When heated, cellulose thermally decomposes before it becomes flowable, partly because of the hydrogen bonds that exist between the cellulose molecules. The intermolecular bonding also makes high-concentration cellulose solutions too viscous to easily extrude.

Instead, the MIT team chose to work with cellulose acetate — a material that is easily made from cellulose and is already widely produced and readily available. Essentially, the number of hydrogen bonds in this material has been reduced by the acetate groups. Cellulose acetate can be dissolved in acetone and extruded through a nozzle. As the acetone quickly evaporates, the cellulose acetate solidifies in place. A subsequent optional treatment replaces the acetate groups and increases the strength of the printed parts.

“After we 3-D print, we restore the hydrogen bonding network through a sodium hydroxide treatment,” Pattinson says. “We find that the strength and toughness of the parts we get … are greater than many commonly used materials” for 3-D printing, including acrylonitrile butadiene styrene (ABS) and polylactic acid (PLA).

To demonstrate the chemical versatility of the production process, Pattinson and Hart added an extra dimension to the innovation. By adding a small amount of antimicrobial dye to the cellulose acetate ink, they 3-D-printed a pair of surgical tweezers with antimicrobial functionality.

“We demonstrated that the parts kill bacteria when you shine fluorescent light on them,” Pattinson says. Such custom-made tools “could be useful for remote medical settings where there’s a need for surgical tools but it’s difficult to deliver new tools as they break, or where there’s a need for customized tools. And with the antimicrobial properties, if the sterility of the operating room is not ideal the antimicrobial function could be essential,” he says.

Because most existing extrusion-based 3-D printers rely on heating polymer to make it flow, their production speed is limited by the amount of heat that can be delivered to the polymer without damaging it. This room-temperature cellulose process, which simply relies on evaporation of the acetone to solidify the part, could potentially be faster, Pattinson says. And various methods could speed it up even further, such as laying down thin ribbons of material to maximize surface area, or blowing hot air over it to speed evaporation. A production system would also seek to recover the evaporated acetone to make the process more cost effective and environmentally friendly.

Cellulose acetate is already widely available as a commodity product. In bulk, the material is comparable in price to that of thermoplastics used for injection molding, and it’s much less expensive than the typical filament materials used for 3-D printing, the researchers say. This, combined with the room-temperature conditions of the process and the ability to functionalize cellulose in a variety of ways, could make it commercially attractive.

Here’s a link to and a citation for the paper,

Additive Manufacturing of Cellulosic Materials with Robust Mechanics and Antimicrobial Functionality by Sebastian W. Pattinson and A. John Hart. Advanced Materials Technologies DOI: 10.1002/admt.201600084 Version of Record online: 30 JAN 2017

© 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim

This paper is behind a paywall.

CRISPR patent decision: Harvard’s and MIT’s Broad Institute victorious—for now

I have written about the CRISPR patent tussle (Harvard & MIT’s [Massachusetts Institute of Technology] Broad Institute vs the University of California at Berkeley) previously in a Jan. 6, 2015 posting and in a more detailed May 14, 2015 posting. I also mentioned (in a Jan. 17, 2017 posting) CRISPR and its patent issues in the context of a posting about a Slate.com series on Frankenstein and the novel’s applicability to our own time. This patent fight is being bitterly fought as fortunes are at stake.

It seems a decision has been made regarding the CRISPR patent claims. From a Feb. 17, 2017 article by Charmaine Distor for The Science Times,

After an intense court battle, the US Patent and Trademark Office (USPTO) released its ruling on February 15 [2017]. The rights for the CRISPR-Cas9 gene editing technology were handed over to the Broad Institute of Harvard University and the Massachusetts Institute of Technology (MIT).

According to an article in Nature, the said court battle was between the Broad Institute and the University of California. The two institutions are fighting over the intellectual property rights for the CRISPR patent. The case between the two started when the patent was first awarded to the Broad Institute despite the University of California having applied first for the CRISPR patent.

Heidi Ledford’s Feb. 17, 2017 article for Nature provides more insight into the situation (Note: Links have been removed),

It [USPTO] ruled that the Broad Institute of Harvard and MIT in Cambridge could keep its patents on using CRISPR–Cas9 in eukaryotic cells. That was a blow to the University of California in Berkeley, which had filed its own patents and had hoped to have the Broad’s thrown out.

The fight goes back to 2012, when Jennifer Doudna at Berkeley, Emmanuelle Charpentier, then at the University of Vienna, and their colleagues outlined how CRISPR–Cas9 could be used to precisely cut isolated DNA. In 2013, Feng Zhang at the Broad and his colleagues — and other teams — showed how it could be adapted to edit DNA in eukaryotic cells such as plants, livestock and humans.

Berkeley filed for a patent earlier, but the USPTO granted the Broad’s patents first — and this week upheld them. There are high stakes involved in the ruling. The holder of key patents could make millions of dollars from CRISPR–Cas9’s applications in industry: already, the technique has sped up genetic research, and scientists are using it to develop disease-resistant livestock and treatments for human diseases.

But the fight for patent rights to CRISPR technology is by no means over. Here are four reasons why.

1. Berkeley can appeal the ruling

2. European patents are still up for grabs

3. Other parties are also claiming patent rights on CRISPR–Cas9

4. CRISPR technology is moving beyond what the patents cover

As for Ledford’s 3rd point, there are an estimated 763 patent families (groups of related patents) claiming CAS9 leading to the distinct possibility that the Broad Institute will be fighting many patent claims in the future.

Once you’ve read Distor’s and Ledford’s articles, you may want to check out Adam Rogers’ and Eric Niiler’s Feb. 16, 2017 CRISPR patent article for Wired,

The fight over who owns the most promising technique for editing genes—cutting and pasting the stuff of life to cure disease and advance scientific knowledge—has been a rough one. A team on the West Coast, at UC Berkeley, filed patents on the method, Crispr-Cas9; a team on the East Coast, based at MIT and the Broad Institute, filed their own patents in 2014 after Berkeley’s, but got them granted first. The Berkeley group contended that this constituted “interference,” and that Berkeley deserved the patent.

At stake: millions, maybe billions of dollars in biotech money and licensing fees, the future of medicine, the future of bioscience. Not nothing. Who will benefit depends on who owns the patents.

On Wednesday [Feb. 15, 2017], the US Patent Trial and Appeal Board kind of, sort of, almost began to answer that question. Berkeley will get the patent for using the system called Crispr-Cas9 in any living cell, from bacteria to blue whales. Broad/MIT gets the patent in eukaryotic cells, which is to say, plants and animals.

It’s … confusing. “The patent that the Broad received is for the use of Crispr gene-editing technology in eukaryotic cells. The patent for the University of California is for all cells,” says Jennifer Doudna, the UC geneticist and co-founder of Caribou Biosciences who co-invented Crispr, on a conference call. Her metaphor: “They have a patent on green tennis balls; we have a patent for all tennis balls.”

Observers didn’t quite buy that topspin. If Caribou is playing tennis, it’s looking like Broad/MIT is Serena Williams.

“UC does not necessarily lose everything, but they’re no doubt spinning the story,” says Robert Cook-Deegan, an expert in genetic policy at Arizona State University’s School for the Future of Innovation in Society. “UC’s claims to eukaryotic uses of Crispr-Cas9 will not be granted in the form they sought. That’s a big deal, and UC was the big loser.”

UC officials said Wednesday [Feb. 15, 2017] that they are studying the 51-page decision and considering whether to appeal. That leaves members of the biotechnology sector wondering who they will have to pay to use Crispr as part of a business—and scientists hoping the outcome won’t somehow keep them from continuing their research.

….

Happy reading!

New iron oxide nanoparticle as an MRI (magnetic resonance imaging) contrast agent

This high-resolution transmission electron micrograph of particles made by the research team shows the particles’ highly uniform size and shape. These are iron oxide particles just 3 nanometers across, coated with a zwitterion layer. Their small size means they can easily be cleared through the kidneys after injection. Courtesy of the researchers

A Feb. 14, 2017 news item on ScienceDaily announces a new MRI (magnetic resonance imaging) contrast agent,

A new, specially coated iron oxide nanoparticle developed by a team at MIT [Massachusetts Institute of Technology] and elsewhere could provide an alternative to conventional gadolinium-based contrast agents used for magnetic resonance imaging (MRI) procedures. In rare cases, the currently used gadolinium agents have been found to produce adverse effects in patients with impaired kidney function.

A Feb. 14, 2017 MIT news release (also on EurekAlert), which originated the news item, provides more technical detail,

The advent of MRI technology, which is used to observe details of specific organs or blood vessels, has been an enormous boon to medical diagnostics over the last few decades. About a third of the 60 million MRI procedures done annually worldwide use contrast-enhancing agents, mostly containing the element gadolinium. While these contrast agents have mostly proven safe over many years of use, some rare but significant side effects have shown up in a very small subset of patients. There may soon be a safer substitute thanks to this new research.

In place of gadolinium-based contrast agents, the researchers have found that they can produce similar MRI contrast with tiny nanoparticles of iron oxide that have been treated with a zwitterion coating. (Zwitterions are molecules that have areas of both positive and negative electrical charges, which cancel out to make them neutral overall.) The findings are being published this week in the Proceedings of the National Academy of Sciences, in a paper by Moungi Bawendi, the Lester Wolfe Professor of Chemistry at MIT; He Wei, an MIT postdoc; Oliver Bruns, an MIT research scientist; Michael Kaul at the University Medical Center Hamburg-Eppendorf in Germany; and 15 others.

Contrast agents, injected into the patient during an MRI procedure and designed to be quickly cleared from the body by the kidneys afterwards, are needed to make fine details of organ structures, blood vessels, and other specific tissues clearly visible in the images. Some agents produce dark areas in the resulting image, while others produce light areas. The primary agents for producing light areas contain gadolinium.

Iron oxide particles have been largely used as negative (dark) contrast agents, but radiologists vastly prefer positive (light) contrast agents such as gadolinium-based agents, as negative contrast can sometimes be difficult to distinguish from certain imaging artifacts and internal bleeding. But while the gadolinium-based agents have become the standard, evidence shows that in some very rare cases they can lead to an untreatable condition called nephrogenic systemic fibrosis, which can be fatal. In addition, evidence now shows that the gadolinium can build up in the brain, and although no effects of this buildup have yet been demonstrated, the FDA is investigating it for potential harm.

“Over the last decade, more and more side effects have come to light” from the gadolinium agents, Bruns says, so that led the research team to search for alternatives. “None of these issues exist for iron oxide,” at least none that have yet been detected, he says.

The key new finding by this team was to combine two existing techniques: making very tiny particles of iron oxide, and attaching certain molecules (called surface ligands) to the outsides of these particles to optimize their characteristics. The iron oxide inorganic core is small enough to produce a pronounced positive contrast in MRI, and the zwitterionic surface ligand, which was recently developed by Wei and coworkers in the Bawendi research group, makes the iron oxide particles water-soluble, compact, and biocompatible.

The combination of a very tiny iron oxide core and an ultrathin ligand shell leads to a total hydrodynamic diameter of 4.7 nanometers, below the 5.5-nanometer renal clearance threshold. This means that the coated iron oxide should quickly clear through the kidneys and not accumulate. This renal clearance property is an important feature where the particles perform comparably to gadolinium-based contrast agents.
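Those figures imply a sub-nanometre ligand shell. Here is the arithmetic, using only the numbers quoted above and in the image caption (illustrative only):

```python
# Arithmetic from the figures quoted above (illustrative only).
core = 3.0       # nm, iron oxide core diameter (from the image caption)
total = 4.7      # nm, total hydrodynamic diameter
threshold = 5.5  # nm, renal clearance threshold

shell = (total - core) / 2
print(f"ligand shell: ~{shell:.2f} nm per side")      # ~0.85 nm
print(f"below renal threshold: {total < threshold}")  # True
```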

Now that initial tests have demonstrated the particles’ effectiveness as contrast agents, Wei and Bruns say the next step will be to do further toxicology testing to show the particles’ safety, and to continue to improve the characteristics of the material. “It’s not perfect. We have more work to do,” Bruns says. But because iron oxide has been used for so long and in so many ways, even as an iron supplement, any negative effects could likely be treated by well-established protocols, the researchers say. If all goes well, the team is considering setting up a startup company to bring the material to production.

For some patients who are currently excluded from getting MRIs because of potential side effects of gadolinium, the new agents “could allow those patients to be eligible again” for the procedure, Bruns says. And, if it does turn out that the accumulation of gadolinium in the brain has negative effects, an overall phase-out of gadolinium for such uses could be needed. “If that turned out to be the case, this could potentially be a complete replacement,” he says.

Ralph Weissleder, a physician at Massachusetts General Hospital who was not involved in this work, says, “The work is of high interest, given the limitations of gadolinium-based contrast agents, which typically have short vascular half-lives and may be contraindicated in renally compromised patients.”

The research team included researchers in MIT’s chemistry, biological engineering, nuclear science and engineering, brain and cognitive sciences, and materials science and engineering departments and its program in Health Sciences and Technology; and at the University Medical Center Hamburg-Eppendorf; Brown University; and the Massachusetts General Hospital. It was supported by the MIT-Harvard NIH Center for Cancer Nanotechnology, the Army Research Office through MIT’s Institute for Soldier Nanotechnologies, the NIH-funded Laser Biomedical Research Center, the MIT Deshpande Center, and the European Union Seventh Framework Program.

Here’s a link to and a citation for the paper,

Exceedingly small iron oxide nanoparticles as positive MRI contrast agents by He Wei, Oliver T. Bruns, Michael G. Kaul, Eric C. Hansen, Mariya Barch, Agata Wiśniowska, Ou Chen, Yue Chen, Nan Li, Satoshi Okada, Jose M. Cordero, Markus Heine, Christian T. Farrar, Daniel M. Montana, Gerhard Adam, Harald Ittrich, Alan Jasanoff, Peter Nielsen, and Moungi G. Bawendi. PNAS February 13, 2017 doi: 10.1073/pnas.1620145114 Published online before print February 13, 2017

This paper is behind a paywall.

R.I.P. Mildred Dresselhaus, Queen of Carbon

I’ve been hearing about Mildred Dresselhaus, professor emerita (retired professor) at the Massachusetts Institute of Technology (MIT), just about as long as I’ve been researching and writing about nanotechnology (about 10 years total*, including the work for my master’s project and the almost eight years on this blog).

She died on Monday, Feb. 20, 2017 at the age of 86 having broken through barriers for those of her gender, barriers for her subject area, and barriers for her age.

Mark Anderson in his Feb. 22, 2017 obituary for the IEEE (Institute of Electrical and Electronics Engineers) Spectrum website provides a brief overview of her extraordinary life and accomplishments,

Called the “Queen of Carbon Science,” Dresselhaus pioneered the study of carbon nanostructures at a time when studying physical and material properties of commonplace atoms like carbon was out of favor. Her visionary perspectives on the sixth atom in the periodic table—including exploring individual layers of carbon atoms (precursors to graphene), developing carbon fibers stronger than steel, and revealing new carbon structures that were ultimately developed into buckyballs and nanotubes—invigorated the field.

“Millie Dresselhaus began life as the child of poor Polish immigrants in the Bronx; by the end, she was Institute Professor Emerita, the highest distinction awarded by the MIT faculty. A physicist, materials scientist, and electrical engineer, she was known as the ‘Queen of Carbon’ because her work paved the way for much of today’s carbon-based nanotechnology,” MIT president Rafael Reif said in a prepared statement.

Friends and colleagues describe Dresselhaus as a gifted instructor as well as a tireless and inspired researcher. And her boundless generosity toward colleagues, students, and girls and women pursuing careers in science is legendary.

In 1963, Dresselhaus began her own career studying carbon by publishing a paper on graphite in the IBM Journal for Research and Development, a foundational work in the history of nanotechnology. To this day, her studies of the electronic structure of this material serve as a reference point for explorations of the electronic structure of fullerenes and carbon nanotubes. Coauthor, with her husband Gene Dresselhaus, of a leading book on carbon fibers, she began studying the laser vaporization of carbon and the “carbon clusters” that resulted. Researchers who followed her lead discovered a 60-carbon structure that was soon identified as the icosahedral “soccer ball” molecular configuration known as buckminsterfullerene, or buckyball. In 1991, Dresselhaus further suggested that fullerene could be elongated as a tube, and she outlined these imagined objects’ symmetries. Not long after, researchers announced the discovery of carbon nanotubes.

When she began her nearly half-century career at MIT, as a visiting professor, women made up just 4 percent of the undergraduate student population. So Dresselhaus began working toward the improvement of living conditions for women students at the university. Through her leadership, MIT adopted an equal and joint admission process for women and men. (Previously, MIT had propounded the self-fulfilling prophecy of harboring more stringent requirements for women based on less dormitory space and perceived poorer performance.) And so promoting women in STEM—before it was ever called STEM—became one of her passions. Serving as president of the American Physical Society, she spearheaded and launched initiatives like the Committee on the Status of Women in Physics and the society’s more informal committees of visiting women physicists on campuses around the United States, which have increased the female faculty and student populations on the campuses they visit.

If you have the time, please read Anderson’s piece in its entirety.

One fact that has impressed me greatly is that Dresselhaus kept working into her eighties. In an April 27, 2012 posting, I featured a paper she published at the age of 82; the MIT write-up at the time described her as a professor, not a professor emerita. I later featured Dresselhaus in a May 31, 2012 posting when she was awarded the Kavli Prize for Nanoscience.

It seems she worked almost to the end. Recently, GE (General Electric) posted a video “What If Scientists Were Celebrities?” starring Mildred Dresselhaus,

H/t Mark Anderson’s Feb. 22, 2017 obituary. The video was posted on Feb. 8, 2017.

Goodbye to the Queen of Carbon!

*The word ‘total’ added on March 14, 2022.

Fusing graphene flakes for 3D graphene structures that are 10x as strong as steel

A Jan. 6, 2017 news item on Nanowerk describes how geometry may have as much or more to do with the strength of 3D graphene structures than the graphene used to create them,

A team of researchers at MIT [Massachusetts Institute of Technology] has designed one of the strongest lightweight materials known, by compressing and fusing flakes of graphene, a two-dimensional form of carbon. The new material, a sponge-like configuration with a density of just 5 percent, can have a strength 10 times that of steel.

In its two-dimensional form, graphene is thought to be the strongest of all known materials. But researchers until now have had a hard time translating that two-dimensional strength into useful three-dimensional materials.

The new findings show that the crucial aspect of the new 3-D forms has more to do with their unusual geometrical configuration than with the material itself, which suggests that similar strong, lightweight materials could be made from a variety of materials by creating similar geometric features.

The findings are being reported today [Jan. 6, 2017] in the journal Science Advances, in a paper by Markus Buehler, the head of MIT’s Department of Civil and Environmental Engineering (CEE) and the McAfee Professor of Engineering; Zhao Qin, a CEE research scientist; Gang Seob Jung, a graduate student; and Min Jeong Kang MEng ’16, a recent graduate.

A Jan. 6, 2017 MIT news release (also on EurekAlert), which originated the news item, describes the research in more detail,

Other groups had suggested the possibility of such lightweight structures, but lab experiments so far had failed to match predictions, with some results exhibiting several orders of magnitude less strength than expected. The MIT team decided to solve the mystery by analyzing the material’s behavior down to the level of individual atoms within the structure. They were able to produce a mathematical framework that very closely matches experimental observations.

Two-dimensional materials — basically flat sheets that are just one atom in thickness but can be indefinitely large in the other dimensions — have exceptional strength as well as unique electrical properties. But because of their extraordinary thinness, “they are not very useful for making 3-D materials that could be used in vehicles, buildings, or devices,” Buehler says. “What we’ve done is to realize the wish of translating these 2-D materials into three-dimensional structures.”

The team was able to compress small flakes of graphene using a combination of heat and pressure. This process produced a strong, stable structure whose form resembles that of some corals and microscopic creatures called diatoms. These shapes, which have an enormous surface area in proportion to their volume, proved to be remarkably strong. “Once we created these 3-D structures, we wanted to see what’s the limit — what’s the strongest possible material we can produce,” says Qin. To do that, they created a variety of 3-D models and then subjected them to various tests. In computational simulations, which mimic the loading conditions in the tensile and compression tests performed in a tensile loading machine, “one of our samples has 5 percent the density of steel, but 10 times the strength,” Qin says.
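As a quick aside, here’s what “5 percent the density of steel, but 10 times the strength” means for strength-to-weight ratio, in a minimal Python sketch; the steel figures are generic textbook ballpark values I’ve assumed for illustration, not numbers from the paper,

# Back-of-envelope specific strength (strength divided by density).
# The steel values are generic textbook ballparks, assumed for illustration.
steel_density = 7850.0    # kg/m^3, typical structural steel
steel_strength = 400e6    # Pa, rough tensile strength of mild steel

assembly_density = 0.05 * steel_density     # "5 percent the density of steel"
assembly_strength = 10.0 * steel_strength   # "10 times the strength"

ratio = (assembly_strength / assembly_density) / (steel_strength / steel_density)
print(f"specific strength advantage: {ratio:.0f}x")  # 10 / 0.05 = 200x

In other words, per kilogram the simulated sample outperforms steel by a factor of about 200, which is the whole point of chasing low-density architectures.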

Buehler says that what happens to their 3-D graphene material, which is composed of curved surfaces under deformation, resembles what would happen with sheets of paper. Paper has little strength along its length and width, and can be easily crumpled up. But when made into certain shapes, for example rolled into a tube, suddenly the strength along the length of the tube is much greater and can support substantial weight. Similarly, the geometric arrangement of the graphene flakes after treatment naturally forms a very strong configuration.
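The paper analogy can be made quantitative with a textbook beam-theory comparison (my illustration, not a calculation from the paper). Bending stiffness is governed by the second moment of area I:

\[ I_{\text{flat sheet}} = \frac{b t^3}{12}, \qquad I_{\text{thin-walled tube}} \approx \pi r^3 t \]

For the same amount of material (tube circumference 2\pi r equal to sheet width b), the ratio works out to 6(r/t)^2, so rolling a thin sheet of thickness t into a tube of radius r much larger than t boosts bending stiffness by orders of magnitude. Geometry, not the raw material, does the work.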

The new configurations have been made in the lab using a high-resolution, multimaterial 3-D printer. They were mechanically tested for their tensile and compressive properties, and their mechanical response under loading was simulated using the team’s theoretical models. The results from the experiments and simulations matched accurately.

The new, more accurate results, based on atomistic computational modeling by the MIT team, ruled out a possibility proposed previously by other teams: that it might be possible to make 3-D graphene structures so lightweight that they would actually be lighter than air, and could be used as a durable replacement for helium in balloons. The current work shows, however, that at such low densities, the material would not have sufficient strength and would collapse from the surrounding air pressure.
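The arithmetic behind that collapse is unforgiving. Here is a minimal sketch, assuming standard sea-level air density and a common reference density for graphite (neither figure is from the paper),

# How empty would a graphene structure have to be to float in air?
air_density = 1.225        # kg/m^3, standard sea-level air
graphite_density = 2266.0  # kg/m^3, common reference value for graphite

# The structure's bulk density must undercut that of the air it displaces.
max_solid_fraction = air_density / graphite_density
print(f"max solid fraction: {max_solid_fraction:.4%}")  # roughly 0.05%

At roughly 0.05 percent relative density, the walls are so sparse that, as the MIT simulations indicate, atmospheric pressure alone crushes the structure.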

But many other possible applications of the material could eventually be feasible, the researchers say, for uses that require a combination of extreme strength and light weight. “You could either use the real graphene material or use the geometry we discovered with other materials, like polymers or metals,” Buehler says, to gain similar advantages of strength combined with advantages in cost, processing methods, or other material properties (such as transparency or electrical conductivity).

“You can replace the material itself with anything,” Buehler says. “The geometry is the dominant factor. It’s something that has the potential to transfer to many things.”

The unusual geometric shapes that graphene naturally forms under heat and pressure look something like a Nerf ball — round, but full of holes. These shapes, known as gyroids, are so complex that “actually making them using conventional manufacturing methods is probably impossible,” Buehler says. The team used 3-D-printed models of the structure, enlarged to thousands of times their natural size, for testing purposes.
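For readers who want to see a gyroid rather than imagine one, the shape is commonly approximated by a simple trigonometric level set, and a few lines of Python will voxelize it; this is the standard textbook approximation, not the team’s actual model,

import numpy as np

# Standard level-set approximation of a gyroid:
#   sin(x)cos(y) + sin(y)cos(z) + sin(z)cos(x) = 0
n = 64  # voxels per side of one periodic cell
x, y, z = np.meshgrid(*[np.linspace(0, 2 * np.pi, n)] * 3, indexing="ij")
g = np.sin(x) * np.cos(y) + np.sin(y) * np.cos(z) + np.sin(z) * np.cos(x)

t = 0.6                 # wall thickness in level-set units (a free choice)
solid = np.abs(g) < t   # boolean voxel mask of the gyroid walls
print(f"relative density: {solid.mean():.2f}")

Thickening the walls (raising t) raises the relative density, which is the knob against which the strength and stiffness of these structures is measured.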

For actual synthesis, the researchers say, one possibility is to use polymer or metal particles as templates, coat them with graphene by chemical vapor deposition before heat and pressure treatments, and then chemically or physically remove the polymer or metal phases to leave 3-D graphene in the gyroid form. For this, the computational model given in the current study provides a guideline for evaluating the mechanical quality of the synthesis output.

The same geometry could even be applied to large-scale structural materials, they suggest. For example, concrete for a structure such as a bridge might be made with this porous geometry, providing comparable strength at a fraction of the weight. This approach would have the additional benefit of providing good insulation because of the large amount of enclosed airspace within it.

Because the shape is riddled with very tiny pore spaces, the material might also find application in some filtration systems, for either water or chemical processing. The mathematical descriptions derived by this group could facilitate the development of a variety of applications, the researchers say.

“This is an inspiring study on the mechanics of 3-D graphene assembly,” says Huajian Gao, a professor of engineering at Brown University, who was not involved in this work. “The combination of computational modeling with 3-D-printing-based experiments used in this paper is a powerful new approach in engineering research. It is impressive to see the scaling laws initially derived from nanoscale simulations resurface in macroscale experiments under the help of 3-D printing,” he says.

This work, Gao says, “shows a promising direction of bringing the strength of 2-D materials and the power of material architecture design together.”
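The “scaling laws” Gao mentions are power-law relations between mechanical properties and density. The paper’s own fitted exponents aside, the classic Gibson–Ashby relation for open-cell cellular solids gives the flavour (a generic illustration, not the paper’s law):

\[ \frac{E}{E_s} \sim \left(\frac{\rho}{\rho_s}\right)^{2} \]

where E and \rho are the modulus and density of the porous structure and E_s, \rho_s those of the solid it is made from. Because the exponent exceeds 1, stiffness and strength fall off faster than density does, which is exactly why the lighter-than-air version discussed above fails.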

There’s a video describing the work,

Here’s a link to and a citation for the paper,

The mechanics and design of a lightweight three-dimensional graphene assembly by Zhao Qin, Gang Seob Jung, Min Jeong Kang, and Markus J. Buehler. Science Advances 06 Jan 2017: Vol. 3, no. 1, e1601536 DOI: 10.1126/sciadv.1601536

This paper appears to be open access.