Tag Archives: Cornell University

US Dept. of Agriculture announces its nanotechnology research grants

I don’t always stumble across the US Department of Agriculture’s nanotechnology research grant announcements but I’m always grateful when I do as it’s good to find out about nanotechnology research taking place in the agricultural sector. From a July 21, 2017 news item on Nanowerk,

The U.S. Department of Agriculture’s (USDA) National Institute of Food and Agriculture (NIFA) today announced 13 grants totaling $4.6 million for research on the next generation of agricultural technologies and systems to meet the growing demand for food, fuel, and fiber. The grants are funded through NIFA’s Agriculture and Food Research Initiative (AFRI), authorized by the 2014 Farm Bill.

“Nanotechnology is being rapidly implemented in medicine, electronics, energy, and biotechnology, and it has huge potential to enhance the agricultural sector,” said NIFA Director Sonny Ramaswamy. “NIFA research investments can help spur nanotechnology-based improvements to ensure global nutritional security and prosperity in rural communities.”

A July 20, 2017 USDA news release, which originated the news item, lists this year’s grants and provides a brief description of a few of the newly and previously funded projects,

Fiscal year 2016 grants being announced include:

Nanotechnology for Agricultural and Food Systems

  • Kansas State University, Manhattan, Kansas, $450,200
  • Wichita State University, Wichita, Kansas, $340,000
  • University of Massachusetts, Amherst, Massachusetts, $444,550
  • University of Nevada, Las Vegas, Nevada, $150,000
  • North Dakota State University, Fargo, North Dakota, $149,000
  • Cornell University, Ithaca, New York, $455,000
  • Cornell University, Ithaca, New York, $450,200
  • Oregon State University, Corvallis, Oregon, $402,550
  • University of Pennsylvania, Philadelphia, Pennsylvania, $405,055
  • Gordon Research Conferences, West Kingston, Rhode Island, $45,000
  • The University of Tennessee, Knoxville, Tennessee, $450,200
  • Utah State University, Logan, Utah, $450,200
  • The George Washington University, Washington, D.C., $450,200

Project details can be found at the NIFA website.

Among the grants, a University of Pennsylvania project will engineer cellulose nanomaterials [emphasis mine] with high toughness for potential use in building materials, automotive components, and consumer products. A University of Nevada-Las Vegas project will develop a rapid, sensitive test to detect Salmonella typhimurium to enhance food supply safety.

Previously funded grants include an Iowa State University project in which a low-cost and disposable biosensor made out of nanoparticle graphene that can detect pesticides in soil was developed. The biosensor also has the potential for use in the biomedical, environmental, and food safety fields. University of Minnesota researchers created a sponge that uses nanotechnology to quickly absorb mercury, as well as bacterial and fungal microbes from polluted water. The sponge can be used on tap water, industrial wastewater, and in lakes. It converts contaminants into nontoxic waste that can be disposed in a landfill.

NIFA invests in and advances agricultural research, education, and extension and promotes transformative discoveries that solve societal challenges. NIFA support for the best and brightest scientists and extension personnel has resulted in user-inspired, groundbreaking discoveries that combat childhood obesity, improve and sustain rural economic growth, address water availability issues, increase food production, find new sources of energy, mitigate climate variability and ensure food safety. To learn more about NIFA’s impact on agricultural science, visit www.nifa.usda.gov/impacts, sign up for email updates or follow us on Twitter @USDA_NIFA, #NIFAImpacts.

Given my interest in nanocellulose materials (Canada was/is a leader in the production of cellulose nanocrystals [CNC] but there has been little news about Canadian research into CNC applications), I used the NIFA link to access the table listing the grants and clicked on ‘brief’ in the View column in the University of Pennsylvania row to find this description of the project,

ENGINEERING CELLULOSE NANOMATERIALS WITH HIGH TOUGHNESS

NON-TECHNICAL SUMMARY: Cellulose nanofibrils (CNFs) are natural materials with exceptional mechanical properties that can be obtained from renewable plant-based resources. CNFs are stiff, strong, and lightweight, thus they are ideal for use in structural materials. In particular, there is a significant opportunity to use CNFs to realize polymer composites with improved toughness and resistance to fracture. The overall goal of this project is to establish an understanding of fracture toughness enhancement in polymer composites reinforced with CNFs. A key outcome of this work will be process – structure – fracture property relationships for CNF-reinforced composites. The knowledge developed in this project will enable a new class of tough CNF-reinforced composite materials with applications in areas such as building materials, automotive components, and consumer products. The composite materials that will be investigated are at the convergence of nanotechnology and bio-sourced material trends. Emerging nanocellulose technologies have the potential to move biomass materials into high value-added applications and entirely new markets.

It’s not the only nanocellulose material project being funded in this round, there’s this at North Dakota State University, from the NIFA ‘brief’ project description page,

NOVEL NANOCELLULOSE BASED FIRE RETARDANT FOR POLYMER COMPOSITES

NON-TECHNICAL SUMMARY: Synthetic polymers are quite vulnerable to fire. There are 2.4 million reported fires, resulting in 7.8 billion dollars of direct property loss, an estimated 30 billion dollars of indirect loss, 29,000 civilian injuries, 101,000 firefighter injuries and 6000 civilian fatalities annually in the U.S. There is an urgent need for a safe, potent, and reliable fire retardant (FR) system that can be used in commodity polymers to reduce their flammability and protect lives and properties. The goal of this project is to develop a novel, safe and biobased FR system using agricultural and woody biomass. The project is divided into three major tasks. The first is to manufacture zinc oxide (ZnO) coated cellulose nanoparticles and evaluate their morphological, chemical, structural and thermal characteristics. The second task will be to design and manufacture polymer composites containing nano-sized zinc oxide and cellulose crystals. Finally, the third task will be to test the fire retardancy and mechanical properties of the composites. We believe that the presence of zinc oxide will limit the oxygen supply by charring and shielding the surface, and that the cellulose nanocrystals will make the composites strong. The outcome of this project will help in developing a safe, reliable and biobased fire retardant for consumer goods, automotive and building products, and will help save human lives and reduce property damage due to fire.

One day, I hope to hear about Canadian research into applications for nanocellulose materials. (fingers crossed for good luck)

IBM to build brain-inspired AI supercomputing system equal to 64 million neurons for US Air Force

This is the second IBM computer announcement I’ve stumbled onto within the last 4 weeks or so, which seems like a veritable deluge given that the last time I wrote about IBM’s computing efforts was in an Oct. 8, 2015 posting about carbon nanotubes. I believe that, up until now, that was my most recent posting about IBM and computers.

Moving on to the news, here’s more from a June 23, 2017 news item on Nanotechnology Now,

IBM (NYSE: IBM) and the U.S. Air Force Research Laboratory (AFRL) today [June 23, 2017] announced they are collaborating on a first-of-a-kind brain-inspired supercomputing system powered by a 64-chip array of the IBM TrueNorth Neurosynaptic System. The scalable platform IBM is building for AFRL will feature an end-to-end software ecosystem designed to enable deep neural-network learning and information discovery. The system’s advanced pattern recognition and sensory processing power will be the equivalent of 64 million neurons and 16 billion synapses, while the processor component will consume the energy equivalent of a dim light bulb – a mere 10 watts to power.

A June 23, 2017 IBM news release, which originated the news item, describes the proposed collaboration, which is based on IBM’s TrueNorth brain-inspired chip architecture (see my Aug. 8, 2014 posting for more about TrueNorth),

IBM researchers believe the brain-inspired, neural network design of TrueNorth will be far more efficient for pattern recognition and integrated sensory processing than systems powered by conventional chips. AFRL is investigating applications of the system in embedded, mobile, autonomous settings where, today, size, weight and power (SWaP) are key limiting factors.

The IBM TrueNorth Neurosynaptic System can efficiently convert data (such as images, video, audio and text) from multiple, distributed sensors into symbols in real time. AFRL will combine this “right-brain” perception capability of the system with the “left-brain” symbol processing capabilities of conventional computer systems. The large scale of the system will enable both “data parallelism” where multiple data sources can be run in parallel against the same neural network and “model parallelism” where independent neural networks form an ensemble that can be run in parallel on the same data.
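(A quick aside for readers who think in code: here’s a minimal conceptual sketch of the difference between those two modes. It is not the TrueNorth software stack; the network objects and thread pool are stand-ins for illustration only.)

```python
# Conceptual sketch only (not IBM's TrueNorth ecosystem): "data parallelism"
# runs one network over many inputs at once, while "model parallelism" runs an
# ensemble of networks over the same input and combines their answers.
from concurrent.futures import ThreadPoolExecutor

def data_parallel(network, frames):
    """One classifier, many sensor frames processed side by side."""
    with ThreadPoolExecutor() as pool:
        return list(pool.map(network, frames))

def model_parallel(networks, frame):
    """Several classifiers vote on a single frame; the majority label wins."""
    with ThreadPoolExecutor() as pool:
        votes = list(pool.map(lambda net: net(frame), networks))
    return max(set(votes), key=votes.count)
```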

“AFRL was the earliest adopter of TrueNorth for converting data into decisions,” said Daniel S. Goddard, director, information directorate, U.S. Air Force Research Lab. “The new neurosynaptic system will be used to enable new computing capabilities important to AFRL’s mission to explore, prototype and demonstrate high-impact, game-changing technologies that enable the Air Force and the nation to maintain its superior technical advantage.”

“The evolution of the IBM TrueNorth Neurosynaptic System is a solid proof point in our quest to lead the industry in AI hardware innovation,” said Dharmendra S. Modha, IBM Fellow, chief scientist, brain-inspired computing, IBM Research – Almaden. “Over the last six years, IBM has expanded the number of neurons per system from 256 to more than 64 million – an 800 percent annual increase over six years.”

The system fits in a 4U-high (7”) space in a standard server rack and eight such systems will enable the unprecedented scale of 512 million neurons per rack. A single processor in the system consists of 5.4 billion transistors organized into 4,096 neural cores creating an array of 1 million digital neurons that communicate with one another via 256 million electrical synapses. For the CIFAR-100 dataset, TrueNorth achieves near state-of-the-art accuracy, while running at >1,500 frames/s and using 200 mW (effectively >7,000 frames/s per Watt) – orders of magnitude lower speed and energy than a conventional computer running inference on the same neural network.
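For what it’s worth, the quoted figures hang together. Here’s a quick back-of-the-envelope check using only the numbers in the release,

```python
# Sanity check of the figures quoted in the news release (all inputs taken
# directly from the text above).
chips = 64
neurons_per_chip = 1_000_000            # "1 million digital neurons" per processor
synapses_per_chip = 256_000_000         # "256 million electrical synapses" per processor
frames_per_second = 1500                # CIFAR-100 throughput
power_watts = 0.200                     # 200 mW

print(chips * neurons_per_chip)         # 64,000,000 neurons for the 64-chip array
print(chips * synapses_per_chip)        # 16,384,000,000, i.e. roughly 16 billion synapses
print(frames_per_second / power_watts)  # 7,500 frames/s per watt, consistent with ">7,000"
```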

The IBM TrueNorth Neurosynaptic System was originally developed under the auspices of Defense Advanced Research Projects Agency’s (DARPA) Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) program in collaboration with Cornell University. In 2016, the TrueNorth Team received the inaugural Misha Mahowald Prize for Neuromorphic Engineering and TrueNorth was accepted into the Computer History Museum.  Research with TrueNorth is currently being performed by more than 40 universities, government labs, and industrial partners on five continents.

There is an IBM video accompanying this news release, which seems more promotional than informational,

The IBM scientist featured in the video has a Dec. 19, 2016 posting on an IBM research blog which provides context for this collaboration with AFRL,

2016 was a big year for brain-inspired computing. My team and I proved in our paper “Convolutional networks for fast, energy-efficient neuromorphic computing” that the value of this breakthrough is that it can perform neural network inference at unprecedented ultra-low energy consumption. Simply stated, our TrueNorth chip’s non-von Neumann architecture mimics the brain’s neural architecture — giving it unprecedented efficiency and scalability over today’s computers.

The brain-inspired TrueNorth processor [is] a 70mW reconfigurable silicon chip with 1 million neurons, 256 million synapses, and 4096 parallel and distributed neural cores. For systems, we present a scale-out system loosely coupling 16 single-chip boards and a scale-up system tightly integrating 16 chips in a 4×4 configuration by exploiting TrueNorth’s native tiling.

For the scale-up systems we summarize our approach to physical placement of neural network, to reduce intra- and inter-chip network traffic. The ecosystem is in use at over 30 universities and government / corporate labs. Our platform is a substrate for a spectrum of applications from mobile and embedded computing to cloud and supercomputers.
TrueNorth Ecosystem for Brain-Inspired Computing: Scalable Systems, Software, and Applications

TrueNorth, once loaded with a neural network model, can be used in real-time as a sensory streaming inference engine, performing rapid and accurate classifications while using minimal energy. TrueNorth’s 1 million neurons consume only 70 mW, which is like having a neurosynaptic supercomputer the size of a postage stamp that can run on a smartphone battery for a week.

Recently, in collaboration with Lawrence Livermore National Laboratory, U.S. Air Force Research Laboratory, and U.S. Army Research Laboratory, we published our fifth paper at IEEE’s prestigious Supercomputing 2016 conference that summarizes the results of the team’s 12.5-year journey (see the associated graphic) to unlock this value proposition. [keep scrolling for the graphic]

Applying the mind of a chip

Three of our partners, U.S. Army Research Lab, U.S. Air Force Research Lab and Lawrence Livermore National Lab, contributed sections to the Supercomputing paper each showcasing a different TrueNorth system, as summarized by my colleagues Jun Sawada, Brian Taba, Pallab Datta, and Ben Shaw:

U.S. Army Research Lab (ARL) prototyped a computational offloading scheme to illustrate how TrueNorth’s low power profile enables computation at the point of data collection. Using the single-chip NS1e board and an Android tablet, ARL researchers created a demonstration system that allows visitors to their lab to hand write arithmetic expressions on the tablet, with handwriting streamed to the NS1e for character recognition, and recognized characters sent back to the tablet for arithmetic calculation.

Of course, the point here is not to make a handwriting calculator, it is to show how TrueNorth’s low power and real time pattern recognition might be deployed at the point of data collection to reduce latency, complexity and transmission bandwidth, as well as back-end data storage requirements in distributed systems.

U.S. Air Force Research Lab (AFRL) contributed another prototype application utilizing a TrueNorth scale-out system to perform a data-parallel text extraction and recognition task. In this application, an image of a document is segmented into individual characters that are streamed to AFRL’s NS1e16 TrueNorth system for parallel character recognition. Classification results are then sent to an inference-based natural language model to reconstruct words and sentences. This system can process 16,000 characters per second! AFRL plans to implement the word and sentence inference algorithms on TrueNorth, as well.

Lawrence Livermore National Lab (LLNL) has a 16-chip NS16e scale-up system to explore the potential of post-von Neumann computation through larger neural models and more complex algorithms, enabled by the native tiling characteristics of the TrueNorth chip. For the Supercomputing paper, they contributed a single-chip application performing in-situ process monitoring in an additive manufacturing process. LLNL trained a TrueNorth network to recognize seven classes related to track weld quality in welds produced by a selective laser melting machine. Real-time weld quality determination allows for closed-loop process improvement and immediate rejection of defective parts. This is one of several applications LLNL is developing to showcase TrueNorth as a scalable platform for low-power, real-time inference.

[downloaded from https://www.ibm.com/blogs/research/2016/12/the-brains-architecture-efficiency-on-a-chip/] Courtesy: IBM

I gather this 2017 announcement is the latest milestone on the TrueNorth journey.

An explanation of neural networks from the Massachusetts Institute of Technology (MIT)

I always enjoy the MIT ‘explainers’ and have been a little sad that I haven’t stumbled across one in a while. Until now, that is. Here’s an April 14, 2017 neural network ‘explainer’ (in its entirety) by Larry Hardesty (?),

In the past 10 years, the best-performing artificial-intelligence systems — such as the speech recognizers on smartphones or Google’s latest automatic translator — have resulted from a technique called “deep learning.”

Deep learning is in fact a new name for an approach to artificial intelligence called neural networks, which have been going in and out of fashion for more than 70 years. Neural networks were first proposed in 1944 by Warren McCullough and Walter Pitts, two University of Chicago researchers who moved to MIT in 1952 as founding members of what’s sometimes called the first cognitive science department.

Neural nets were a major area of research in both neuroscience and computer science until 1969, when, according to computer science lore, they were killed off by the MIT mathematicians Marvin Minsky and Seymour Papert, who a year later would become co-directors of the new MIT Artificial Intelligence Laboratory.

The technique then enjoyed a resurgence in the 1980s, fell into eclipse again in the first decade of the new century, and has returned like gangbusters in the second, fueled largely by the increased processing power of graphics chips.

“There’s this idea that ideas in science are a bit like epidemics of viruses,” says Tomaso Poggio, the Eugene McDermott Professor of Brain and Cognitive Sciences at MIT, an investigator at MIT’s McGovern Institute for Brain Research, and director of MIT’s Center for Brains, Minds, and Machines. “There are apparently five or six basic strains of flu viruses, and apparently each one comes back with a period of around 25 years. People get infected, and they develop an immune response, and so they don’t get infected for the next 25 years. And then there is a new generation that is ready to be infected by the same strain of virus. In science, people fall in love with an idea, get excited about it, hammer it to death, and then get immunized — they get tired of it. So ideas should have the same kind of periodicity!”

Weighty matters

Neural nets are a means of doing machine learning, in which a computer learns to perform some task by analyzing training examples. Usually, the examples have been hand-labeled in advance. An object recognition system, for instance, might be fed thousands of labeled images of cars, houses, coffee cups, and so on, and it would find visual patterns in the images that consistently correlate with particular labels.

Modeled loosely on the human brain, a neural net consists of thousands or even millions of simple processing nodes that are densely interconnected. Most of today’s neural nets are organized into layers of nodes, and they’re “feed-forward,” meaning that data moves through them in only one direction. An individual node might be connected to several nodes in the layer beneath it, from which it receives data, and several nodes in the layer above it, to which it sends data.

To each of its incoming connections, a node will assign a number known as a “weight.” When the network is active, the node receives a different data item — a different number — over each of its connections and multiplies it by the associated weight. It then adds the resulting products together, yielding a single number. If that number is below a threshold value, the node passes no data to the next layer. If the number exceeds the threshold value, the node “fires,” which in today’s neural nets generally means sending the number — the sum of the weighted inputs — along all its outgoing connections.
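In code, that node computation takes only a few lines. This is a generic sketch of the idea described above, not any particular framework’s implementation,

```python
# One artificial "node": weight each input, sum, and fire only above a threshold.
def node_output(inputs, weights, threshold):
    total = sum(x * w for x, w in zip(inputs, weights))
    return total if total > threshold else 0.0  # pass the weighted sum along, or nothing

# Example: a node with two incoming connections
print(node_output([0.5, 0.8], weights=[1.2, -0.4], threshold=0.1))  # roughly 0.28
```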

When a neural net is being trained, all of its weights and thresholds are initially set to random values. Training data is fed to the bottom layer — the input layer — and it passes through the succeeding layers, getting multiplied and added together in complex ways, until it finally arrives, radically transformed, at the output layer. During training, the weights and thresholds are continually adjusted until training data with the same labels consistently yield similar outputs.

Minds and machines

The neural nets described by McCullough and Pitts in 1944 had thresholds and weights, but they weren’t arranged into layers, and the researchers didn’t specify any training mechanism. What McCullough and Pitts showed was that a neural net could, in principle, compute any function that a digital computer could. The result was more neuroscience than computer science: The point was to suggest that the human brain could be thought of as a computing device.

Neural nets continue to be a valuable tool for neuroscientific research. For instance, particular network layouts or rules for adjusting weights and thresholds have reproduced observed features of human neuroanatomy and cognition, an indication that they capture something about how the brain processes information.

The first trainable neural network, the Perceptron, was demonstrated by the Cornell University psychologist Frank Rosenblatt in 1957. The Perceptron’s design was much like that of the modern neural net, except that it had only one layer with adjustable weights and thresholds, sandwiched between input and output layers.
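Rosenblatt’s learning rule is simple enough to sketch in a few lines. This is an illustrative reconstruction of the classic single-layer update, not his original implementation,

```python
# Classic perceptron rule: nudge the weights whenever the prediction is wrong.
def train_perceptron(samples, labels, lr=0.1, epochs=20):
    weights, bias = [0.0] * len(samples[0]), 0.0
    for _ in range(epochs):
        for x, target in zip(samples, labels):   # labels are 0 or 1
            pred = 1 if sum(xi * wi for xi, wi in zip(x, weights)) + bias > 0 else 0
            error = target - pred                # -1, 0 or +1
            weights = [wi + lr * error * xi for wi, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

# Learns a linearly separable function such as logical OR
weights, bias = train_perceptron([[0, 0], [0, 1], [1, 0], [1, 1]], [0, 1, 1, 1])
```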

Perceptrons were an active area of research in both psychology and the fledgling discipline of computer science until 1969, when Minsky and Papert published a book titled “Perceptrons,” which demonstrated that executing certain fairly common computations on Perceptrons would be impractically time consuming.

“Of course, all of these limitations kind of disappear if you take machinery that is a little more complicated — like, two layers,” Poggio says. But at the time, the book had a chilling effect on neural-net research.

“You have to put these things in historical context,” Poggio says. “They were arguing for programming — for languages like Lisp. Not many years before, people were still using analog computers. It was not clear at all at the time that programming was the way to go. I think they went a little bit overboard, but as usual, it’s not black and white. If you think of this as this competition between analog computing and digital computing, they fought for what at the time was the right thing.”

Periodicity

By the 1980s, however, researchers had developed algorithms for modifying neural nets’ weights and thresholds that were efficient enough for networks with more than one layer, removing many of the limitations identified by Minsky and Papert. The field enjoyed a renaissance.

But intellectually, there’s something unsatisfying about neural nets. Enough training may revise a network’s settings to the point that it can usefully classify data, but what do those settings mean? What image features is an object recognizer looking at, and how does it piece them together into the distinctive visual signatures of cars, houses, and coffee cups? Looking at the weights of individual connections won’t answer that question.

In recent years, computer scientists have begun to come up with ingenious methods for deducing the analytic strategies adopted by neural nets. But in the 1980s, the networks’ strategies were indecipherable. So around the turn of the century, neural networks were supplanted by support vector machines, an alternative approach to machine learning that’s based on some very clean and elegant mathematics.

The recent resurgence in neural networks — the deep-learning revolution — comes courtesy of the computer-game industry. The complex imagery and rapid pace of today’s video games require hardware that can keep up, and the result has been the graphics processing unit (GPU), which packs thousands of relatively simple processing cores on a single chip. It didn’t take long for researchers to realize that the architecture of a GPU is remarkably like that of a neural net.

Modern GPUs enabled the one-layer networks of the 1960s and the two- to three-layer networks of the 1980s to blossom into the 10-, 15-, even 50-layer networks of today. That’s what the “deep” in “deep learning” refers to — the depth of the network’s layers. And currently, deep learning is responsible for the best-performing systems in almost every area of artificial-intelligence research.

Under the hood

The networks’ opacity is still unsettling to theorists, but there’s headway on that front, too. In addition to directing the Center for Brains, Minds, and Machines (CBMM), Poggio leads the center’s research program in Theoretical Frameworks for Intelligence. Recently, Poggio and his CBMM colleagues have released a three-part theoretical study of neural networks.

The first part, which was published last month in the International Journal of Automation and Computing, addresses the range of computations that deep-learning networks can execute and when deep networks offer advantages over shallower ones. Parts two and three, which have been released as CBMM technical reports, address the problems of global optimization, or guaranteeing that a network has found the settings that best accord with its training data, and overfitting, or cases in which the network becomes so attuned to the specifics of its training data that it fails to generalize to other instances of the same categories.

There are still plenty of theoretical questions to be answered, but CBMM researchers’ work could help ensure that neural networks finally break the generational cycle that has brought them in and out of favor for seven decades.

This image from MIT illustrates a ‘modern’ neural network,

Most applications of deep learning use “convolutional” neural networks, in which the nodes of each layer are clustered, the clusters overlap, and each cluster feeds data to multiple nodes (orange and green) of the next layer. Image: Jose-Luis Olivares/MIT

h/t phys.org April 17, 2017

One final note, I wish the folks at MIT had an ‘explainer’ archive. I’m not sure how to find any more ‘explainers’ on MIT’s website.

Using open-source software for a 3D look at nanomaterials

A 3-D view of a hyperbranched nanoparticle with complex structure, made possible by Tomviz 1.0, a new open-source software platform developed by researchers at the University of Michigan, Cornell University and Kitware Inc. Image credit: Robert Hovden, Michigan Engineering

An April 3, 2017 news item on ScienceDaily describes this new and freely available software,

Now it’s possible for anyone to see and share 3-D nanoscale imagery with a new open-source software platform developed by researchers at the University of Michigan, Cornell University and open-source software company Kitware Inc.

Tomviz 1.0 is the first open-source tool that enables researchers to easily create 3-D images from electron tomography data, then share and manipulate those images in a single platform.

A March 31, 2017 University of Michigan news release, which originated the news item, expands on the theme,

The world of nanoscale materials—things 100 nanometers and smaller—is an important place for scientists and engineers who are designing the stuff of the future: semiconductors, metal alloys and other advanced materials.

Seeing in 3-D how nanoscale flecks of platinum arrange themselves in a car’s catalytic converter, for example, or how spiky dendrites can cause short circuits inside lithium-ion batteries, could spur advances like safer, longer-lasting batteries; lighter, more fuel efficient cars; and more powerful computers.

“3-D nanoscale imagery is useful in a variety of fields, including the auto industry, semiconductors and even geology,” said Robert Hovden, U-M assistant professor of materials science engineering and one of the creators of the program. “Now you don’t have to be a tomography expert to work with these images in a meaningful way.”

Tomviz solves a key challenge: the difficulty of interpreting data from the electron microscopes that examine nanoscale objects in 3-D. The machines shoot electron beams through nanoparticles from different angles. The beams form projections as they travel through the object, a bit like nanoscale shadow puppets.

Once the machine does its work, it’s up to researchers to piece hundreds of shadows into a single three-dimensional image. It’s as difficult as it sounds—an art as well as a science. Like staining a traditional microscope slide, researchers often add shading or color to 3-D images to highlight certain attributes.
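For the curious, the basic principle can be sketched as a naive, unfiltered back-projection in two dimensions. This is purely illustrative; Tomviz and real electron tomography pipelines use far more sophisticated reconstruction algorithms,

```python
# Toy tomography: simulate projections ("shadows") of an image at many angles,
# then smear each projection back across the grid and average.
import numpy as np
from scipy.ndimage import rotate

def forward_projections(image, angles_deg):
    """Each projection is the rotated image summed along one axis."""
    return [rotate(image, a, reshape=False, order=1).sum(axis=0) for a in angles_deg]

def back_project(projections, angles_deg, size):
    """Spread every 1-D projection back over the 2-D grid at its angle."""
    recon = np.zeros((size, size))
    for row, a in zip(projections, angles_deg):
        recon += rotate(np.tile(row, (size, 1)), -a, reshape=False, order=1)
    return recon / len(angles_deg)

# Example: reconstruct a small square from 90 simulated projections
truth = np.zeros((64, 64)); truth[24:40, 24:40] = 1.0
angles = np.linspace(0.0, 180.0, 90, endpoint=False)
estimate = back_project(forward_projections(truth, angles), angles, size=64)
```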

A 3-D view of a particle used in a hydrogen fuel cell powered vehicle. The gray structure is carbon; the red and blue particles are nanoscale flecks of platinum. The image is made possible by Tomviz 1.0. Image credit: Elliot Padget, Cornell University

Traditionally, they’ve had to rely on a hodgepodge of proprietary software to do the heavy lifting. The work is expensive and time-consuming; so much so that even big companies like automakers struggle with it. And once a 3-D image is created, it’s often impossible for other researchers to reproduce it or to share it with others.

Tomviz dramatically simplifies the process and reduces the amount of time and computing power needed to make it happen, its designers say. It also enables researchers to readily collaborate by sharing all the steps that went into creating a given image and enabling them to make tweaks of their own.

“These images are far different from the 3-D graphics you’d see at a movie theater, which are essentially cleverly lit surfaces,” Hovden said. “Tomviz explores both the surface and the interior of a nanoscale object, with detailed information about its density and structure. In some cases, we can see individual atoms.”

Key to making Tomviz happen was getting tomography experts and software developers together to collaborate, Hovden said. Their first challenge was gaining access to a large volume of high-quality tomography. The team rallied experts at Cornell, Berkeley Lab and UCLA to contribute their data, and also created their own using U-M’s microscopy center. To turn raw data into code, Hovden’s team worked with open-source software maker Kitware.

With the release of Tomviz 1.0, Hovden is looking toward the next stages of the project, where he hopes to integrate the software directly with microscopes. He believes that U-M’s atom probe tomography facilities and expertise could help him design a version that could ultimately uncover the chemistry of all atoms in 3-D.

“We are unlocking access to see new 3D nanomaterials that will power the next generation of technology,” Hovden said. “I’m very interested in pushing the boundaries of understanding materials in 3-D.”

There is a video about Tomviz,

You can download Tomviz from here and you can find Kitware here. Happy 3D nanomaterial viewing!

Tree-on-a-chip

It’s usually organ-on-a-chip or lab-on-a-chip or human-on-a-chip; this is my first tree-on-a-chip.

Engineers have designed a microfluidic device they call a “tree-on-a-chip,” which mimics the pumping mechanism of trees and other plants. Courtesy: MIT

From a March 20, 2017 news item on phys.org,

Trees and other plants, from towering redwoods to diminutive daisies, are nature’s hydraulic pumps. They are constantly pulling water up from their roots to the topmost leaves, and pumping sugars produced by their leaves back down to the roots. This constant stream of nutrients is shuttled through a system of tissues called xylem and phloem, which are packed together in woody, parallel conduits.

Now engineers at MIT [Massachusetts Institute of Technology] and their collaborators have designed a microfluidic device they call a “tree-on-a-chip,” which mimics the pumping mechanism of trees and plants. Like its natural counterparts, the chip operates passively, requiring no moving parts or external pumps. It is able to pump water and sugars through the chip at a steady flow rate for several days. The results are published this week in Nature Plants.

A March 20, 2017 MIT news release by Jennifer Chu, which originated the news item, describes the work in more detail,

Anette “Peko” Hosoi, professor and associate department head for operations in MIT’s Department of Mechanical Engineering, says the chip’s passive pumping may be leveraged as a simple hydraulic actuator for small robots. Engineers have found it difficult and expensive to make tiny, movable parts and pumps to power complex movements in small robots. The team’s new pumping mechanism may enable robots whose motions are propelled by inexpensive, sugar-powered pumps.

“The goal of this work is cheap complexity, like one sees in nature,” Hosoi says. “It’s easy to add another leaf or xylem channel in a tree. In small robotics, everything is hard, from manufacturing, to integration, to actuation. If we could make the building blocks that enable cheap complexity, that would be super exciting. I think these [microfluidic pumps] are a step in that direction.”

Hosoi’s co-authors on the paper are lead author Jean Comtet, a former graduate student in MIT’s Department of Mechanical Engineering; Kaare Jensen of the Technical University of Denmark; and Robert Turgeon and Abraham Stroock, both of Cornell University.

A hydraulic lift

The group’s tree-inspired work grew out of a project on hydraulic robots powered by pumping fluids. Hosoi was interested in designing hydraulic robots at the small scale, that could perform actions similar to much bigger robots like Boston Dynamic’s Big Dog, a four-legged, Saint Bernard-sized robot that runs and jumps over rough terrain, powered by hydraulic actuators.

“For small systems, it’s often expensive to manufacture tiny moving pieces,” Hosoi says. “So we thought, ‘What if we could make a small-scale hydraulic system that could generate large pressures, with no moving parts?’ And then we asked, ‘Does anything do this in nature?’ It turns out that trees do.”

The general understanding among biologists has been that water, propelled by surface tension, travels up a tree’s channels of xylem, then diffuses through a semipermeable membrane and down into channels of phloem that contain sugar and other nutrients.

The more sugar there is in the phloem, the more water flows from xylem to phloem to balance out the sugar-to-water gradient, in a passive process known as osmosis. The resulting water flow flushes nutrients down to the roots. Trees and plants are thought to maintain this pumping process as more water is drawn up from their roots.

“This simple model of xylem and phloem has been well-known for decades,” Hosoi says. “From a qualitative point of view, this makes sense. But when you actually run the numbers, you realize this simple model does not allow for steady flow.”

In fact, engineers have previously attempted to design tree-inspired microfluidic pumps, fabricating parts that mimic xylem and phloem. But they found that these designs quickly stopped pumping within minutes.

It was Hosoi’s student Comtet who identified a third essential part to a tree’s pumping system: its leaves, which produce sugars through photosynthesis. Comtet’s model includes this additional source of sugars that diffuse from the leaves into a plant’s phloem, increasing the sugar-to-water gradient, which in turn maintains a constant osmotic pressure, circulating water and nutrients continuously throughout a tree.
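A toy model makes the logic of that third ingredient easy to see. To be clear, the rate constants and units below are invented for illustration; this is a cartoon of osmotically driven flow with and without a sugar source, not the researchers’ model,

```python
# Cartoon of the pumping loop: osmotic water inflow scales with the phloem sugar
# concentration, and the outflow flushes sugar away. Without a leaf-like sugar
# supply the gradient washes out and flow decays; with one, flow settles at a
# steady, non-zero rate (all units arbitrary).
def simulate_flow(sugar_supply, steps=20000, dt=0.01, k=1.0, volume=1.0):
    sugar = 1.0                                 # initial sugar in the phloem channel
    flow = 0.0
    for _ in range(steps):
        concentration = sugar / volume
        flow = k * concentration                # osmosis-driven water inflow
        sugar += (sugar_supply - flow * concentration) * dt  # supply minus flush-out
    return flow

print(simulate_flow(sugar_supply=0.0))  # ~0: pumping stops, like earlier chip designs
print(simulate_flow(sugar_supply=0.5))  # settles near a constant, non-zero flow
```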

Running on sugar

With Comtet’s hypothesis in mind, Hosoi and her team designed their tree-on-a-chip, a microfluidic pump that mimics a tree’s xylem, phloem, and most importantly, its sugar-producing leaves.

To make the chip, the researchers sandwiched together two plastic slides, through which they drilled small channels to represent xylem and phloem. They filled the xylem channel with water, and the phloem channel with water and sugar, then separated the two slides with a semipermeable material to mimic the membrane between xylem and phloem. They placed another membrane over the slide containing the phloem channel, and set a sugar cube on top to represent the additional source of sugar diffusing from a tree’s leaves into the phloem. They hooked the chip up to a tube, which fed water from a tank into the chip.

With this simple setup, the chip was able to passively pump water from the tank through the chip and out into a beaker, at a constant flow rate for several days, as opposed to previous designs that only pumped for several minutes.

“As soon as we put this sugar source in, we had it running for days at a steady state,” Hosoi says. “That’s exactly what we need. We want a device we can actually put in a robot.”

Hosoi envisions that the tree-on-a-chip pump may be built into a small robot to produce hydraulically powered motions, without requiring active pumps or parts.

“If you design your robot in a smart way, you could absolutely stick a sugar cube on it and let it go,” Hosoi says.

This research was supported, in part, by the Defense Advanced Research Projects Agency [DARPA].

This research’s funding connection to DARPA reminded me that MIT has an Institute of Soldier Nanotechnologies.

Getting back to the tree-on-a-chip, here’s a link to and a citation for the paper,

Passive phloem loading and long-distance transport in a synthetic tree-on-a-chip by Jean Comtet, Kaare H. Jensen, Robert Turgeon, Abraham D. Stroock & A. E. Hosoi. Nature Plants 3, Article number: 17032 (2017)  doi:10.1038/nplants.2017.32 Published online: 20 March 2017

This paper is behind a paywall.

Textiles that clean pollution from air and water

I once read that you could tell what colour would be in style by looking at the river in Milan (Italy). It may or may not still be true in Milan but it seems that the practice of using the river for dumping the fashion industry’s wastewater is still current in at least some parts of the world according to a Nov. 10, 2016 news item on Nanowerk featuring Juan Hinestroza’s work on textiles that clean pollution,

A stark and troubling reality helped spur Juan Hinestroza to what he hopes is an important discovery and a step toward cleaner manufacturing.

Hinestroza, associate professor of fiber science and director of undergraduate studies in the College of Human Ecology [Cornell University], has been to several manufacturing facilities around the globe, and he says that there are some areas of the planet in which he could identify what color is in fashion in New York or Paris by simply looking at the color of a nearby river.

“I saw it with my own eyes; it’s very sad,” he said.

Some of these overseas facilities are dumping waste products from textile dying and other processes directly into the air and waterways, making no attempt to mitigate their product’s effect on the environment.

“There are companies that make a great effort to make things in a clean and responsible manner,” he said, “but there are others that don’t.”

Hinestroza is hopeful that a technique developed at Cornell in conjunction with former Cornell chemistry professor Will Dichtel will help industry clean up its act. The group has shown the ability to infuse cotton with a beta-cyclodextrin (BCD) polymer, which acts as a filtration device that works in both water and air.

A Nov. 10, 2016 Cornell University news release by Tom Fleischman provides more detail about the research,

Cotton fabric was functionalized by making it a participant in the polymerization process. The addition of the fiber to the reaction resulted in a unique polymer grafted to the cotton surface.

“One of the limitations of some super-absorbents is that you need to be able to put them into a substrate that can be easily manufactured,” Hinestroza said. “Fibers are perfect for that – fibers are everywhere.”

Scanning electron microscopy showed that the cotton fibers appeared unchanged after the polymerization reaction. And when tested for uptake of pollutants in water (bisphenol A) and air (styrene), the polymerized fibers showed orders of magnitude greater uptakes than that of untreated cotton fabric or commercial absorbents.

Hinestroza pointed to several positives that should make this functionalized fabric technology attractive to industry.

“We’re compatible with existing textile machinery – you wouldn’t have to do a lot of retooling,” he said. “It works on both air and water, and we proved that we can remove the compounds and reuse the fiber over and over again.”

Hinestroza said the adsorption potential of this patent-pending technique could extend to other materials, and be used for respirator masks and filtration media, explosive detection and even food packaging that would detect when the product has gone bad.

And, of course, he hopes it can play a role in a cleaner, more environmentally responsible industrial practices.

“There’s a lot of pollution generation in the manufacture of textiles,” he said. “It’s just fair that we should maybe use the same textiles to clean the mess that we make.”

Here’s a link to and a citation for the paper,

Cotton Fabric Functionalized with a β-Cyclodextrin Polymer Captures Organic Pollutants from Contaminated Air and Water by Diego M. Alzate-Sánchez, Brian J. Smith, Alaaeddin Alsbaiee, Juan P. Hinestroza, and William R. Dichtel. Chem. Mater., Article ASAP DOI: 10.1021/acs.chemmater.6b03624 Publication Date (Web): October 24, 2016

Copyright © 2016 American Chemical Society

This paper is open access.

One comment, I’m not sure how this solution will benefit the rivers unless they’re thinking that textile manufacturers will filter their waste water through this new fabric.

There is another researcher working on creating textiles that remove air pollution, Tony Ryan at the University of Sheffield (UK). My latest piece about his (and Helen Storey’s) work is a July 28, 2014 posting featuring a detergent that deposits onto the fabric nanoparticles that will clear air pollution. At the time, China was showing serious interest in the product.

The dangers of metaphors when applied to science

Metaphors can be powerful in both good ways and bad. I once read that there was a ‘lighthouse’ metaphor used to explain a scientific concept to high school students which later caused problems for them when they were studying the biological sciences as university students.  It seems there’s research now to back up the assertion about metaphors and their powers. From an Oct. 7, 2016 news item on phys.org,

Whether ideas are “like a light bulb” or come forth as “nurtured seeds,” how we describe discovery shapes people’s perceptions of both inventions and inventors. Notably, Kristen Elmore (Bronfenbrenner Center for Translational Research at Cornell University) and Myra Luna-Lucero (Teachers College, Columbia University) have shown that discovery metaphors influence our perceptions of the quality of an idea and of the ability of the idea’s creator. The research appears in the journal Social Psychological and Personality Science.

While the metaphor that ideas appear “like light bulbs” is popular and appealing, new research shows that discovery metaphors influence our understanding of the scientific process and perceptions of the ability of inventors based on their gender. [downloaded from http://www.spsp.org/news-center/press-release/metaphors-bias-perception]

An Oct. 7, 2016 Society for Personality and Social Psychology news release (also on EurekAlert), which originated the news item, provides more insight into the work,

While those involved in research know there are many trials and errors and years of work before something is understood, discovered or invented, our use of words for inspiration may have an unintended and underappreciated effect of portraying good ideas as a sudden and exceptional occurrence.

In a series of experiments, Elmore and Luna-Lucero tested how people responded to ideas that were described as being “like a light bulb,” “nurtured like a seed,” or a neutral description. 

According to the authors, the “light bulb metaphor implies that ‘brilliant’ ideas result from sudden and spontaneous inspiration, bestowed upon a chosen few (geniuses) while the seed metaphor implies that ideas are nurtured over time, ‘cultivated’ by anyone willing to invest effort.”

The first study looked at how people reacted to a description of Alan Turing’s invention of a precursor to the modern computer. It turns out light bulbs are more remarkable than seeds.

“We found that an idea was seen as more exceptional when described as appearing like a light bulb rather than nurtured like a seed,” said Elmore.

But this pattern changed when they used these metaphors to describe a female inventor’s ideas. When using the “like a light bulb” and “nurtured seed” metaphors, the researchers found “women were judged as better idea creators than men when ideas were described as nurtured over time like seeds.”

The results suggest gender stereotypes play a role in how people perceived the inventors.

In the third study, the researchers presented participants with descriptions of the work of either a female (Hedy Lamarr) or a male (George Antheil) inventor, who together created the idea for spread-spectrum technology (a precursor to modern wireless communications). Indeed, the seed metaphor “increased perceptions that a female inventor was a genius, while the light bulb metaphor was more consistent with stereotypical views of male genius,” stated Elmore.

Elmore plans to expand upon their research on metaphors by examining the interactions of teachers and students in real world classroom settings.

“The ways that teachers and students talk about ideas may impact students’ beliefs about how good ideas are created and who is likely to have them,” said Elmore. “Having good ideas is relevant across subjects—whether students are creating a hypothesis in science or generating a thesis for their English paper—and language that stresses the role of effort rather than inspiration in creating ideas may have real benefits for students’ motivation.”

Here’s a link to and a citation for the paper,

Light Bulbs or Seeds? How Metaphors for Ideas Influence Judgments About Genius by Kristen C. Elmore and Myra Luna-Lucero. Social Psychological and Personality Science doi: 10.1177/1948550616667611 Published online before print October 7, 2016

This paper is behind a paywall.

While Elmore and Luna-Lucero are focused on a nuanced analysis of specific metaphors, Richard Holmes’s book, ‘The Age of Wonder: How the Romantic Generation Discovered the Beauty and Terror of Science’, notes that the ‘Eureka’ (light bulb) moment for scientific discovery and the notion of a ‘single great man’ (a singular genius) as the discoverer have their roots in romantic (Shelley, Keats, etc.) poetry.

arXiv, which helped kick off the open access movement, contemplates its future

arXiv is hosted by Cornell University and lodges over a million scientific papers that are open to access by anyone. Here’s more from a July 22, 2016 news item on phys.org,

As the arXiv repository of scientific papers celebrates its 25th year as one of the scientific community’s most important means of communication, the site’s leadership is looking ahead to ensure it remains indispensable, robust and financially sustainable.

A July 21, 2016 Cornell University news release by Bill Steele, which originated the news item, provides more information about future plans and a brief history of the repository (Note: Links have been removed),

Changes and improvements are in store, many in response to suggestions received in a survey of nearly 37,000 users whose primary requests were for a more robust search engine and better facilities to share supplementary material, such as slides or code, that often accompanies scientific papers.

But even more important is to upgrade the underlying architecture of the system, much of it based on “old code,” said Oya Rieger, associate university librarian for digital scholarship and preservation services, who serves as arXiv’s program director. “We have to create a work plan to ensure that arXiv will serve for another 25 years,” she said. That will require recruiting additional programmers and finding additional sources of funding, she added.

The improvements will not change the site’s essential format or its core mission of free and open dissemination of the latest scientific research, Rieger said.

arXiv was created in 1991 by Paul Ginsparg, professor of physics and information science, when he was working at Los Alamos National Laboratory. It was then common practice for researchers to circulate “pre-prints” of their papers so that colleagues could have the advantage of knowing about their research in advance of publication in scientific journals. Ginsparg launched a service (originally running from a computer under his desk) to make the papers instantly available online.

Ginsparg brought the arXiv with him from Los Alamos when he joined the Cornell faculty in 2001. Since then, it has been managed by Cornell University Library, with Ginsparg as a member of its scientific advisory board.

In 2015, arXiv celebrated its millionth submission and saw 139 million downloads in that year alone.

Nearly 95 percent of respondents to the survey said they were satisfied with arXiv, many saying that rapid access to research results had made a difference in their careers, and applauding it as an advance in open access.

“We were amazed and heartened by the outpouring of responses representing users from a variety of countries, age groups and career stages. Their insight will help us as we refine a compelling and coherent vision for arXiv’s future,” Rieger said. “We’re continuing to explore current and emerging user needs and priorities. We hope to secure funding to revamp the service’s infrastructure and ensure that it will continue to serve as an important scientific venue for facilitating rapid dissemination of papers, which is arXiv’s core goal.”

Though some users suggested new or additional features, a majority of respondents emphasized that the clean, unencumbered nature of the site makes its use easy and efficient. “I sincerely wish academic journals could try to emulate the cleanness, convenience and user-friendly nature of the arXiv, and I hope the future of academic publishing looks more like what we’ve been able to enjoy in the arXiv,” one user wrote.

arXiv is supported by a global collective of nearly 200 libraries in 24 countries, and an ongoing grant from the Simons Foundation. In 2012, the site adopted a new funding model, in which it is collaboratively governed and supported by the research communities and institutions that benefit from it most directly.

Having a bee in my bonnet about overproduced websites (MIT [Massachusetts Institute of Technology], I’m looking at you), I can’t help but applaud this user and, of course, arXiv, “I sincerely wish academic journals could try to emulate the cleanness, convenience and user-friendly nature of the arXiv, and I hope the future of academic publishing looks more like what we’ve been able to enjoy in the arXiv, …”

For anyone interested in arXiv plans, there’s the arXiv Review Strategy here on Cornell University’s Confluence website.

Cornell University researchers breach blood-brain barrier

There are other teams working on ways to breach the blood-brain barrier (my March 26, 2015 post highlights work from a team at the University of Montréal) but this team from Cornell is working with a drug that has already been approved by the US Food and Drug Administration (FDA) according to an April 8, 2016 news item on ScienceDaily,

Cornell researchers have discovered a way to penetrate the blood brain barrier (BBB) that may soon permit delivery of drugs directly into the brain to treat disorders such as Alzheimer’s disease and chemotherapy-resistant cancers.

The BBB is a layer of endothelial cells that selectively allow entry of molecules needed for brain function, such as amino acids, oxygen, glucose and water, while keeping others out.

Cornell researchers report that an FDA-approved drug called Lexiscan activates receptors — called adenosine receptors — that are expressed on these BBB cells.

An April 4, 2016 Cornell University news release by Krishna Ramanujan, which originated the news item, expands on the theme,

“We can open the BBB for a brief window of time, long enough to deliver therapies to the brain, but not too long so as to harm the brain. We hope in the future, this will be used to treat many types of neurological disorders,” said Margaret Bynoe, associate professor in the Department of Microbiology and Immunology in Cornell’s College of Veterinary Medicine. …

The researchers were able to deliver chemotherapy drugs into the brains of mice, as well as large molecules, like an antibody that binds to Alzheimer’s disease plaques, according to the paper.

To test whether this drug delivery system has application to the human BBB, the lab engineered a BBB model using human primary brain endothelial cells. They observed that Lexiscan opened the engineered BBB in a manner similar to its actions in mice.

Bynoe and Kim discovered that a protein called P-glycoprotein is highly expressed on brain endothelial cells and blocks the entry of most drugs delivered to the brain. Lexiscan acts on one of the adenosine receptors expressed on BBB endothelial cells specifically activating them. They showed that Lexiscan down-regulates P-glycoprotein expression and function on the BBB endothelial cells. It acts like a switch that can be turned on and off in a time dependent manner, which provides a measure of safety for the patient.

“We demonstrated that down-modulation of P-glycoprotein function coincides exquisitely with chemotherapeutic drug accumulation” in the brains of mice and across an engineered BBB using human endothelial cells, Bynoe said. “The amount of chemotherapeutic drugs that accumulated in the brain was significant.”

In addition to P-glycoprotein’s role in inhibiting foreign substances from penetrating the BBB, the protein is also expressed by many different types of cancers and makes these cancers resistant to chemotherapy.

“This finding has significant implications beyond modulation of the BBB,” Bynoe said. “It suggests that in the future, we may be able to modulate adenosine receptors to regulate P-glycoprotein in the treatment of cancer cells resistant to chemotherapy.”

Because Lexiscan is an FDA-approved drug, “the potential for a breakthrough in drug delivery systems for diseases such as Alzheimer’s disease, Parkinson’s disease, autism, brain tumors and chemotherapy-resistant cancers is not far off,” Bynoe said.

Another advantage is that these molecules (adenosine receptors and P-glycoprotein) are naturally expressed in mammals. “We don’t have to knock out a gene or insert one for a therapy to work,” Bynoe said.

The study was funded by the National Institutes of Health and the Kwanjung Educational Foundation.

Here’s a link to and a citation for the paper,

A2A adenosine receptor modulates drug efflux transporter P-glycoprotein at the blood-brain barrier by Do-Geun Kim and Margaret S. Bynoe. J Clin Invest. doi:10.1172/JCI76207 First published April 4, 2016

Copyright © 2016, The American Society for Clinical Investigation.

This paper appears to be open access.