
Carbon nanotubes as sensors in the body

Rachel Ehrenberg has written an Aug. 21, 2015 news item about the latest and greatest carbon nanotube-based biomedical sensors for the journal Nature,

The future of medical sensors may be going down the tubes. Chemists are developing tiny devices made from carbon nanotubes wrapped with polymers to detect biologically important compounds such as insulin, nitric oxide and the blood-clotting protein fibrinogen. The hope is that these sensors could simplify and automate diagnostic tests.

Preliminary experiments in mice, reported by scientists at a meeting of the American Chemical Society in Boston, Massachusetts, this week [Aug. 16 – 20, 2015], suggest that the devices are safe to introduce into the bloodstream or implant under the skin. Researchers also presented data showing that the nanotube–polymer complexes could measure levels of large molecules, a feat that has been difficult for existing technologies.

Ehrenberg focuses on one laboratory in particular (Note: Links have been removed),

“Anything the body makes, it is meant to degrade,” says chemical engineer Michael Strano, whose lab at the Massachusetts Institute of Technology (MIT) in Cambridge is behind much of the latest work. “Our vision is to make a sensing platform that can monitor a whole range of molecules, and do it in the long term.”

To design one sensor, MIT researchers coated nanotubes with a mix of polymers and nucleotides and screened for configurations that would bind to the protein fibrinogen. This large molecule is important for building blood clots; its concentration can indicate bleeding disorders, liver disease or impending cardiovascular trouble. The team recently hit on a material that worked — a first for such a large molecule, according to MIT nanotechnology specialist Gili Bisker. Bisker said at the chemistry meeting that the fibrinogen-detecting nanotubes could be used to measure levels of the protein in blood samples, or implanted in body tissue to detect changing fibrinogen levels that might indicate a clot.

The MIT team has also developed a sensor that can be inserted beneath the skin to monitor glucose or insulin levels in real time, Bisker reported. The team imagines putting a small patch that contains a wireless device on the skin just above the embedded sensor. The patch would shine light on the sensor and measure its fluorescence, then transmit that data to a mobile phone for real-time monitoring.

Another version of the sensor, developed at MIT by biomedical engineer Nicole Iverson and colleagues, detects nitric oxide. This signalling molecule typically indicates inflammation and is associated with many cancer cells. When embedded in a hydrogel matrix, the sensor kept working in mice for more than 400 days and caused no local inflammation, MIT chemical engineer Michael Lee reported. The nitric oxide sensors also performed well when injected into the bloodstreams of mice, successfully passing through small capillaries in the lungs, which are an area of concern for nanotube toxicity. …
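Reading out an implanted fluorescent sensor of the kind Bisker describes is, at bottom, a calibration exercise: the patch measures an optical intensity and maps it onto a concentration. Here’s a minimal sketch of that idea in Python; the one-site binding model, the constants, and the function names are my own illustrative assumptions, not anything taken from the MIT work.

```python
# Illustrative only: convert a measured fluorescence intensity into an estimated
# analyte concentration using a hypothetical one-site binding calibration curve.

def estimate_concentration(intensity, i_free, i_saturated, kd_um=10.0):
    """Invert I(c) = i_free + (i_saturated - i_free) * c / (kd + c)."""
    theta = (intensity - i_free) / (i_saturated - i_free)  # fractional response
    theta = min(max(theta, 0.0), 0.999)                     # keep it physical
    return kd_um * theta / (1.0 - theta)                    # concentration, in µM

if __name__ == "__main__":
    # Suppose the patch reports 0.62 on a 0-to-1 scale, with calibration endpoints
    # of 0.2 (no analyte) and 1.0 (saturated) measured beforehand.
    print(round(estimate_concentration(0.62, i_free=0.2, i_saturated=1.0), 1), "µM")
```

The real systems will be far messier (tissue scattering, photobleaching, drift), but that chain from light level to number is the part the wireless patch and phone would automate.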

There’s at least one corporate laboratory (Google X), working on biosensors although their focus is a little different. From a Jan. 9, 2015 article by Brian Womack and Anna Edney for BloombergBusiness,

Google Inc. sent employees with ties to its secretive X research group to meet with U.S. regulators who oversee medical devices, raising the possibility of a new product that may involve biosensors from the unit that developed computerized glasses.

The meeting included at least four Google workers, some of whom have connections with Google X — and have done research on sensors, including contact lenses that help wearers monitor their biological data. Google staff met with those at the Food and Drug Administration who regulate eye devices and diagnostics for heart conditions, according to the agency’s public calendar. [emphasis mine]

This approach from Google is considered noninvasive,

“There is actually one interface on the surface of the body that can literally provide us with a window of what happens inside, and that’s the surface of the eye,” Parviz [Babak Parviz, … was involved in the Google Glass project and has talked about putting displays on contact lenses, including lenses that monitor wearer’s health]  said in a video posted on YouTube. “It’s a very interesting chemical interface.”

Of course, the assumption is that all this monitoring is going to result in healthier people, but I can’t help thinking about an old saying: ‘a little knowledge can be a dangerous thing’. For example, we once lived in a world where bacteria roamed free; then we learned how to make them visible, determined they were disease-causing, and began campaigns to kill them off. Now, it turns out that at least some bacteria are good for us and, moreover, we’ve created other, more dangerous bacteria that are drug-resistant. Based on the bacteria example, is it possible that with these biosensors we will observe new phenomena and make similar mistakes?

Scaling graphene production up to industrial strength

If graphene is going to be a ubiquitous material in the future, production methods need to change. An Aug. 7, 2015 news item on Nanowerk announces a new technique to achieve that goal,

Producing graphene in bulk is critical when it comes to the industrial exploitation of this exceptional two-dimensional material. To that end, [European Commission] Graphene Flagship researchers have developed a novel variant on the chemical vapour deposition process which yields high quality material in a scalable manner. This advance should significantly narrow the performance gap between synthetic and natural graphene.

An Aug. 7, 2015 European Commission Graphene Flagship press release by Francis Sedgemore, which originated the news item, describes the problem,

Media-friendly Nobel laureates peeling layers of graphene from bulk graphite with sticky tape may capture the public imagination, but as a manufacturing process the technique is somewhat lacking. Mechanical exfoliation may give us pristine graphene, but industry requires scalable and cost-effective production processes with much higher yields.

On to the new method (from the press release),

Flagship-affiliated physicists from RWTH Aachen University and Forschungszentrum Jülich have together with colleagues in Japan devised a method for peeling graphene flakes from a CVD substrate with the help of intermolecular forces. …

Key to the process is the strong van der Waals interaction that exists between graphene and hexagonal boron nitride, another 2d material within which it is encapsulated. The van der Waals force is the attractive sum of short-range electric dipole interactions between uncharged molecules.

Thanks to strong van der Waals interactions between graphene and boron nitride, CVD graphene can be separated from the copper and transferred to an arbitrary substrate. The process allows for re-use of the catalyst copper foil in further growth cycles, and minimises contamination of the graphene due to processing.

Raman spectroscopy and transport measurements on the graphene/boron nitride heterostructures reveal high electron mobilities comparable with those observed in similar assemblies based on exfoliated graphene. Furthermore – and this comes as something of a surprise to the researchers – no noticeable performance changes are detected between devices developed in the first and subsequent growth cycles. This confirms the copper as a recyclable resource in the graphene fabrication process.
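(An aside from me rather than the Flagship: the release doesn’t say how those mobilities are extracted, but in transport measurements on gated graphene devices the standard relation ties the sheet conductivity to the carrier density and the mobility,

\[ \sigma = n e \mu \quad\Rightarrow\quad \mu = \frac{\sigma}{n e}, \]

with the carrier density n set by the gate voltage. ‘High mobility’ here simply means the CVD-grown material conducts about as well, per carrier, as the best exfoliated flakes.)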

“Chemical vapour deposition is a highly scalable and cost-efficient technology,” says Christoph Stampfer, head of the 2nd Institute of Physics A in Aachen, and co-author of the technical article. “Until now, graphene synthesised this way has been significantly lower in quality than that obtained with the scotch-tape method, especially when it comes to the material’s electronic properties. But no longer. We demonstrate a novel fabrication process based on CVD that yields ultra-high quality synthetic graphene samples. The process is in principle suitable for industrial-scale production, and narrows the gap between graphene research and its technological applications.”

With their dry-transfer process, Banszerus and his colleagues have shown that the electronic properties of CVD-grown graphene can in principle match those of ultrahigh-mobility exfoliated graphene. The key is to transfer CVD graphene from its growth substrate in such a way that chemical contamination is avoided. The high mobility of pristine graphene is thus preserved, and the approach allows for the substrate material to be recycled without degradation.

Here’s a link to and citation for the paper,

Ultrahigh-mobility graphene devices from chemical vapor deposition on reusable copper by Luca Banszerus, Michael Schmitz, Stephan Engels, Jan Dauber, Martin Oellers, Federica Haupt, Kenji Watanabe, Takashi Taniguchi, Bernd Beschoten, and Christoph Stampfer. Science Advances  31 Jul 2015: Vol. 1, no. 6, e1500222 DOI: 10.1126/sciadv.1500222

This article appears to be open access.

For those interested in finding out more about chemical vapour deposition (CVD), David Chandler has written a June 19, 2015 article for the Massachusetts Institute of Technology (MIT) titled “Explained: chemical vapor deposition” (Technique enables production of pure, uniform coatings of metals or polymers, even on contoured surfaces).

Nanoscale imaging of a mouse brain

Researchers have developed a new brain imaging tool they would like to use as a founding element for a national brain observatory. From a July 30, 2015 news item on Azonano,

A new imaging tool developed by Boston scientists could do for the brain what the telescope did for space exploration.

In the first demonstration of how the technology works, published July 30 in the journal Cell, the researchers look inside the brain of an adult mouse at a scale previously unachievable, generating images at a nanoscale resolution. The inventors’ long-term goal is to make the resource available to the scientific community in the form of a national brain observatory.

A July 30, 2015 Cell Press news release on EurekAlert, which originated the news item, expands on the theme,

“I’m a strong believer in bottom-up science, which is a way of saying that I would prefer to generate a hypothesis from the data and test it,” says senior study author Jeff Lichtman, of Harvard University. “For people who are imagers, being able to see all of these details is wonderful and we’re getting an opportunity to peer into something that has remained somewhat intractable for so long. It’s about time we did this, and it is what people should be doing about things we don’t understand.”

The researchers have begun the process of mining their imaging data by looking first at an area of the brain that receives sensory information from mouse whiskers, which help the animals orient themselves and are even more sensitive than human fingertips. The scientists used a program called VAST, developed by co-author Daniel Berger of Harvard and the Massachusetts Institute of Technology, to assign different colors and piece apart each individual “object” (e.g., neuron, glial cell, blood vessel cell, etc.).

“The complexity of the brain is much more than what we had ever imagined,” says study first author Narayanan “Bobby” Kasthuri, of the Boston University School of Medicine. “We had this clean idea of how there’s a really nice order to how neurons connect with each other, but if you actually look at the material it’s not like that. The connections are so messy that it’s hard to imagine a plan to it, but we checked and there’s clearly a pattern that cannot be explained by randomness.”

The researchers see great potential in the tool’s ability to answer questions about what a neurological disorder actually looks like in the brain, as well as what makes the human brain different from other animals and different between individuals. Who we become is very much a product of the connections our neurons make in response to various life experiences. To be able to compare the physical neuron-to-neuron connections in an infant, a mathematical genius, and someone with schizophrenia would be a leap in our understanding of how our brains shape who we are (or vice versa).

The cost and data storage demands for this type of research are still high, but the researchers expect expenses to drop over time (as has been the case with genome sequencing). To facilitate data sharing, the scientists are now partnering with Argonne National Laboratory with the hopes of creating a national brain laboratory that neuroscientists around the world can access within the next few years.

“It’s bittersweet that there are many scientists who think this is a total waste of time as well as a big investment in money and effort that could be better spent answering questions that are more proximal,” Lichtman says. “As long as data is showing you things that are unexpected, then you’re definitely doing the right thing. And we are certainly far from being out of the surprise element. There’s never a time when we look at this data that we don’t see something that we’ve never seen before.”

Here’s a link to and a citation for the paper,

Saturated Reconstruction of a Volume of Neocortex by Narayanan Kasthuri, Kenneth Jeffrey Hayworth, Daniel Raimund Berger, Richard Lee Schalek, José Angel Conchello, Seymour Knowles-Barley, Dongil Lee, Amelio Vázquez-Reina, Verena Kaynig, Thouis Raymond Jones, Mike Roberts, Josh Lyskowski Morgan, Juan Carlos Tapia, H. Sebastian Seung, William Gray Roncal, Joshua Tzvi Vogelstein, Randal Burns, Daniel Lewis Sussman, Carey Eldin Priebe, Hanspeter Pfister, Jeff William Lichtman. Cell Volume 162, Issue 3, p648–661, 30 July 2015 DOI: http://dx.doi.org/10.1016/j.cell.2015.06.054

This appears to be an open access paper.

IBM and its working 7nm test chip

I wrote about IBM and its plans for a 7nm computer chip last year in a July 11, 2014 posting, which also mentioned HP Labs’ and other companies’ plans for shrinking their computer chips. Almost one year later, IBM has announced, in a July 9, 2015 IBM news release on PRnewswire.com, the accomplishment of a working 7nm test chip,

An alliance led by IBM Research (NYSE: IBM) today announced that it has produced the semiconductor industry’s first 7nm (nanometer) node test chips with functioning transistors.  The breakthrough, accomplished in partnership with GLOBALFOUNDRIES and Samsung at SUNY Polytechnic Institute’s Colleges of Nanoscale Science and Engineering (SUNY Poly CNSE), could result in the ability to place more than 20 billion tiny switches — transistors — on the fingernail-sized chips that power everything from smartphones to spacecraft.
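(A quick back-of-the-envelope check of my own, not IBM’s: taking ‘fingernail-sized’ to mean roughly 1 cm², 20 billion transistors works out to about

\[ \frac{20\times 10^{9}\ \text{transistors}}{10^{8}\ \mu\mathrm{m}^{2}} \approx 200\ \text{transistors per } \mu\mathrm{m}^{2}, \]

or an average footprint of roughly 5,000 nm², i.e. about 70 nm on a side per transistor. The ‘7nm’ label refers to the smallest features on the chip, not the size of a whole transistor.)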

To achieve the higher performance, lower power and scaling benefits promised by 7nm technology, researchers had to bypass conventional semiconductor manufacturing approaches. Among the novel processes and techniques pioneered by the IBM Research alliance were a number of industry-first innovations, most notably Silicon Germanium (SiGe) channel transistors and Extreme Ultraviolet (EUV) lithography integration at multiple levels.

Industry experts consider 7nm technology crucial to meeting the anticipated demands of future cloud computing and Big Data systems, cognitive computing, mobile products and other emerging technologies. Part of IBM’s $3 billion, five-year investment in chip R&D (announced in 2014), this accomplishment was made possible through a unique public-private partnership with New York State and joint development alliance with GLOBALFOUNDRIES, Samsung and equipment suppliers. The team is based at SUNY Poly’s NanoTech Complex in Albany [New York state].

“For business and society to get the most out of tomorrow’s computers and devices, scaling to 7nm and beyond is essential,” said Arvind Krishna, senior vice president and director of IBM Research. “That’s why IBM has remained committed to an aggressive basic research agenda that continually pushes the limits of semiconductor technology. Working with our partners, this milestone builds on decades of research that has set the pace for the microelectronics industry, and positions us to advance our leadership for years to come.”

Microprocessors utilizing 22nm and 14nm technology power today’s servers, cloud data centers and mobile devices, and 10nm technology is well on the way to becoming a mature technology. The IBM Research-led alliance achieved close to 50 percent area scaling improvements over today’s most advanced technology, introduced SiGe channel material for transistor performance enhancement at 7nm node geometries, process innovations to stack them below 30nm pitch and full integration of EUV lithography at multiple levels. These techniques and scaling could result in at least a 50 percent power/performance improvement for next generation mainframe and POWER systems that will power the Big Data, cloud and mobile era.

“Governor Andrew Cuomo’s trailblazing public-private partnership model is catalyzing historic innovation and advancement. Today’s [July 8, 2015] announcement is just one example of our collaboration with IBM, which furthers New York State’s global leadership in developing next generation technologies,” said Dr. Michael Liehr, SUNY Poly Executive Vice President of Innovation and Technology and Vice President of Research.  “Enabling the first 7nm node transistors is a significant milestone for the entire semiconductor industry as we continue to push beyond the limitations of our current capabilities.”

“Today’s announcement marks the latest achievement in our long history of collaboration to accelerate development of next-generation technology,” said Gary Patton, CTO and Head of Worldwide R&D at GLOBALFOUNDRIES. “Through this joint collaborative program based at the Albany NanoTech Complex, we are able to maintain our focus on technology leadership for our clients and partners by helping to address the development challenges central to producing a smaller, faster, more cost efficient generation of semiconductors.”

The 7nm node milestone continues IBM’s legacy of historic contributions to silicon and semiconductor innovation. They include the invention or first implementation of the single cell DRAM, the Dennard Scaling Laws, chemically amplified photoresists, copper interconnect wiring, Silicon on Insulator, strained engineering, multi core microprocessors, immersion lithography, high speed SiGe, High-k gate dielectrics, embedded DRAM, 3D chip stacking and Air gap insulators.

In 2014, they were talking about carbon nanotubes with regard to the 7nm chip, so this shift to silicon germanium is interesting.

Sebastian Anthony in a July 9, 2015 article for Ars Technica offers some intriguing insight into the accomplishment and the technology (Note: A link has been removed),

… While it should be stressed that commercial 7nm chips remain at least two years away, this test chip from IBM and its partners is extremely significant for three reasons: it’s a working sub-10nm chip (this is pretty significant in itself); it’s the first commercially viable sub-10nm FinFET logic chip that uses silicon-germanium as the channel material; and it appears to be the first commercially viable design produced with extreme ultraviolet (EUV) lithography.

Technologically, SiGe and EUV are both very significant. SiGe has higher electron mobility than pure silicon, which makes it better suited for smaller transistors. The gap between two silicon nuclei is about 0.5nm; as the gate width gets ever smaller (about 7nm in this case), the channel becomes so small that the handful of silicon atoms can’t carry enough current. By mixing some germanium into the channel, electron mobility increases, and adequate current can flow. Silicon generally runs into problems at sub-10nm nodes, and we can expect Intel and TSMC to follow a similar path to IBM, GlobalFoundries, and Samsung (aka the Common Platform alliance).

EUV lithography is an even more interesting innovation. Basically, as chip features get smaller, you need a narrower beam of light to etch those features accurately, or you need to use multiple patterning (which we won’t go into here). The current state of the art for lithography is a 193nm ArF (argon fluoride) laser; that is, the wavelength is 193nm wide. Complex optics and multiple painstaking steps are required to etch 14nm features using a 193nm light source. EUV has a wavelength of just 13.5nm, which will handily take us down into the sub-10nm realm, but so far it has proven very difficult and expensive to deploy commercially (it has been just around the corner for quite a few years now).
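Two quick numbers, mine rather than Anthony’s, help put those points in perspective. A 7 nm channel with silicon nuclei spaced about 0.5 nm apart really does contain only a handful of atoms,

\[ \frac{7\ \mathrm{nm}}{0.5\ \mathrm{nm}} \approx 14\ \text{atoms across}, \]

and the usual Rayleigh scaling rule for optical lithography (a textbook relation, not something from the article) says the smallest printable feature goes roughly as the wavelength divided by the numerical aperture of the optics,

\[ \mathrm{CD} \approx k_{1}\,\frac{\lambda}{\mathrm{NA}}, \]

so, all else being equal, dropping from a 193 nm source to 13.5 nm buys roughly a fourteen-fold improvement in raw resolution, which is why EUV can dispense with so many of the multiple-patterning steps Anthony alludes to.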

If you’re interested in the nuances, I recommend reading Anthony’s article in its entirety.

One final comment: there was no discussion of electrodes or other metallic components associated with computer chips. The metallic components are a topic of some interest to me (anyway), given some research published by scientists at the Massachusetts Institute of Technology (MIT) last year. From my Oct. 14, 2014 posting,

Research from the Massachusetts Institute of Technology (MIT) has revealed a new property of metal nanoparticles, in this case, silver. From an Oct. 12, 2014 news item on ScienceDaily,

A surprising phenomenon has been found in metal nanoparticles: They appear, from the outside, to be liquid droplets, wobbling and readily changing shape, while their interiors retain a perfectly stable crystal configuration.

The research team behind the finding, led by MIT professor Ju Li, says the work could have important implications for the design of components in nanotechnology, such as metal contacts for molecular electronic circuits. [my emphasis added]

This discovery and others regarding materials and phase changes at ever diminishing sizes hint that a computer with a functioning 7nm chip might be a bit further off than IBM is suggesting.

Yarns of niobium nanowire for small electronic device boost at the University of British Columbia (Canada) and Massachusetts Institute of Technology (US)

It turns out that this research concerning supercapacitors is a collaboration between the University of British Columbia (Canada) and the Massachusetts Institute of Technology (MIT). From a July 7, 2015 news item by Stuart Milne for Azonano,

A team of researchers from MIT and the University of British Columbia has discovered an innovative method to deliver the short bursts of high power required by wearable electronic devices.

Such devices are used for monitoring health and fitness and as such are rapidly growing in the consumer electronics industry. However, a major drawback of these devices is that they are integrated with small batteries, which fail to deliver the amount of power required for data transmission.

According to the research team, one way to resolve this issue is to develop supercapacitors, which are capable of storing and releasing the short bursts of electrical power required to transmit data from smartphones, computers, heart-rate monitors, and other wearable devices. Supercapacitors can also prove useful for other applications where short bursts of high power are required, for instance autonomous microrobots.

A July 7, 2015 MIT news release provides more detail about the research,

The new approach uses yarns, made from nanowires of the element niobium, as the electrodes in tiny supercapacitors (which are essentially pairs of electrically conducting fibers with an insulator between). The concept is described in a paper in the journal ACS Applied Materials and Interfaces by MIT professor of mechanical engineering Ian W. Hunter, doctoral student Seyed M. Mirvakili, and three others at the University of British Columbia.

Nanotechnology researchers have been working to increase the performance of supercapacitors for the past decade. Among nanomaterials, carbon-based nanoparticles — such as carbon nanotubes and graphene — have shown promising results, but they suffer from relatively low electrical conductivity, Mirvakili says.

In this new work, he and his colleagues have shown that desirable characteristics for such devices, such as high power density, are not unique to carbon-based nanoparticles, and that niobium nanowire yarn is a promising alternative.

“Imagine you’ve got some kind of wearable health-monitoring system,” Hunter says, “and it needs to broadcast data, for example using Wi-Fi, over a long distance.” At the moment, the coin-sized batteries used in many small electronic devices have very limited ability to deliver a lot of power at once, which is what such data transmissions need.

“Long-distance Wi-Fi requires a fair amount of power,” says Hunter, the George N. Hatsopoulos Professor in Thermodynamics in MIT’s Department of Mechanical Engineering, “but it may not be needed for very long.” Small batteries are generally poorly suited for such power needs, he adds.

“We know it’s a problem experienced by a number of companies in the health-monitoring or exercise-monitoring space. So an alternative is to go to a combination of a battery and a capacitor,” Hunter says: the battery for long-term, low-power functions, and the capacitor for short bursts of high power. Such a combination should be able to either increase the range of the device, or — perhaps more important in the marketplace — to significantly reduce size requirements.
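To make that combination concrete, here is a rough worked example of my own (the numbers are purely illustrative and not from the paper). A Wi-Fi burst drawing 100 mW for 50 ms needs

\[ E = P\,t = 0.1\ \mathrm{W} \times 0.05\ \mathrm{s} = 5\ \mathrm{mJ}, \]

and a capacitor charged to 3 V can hold that much only if

\[ \tfrac{1}{2} C V^{2} \ge 5\ \mathrm{mJ} \;\Rightarrow\; C \gtrsim 1.1\ \mathrm{mF} \]

(more in practice, since only part of the stored energy is usable before the voltage droops). That is a trivial amount of energy by battery standards but an awkward amount of instantaneous power for a coin cell, which is exactly why pairing a small battery with a supercapacitor that soaks up the bursts makes sense.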

The new nanowire-based supercapacitor exceeds the performance of existing batteries, while occupying a very small volume. “If you’ve got an Apple Watch and I shave 30 percent off the mass, you may not even notice,” Hunter says. “But if you reduce the volume by 30 percent, that would be a big deal,” he says: Consumers are very sensitive to the size of wearable devices.

The innovation is especially significant for small devices, Hunter says, because other energy-storage technologies — such as fuel cells, batteries, and flywheels — tend to be less efficient, or simply too complex to be practical when reduced to very small sizes. “We are in a sweet spot,” he says, with a technology that can deliver big bursts of power from a very small device.

Ideally, Hunter says, it would be desirable to have a high volumetric power density (the amount of power stored in a given volume) and high volumetric energy density (the amount of energy in a given volume). “Nobody’s figured out how to do that,” he says. However, with the new device, “We have fairly high volumetric power density, medium energy density, and a low cost,” a combination that could be well suited for many applications.

Niobium is a fairly abundant and widely used material, Mirvakili says, so the whole system should be inexpensive and easy to produce. “The fabrication cost is cheap,” he says. Other groups have made similar supercapacitors using carbon nanotubes or other materials, but the niobium yarns are stronger and 100 times more conductive. Overall, niobium-based supercapacitors can store up to five times as much power in a given volume as carbon nanotube versions.

Niobium also has a very high melting point — nearly 2,500 degrees Celsius — so devices made from these nanowires could potentially be suitable for use in high-temperature applications.

In addition, the material is highly flexible and could be woven into fabrics, enabling wearable forms; individual niobium nanowires are just 140 nanometers in diameter — 140 billionths of a meter across, or about one-thousandth the width of a human hair.

So far, the material has been produced only in lab-scale devices. The next step, already under way, is to figure out how to design a practical, easily manufactured version, the researchers say.

“The work is very significant in the development of smart fabrics and future wearable technologies,” says Geoff Spinks, a professor of engineering at the University of Wollongong, in Australia, who was not associated with this research. This paper, he adds, “convincingly demonstrates the impressive performance of niobium-based fiber supercapacitors.”

Here’s a link to and a citation for the paper,

High-Performance Supercapacitors from Niobium Nanowire Yarns by Seyed M. Mirvakili, Mehr Negar Mirvakili, Peter Englezos, John D. W. Madden, and Ian W. Hunter. ACS Appl. Mater. Interfaces, 2015, 7 (25), pp 13882–13888 DOI: 10.1021/acsami.5b02327 Publication Date (Web): June 12, 2015

Copyright © 2015 American Chemical Society

This paper is behind a paywall.

LiquiGlide, a nanotechnology-enabled coating for food packaging and oil and gas pipelines

Getting condiments out of their bottles should be a lot easier in several European countries in the near future. A June 30, 2015 news item on Nanowerk describes the technology and the business deal (Note: A link has been removed),

The days of wasting condiments — and other products — that stick stubbornly to the sides of their bottles may be gone, thanks to MIT [Massachusetts Institute of Technology] spinout LiquiGlide, which has licensed its nonstick coating to a major consumer-goods company.

Developed in 2009 by MIT’s Kripa Varanasi and David Smith, LiquiGlide is a liquid-impregnated coating that acts as a slippery barrier between a surface and a viscous liquid. Applied inside a condiment bottle, for instance, the coating clings permanently to its sides, while allowing the condiment to glide off completely, with no residue.

In 2012, amidst a flurry of media attention following LiquiGlide’s entry in MIT’s $100K Entrepreneurship Competition, Smith and Varanasi founded the startup — with help from the Institute — to commercialize the coating.

Today [June 30, 2015], Norwegian consumer-goods producer Orkla has signed a licensing agreement to use LiquiGlide’s coating for mayonnaise products sold in Germany, Scandinavia, and several other European nations. This comes on the heels of another licensing deal, with Elmer’s [Elmer’s Glue & Adhesives], announced in March [2015].

A June 30, 2015 MIT news release, which originated the news item, provides more details about the researcher/entrepreneurs’ plans,

But this is only the beginning, says Varanasi, an associate professor of mechanical engineering who is now on LiquiGlide’s board of directors and chief science advisor. The startup, which just entered the consumer-goods market, is courting deals with numerous producers of foods, beauty supplies, and household products. “Our coatings can work with a whole range of products, because we can tailor each coating to meet the specific requirements of each application,” Varanasi says.

Apart from providing savings and convenience, LiquiGlide aims to reduce the surprising amount of wasted products — especially food — that stick to container sides and get tossed. For instance, in 2009 Consumer Reports found that up to 15 percent of bottled condiments are ultimately thrown away. Keeping bottles clean, Varanasi adds, could also drastically cut the use of water and energy, as well as the costs associated with rinsing bottles before recycling. “It has huge potential in terms of critical sustainability,” he says.

Varanasi says LiquiGlide aims next to tackle buildup in oil and gas pipelines, which can cause corrosion and clogs that reduce flow. [emphasis mine] Future uses, he adds, could include coatings for medical devices such as catheters, deicing roofs and airplane wings, and improving manufacturing and process efficiency. “Interfaces are ubiquitous,” he says. “We want to be everywhere.”

The news release goes on to describe the research process in more detail and offers a plug for MIT’s innovation efforts,

LiquiGlide was originally developed while Smith worked on his graduate research in Varanasi’s research group. Smith and Varanasi were interested in preventing ice buildup on airplane surfaces and methane hydrate buildup in oil and gas pipelines.

Some initial work was on superhydrophobic surfaces, which trap pockets of air and naturally repel water. But both researchers found that these surfaces don’t, in fact, shed every bit of liquid. During phase transitions — when vapor turns to liquid, for instance — water droplets condense within microscopic gaps on surfaces, and steadily accumulate. This leads to loss of anti-icing properties of the surface. “Something that is nonwetting to macroscopic drops does not remain nonwetting for microscopic drops,” Varanasi says.

Inspired by the work of researcher David Quéré, of ESPCI in Paris, on slippery “hemisolid-hemiliquid” surfaces, Varanasi and Smith invented permanently wet “liquid-impregnated surfaces” — coatings that don’t have such microscopic gaps. The coatings consist of textured solid material that traps a liquid lubricant through capillary and intermolecular forces. The coating wicks through the textured solid surface, clinging permanently under the product, allowing the product to slide off the surface easily; other materials can’t enter the gaps or displace the coating. “One can say that it’s a self-lubricating surface,” Varanasi says.

Mixing and matching the materials, however, is a complicated process, Varanasi says. Liquid components of the coating, for instance, must be compatible with the chemical and physical properties of the sticky product, and generally immiscible. The solid material must form a textured structure while adhering to the container. And the coating can’t spoil the contents: Foodstuffs, for instance, require safe, edible materials, such as plants and insoluble fibers.

To help choose ingredients, Smith and Varanasi developed the basic scientific principles and algorithms that calculate how the liquid and solid coating materials, the product, and the geometry of the surface structures will all interact, in order to find the optimal “recipe.”

Today, LiquiGlide develops coatings for clients and licenses the recipes to them. Included are instructions that detail the materials, equipment, and process required to create and apply the coating for their specific needs. “The state of the coating we end up with depends entirely on the properties of the product you want to slide over the surface,” says Smith, now LiquiGlide’s CEO.

Having researched materials for hundreds of different viscous liquids over the years — from peanut butter to crude oil to blood — LiquiGlide also has a database of optimal ingredients for its algorithms to pull from when customizing recipes. “Given any new product you want LiquiGlide for, we can zero in on a solution that meets all requirements necessary,” Varanasi says.

MIT: A lab for entrepreneurs

For years, Smith and Varanasi toyed around with commercial applications for LiquiGlide. But in 2012, with help from MIT’s entrepreneurial ecosystem, LiquiGlide went from lab to market in a matter of months.

Initially the idea was to bring coatings to the oil and gas industry. But one day, in early 2012, Varanasi saw his wife struggling to pour honey from its container. “And I thought, ‘We have a solution for that,’” Varanasi says.

The focus then became consumer packaging. Smith and Varanasi took the idea through several entrepreneurship classes — such as 6.933 (Entrepreneurship in Engineering: The Founder’s Journey) — and MIT’s Venture Mentoring Service and Innovation Teams, where student teams research the commercial potential of MIT technologies.

“I did pretty much every last thing you could do,” Smith says. “Because we have such a brilliant network here at MIT, I thought I should take advantage of it.”

That May [2012], Smith, Varanasi, and several MIT students entered LiquiGlide in the MIT $100K Entrepreneurship Competition, earning the Audience Choice Award — and the national spotlight. A video of ketchup sliding out of a LiquiGlide-coated bottle went viral. Numerous media outlets picked up the story, while hundreds of companies reached out to Varanasi to buy the coating. “My phone didn’t stop ringing, my website crashed for a month,” Varanasi says. “It just went crazy.”

That summer [2012], Smith and Varanasi took their startup idea to MIT’s Global Founders’ Skills Accelerator program, which introduced them to a robust network of local investors and helped them build a solid business plan. Soon after, they raised money from family and friends, and won $100,000 at the MassChallenge Entrepreneurship Competition.

When LiquiGlide Inc. launched in August 2012, clients were already knocking down the door. The startup chose a select number to pay for the development and testing of the coating for its products. Within a year, LiquiGlide was cash-flow positive, and had grown from three to 18 employees in its current Cambridge headquarters.

Looking back, Varanasi attributes much of LiquiGlide’s success to MIT’s innovation-based ecosystem, which promotes rapid prototyping for the marketplace through experimentation and collaboration. This ecosystem includes the Deshpande Center for Technological Innovation, the Martin Trust Center for MIT Entrepreneurship, the Venture Mentoring Service, and the Technology Licensing Office, among other initiatives. “Having a lab where we could think about … translating the technology to real-world applications, and having this ability to meet people, and bounce ideas … that whole MIT ecosystem was key,” Varanasi says.

Here’s the latest LiquiGlide video,


Credits:

Video: Melanie Gonick/MIT
Additional footage courtesy of LiquiGlide™
Music sampled from “Candlepower” by Chris Zabriskie
https://freemusicarchive.org/music/Ch…
http://creativecommons.org/licenses/b…

I had thought the EU (European Union) offered more roadblocks to marketing nanotechnology-enabled products used in food packaging than the US. If anyone knows why a US company would market its products in Europe first I would love to find out.

Solar-powered sensors to power the Internet of Things?

As a June 23, 2015 news item on Nanowerk notes, the ‘Internet of things’ will need lots and lots of power,

The latest buzz in the information technology industry regards “the Internet of things” — the idea that vehicles, appliances, civil-engineering structures, manufacturing equipment, and even livestock would have their own embedded sensors that report information directly to networked servers, aiding with maintenance and the coordination of tasks.

Realizing that vision, however, will require extremely low-power sensors that can run for months without battery changes — or, even better, that can extract energy from the environment to recharge.

Last week, at the Symposia on VLSI Technology and Circuits, MIT [Massachusetts Institute of Technology] researchers presented a new power converter chip that can harvest more than 80 percent of the energy trickling into it, even at the extremely low power levels characteristic of tiny solar cells. [emphasis mine] Previous experimental ultralow-power converters had efficiencies of only 40 or 50 percent.

A June 22, 2015 MIT news release (also on EurekAlert), which originated the news item, describes some additional capabilities,

Moreover, the researchers’ chip achieves those efficiency improvements while assuming additional responsibilities. Where its predecessors could use a solar cell to either charge a battery or directly power a device, this new chip can do both, and it can power the device directly from the battery.

All of those operations also share a single inductor — the chip’s main electrical component — which saves on circuit board space but increases the circuit complexity even further. Nonetheless, the chip’s power consumption remains low.

“We still want to have battery-charging capability, and we still want to provide a regulated output voltage,” says Dina Reda El-Damak, an MIT graduate student in electrical engineering and computer science and first author on the new paper. “We need to regulate the input to extract the maximum power, and we really want to do all these tasks with inductor sharing and see which operational mode is the best. And we want to do it without compromising the performance, at very limited input power levels — 10 nanowatts to 1 microwatt — for the Internet of things.”

The prototype chip was manufactured through the Taiwan Semiconductor Manufacturing Company’s University Shuttle Program.

The MIT news release goes on to describe chip specifics,

The circuit’s chief function is to regulate the voltages between the solar cell, the battery, and the device the cell is powering. If the battery operates for too long at a voltage that’s either too high or too low, for instance, its chemical reactants break down, and it loses the ability to hold a charge.

To control the current flow across their chip, El-Damak and her advisor, Anantha Chandrakasan, the Joseph F. and Nancy P. Keithley Professor in Electrical Engineering, use an inductor, which is a wire wound into a coil. When a current passes through an inductor, it generates a magnetic field, which in turn resists any change in the current.

Throwing switches in the inductor’s path causes it to alternately charge and discharge, so that the current flowing through it continuously ramps up and then drops back down to zero. Keeping a lid on the current improves the circuit’s efficiency, since the rate at which it dissipates energy as heat is proportional to the square of the current.

Once the current drops to zero, however, the switches in the inductor’s path need to be thrown immediately; otherwise, current could begin to flow through the circuit in the wrong direction, which would drastically diminish its efficiency. The complication is that the rate at which the current rises and falls depends on the voltage generated by the solar cell, which is highly variable. So the timing of the switch throws has to vary, too.
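(For readers who want the circuit theory behind those two paragraphs, this is standard inductor behaviour rather than anything unique to the MIT chip: the current ramps at a rate set by the voltage across the inductor, and the resistive loss grows with the square of the current,

\[ \frac{dI}{dt} = \frac{V}{L}, \qquad P_{\mathrm{loss}} = I^{2} R, \]

so switching often keeps the peak current, and therefore the heat, low, while the ever-changing solar-cell voltage is exactly what makes the right switching instants a moving target.)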

Electric hourglass

To control the switches’ timing, El-Damak and Chandrakasan use an electrical component called a capacitor, which can store electrical charge. The higher the current, the more rapidly the capacitor fills. When it’s full, the circuit stops charging the inductor.

The rate at which the current drops off, however, depends on the output voltage, whose regulation is the very purpose of the chip. Since that voltage is fixed, the variation in timing has to come from variation in capacitance. El-Damak and Chandrakasan thus equip their chip with a bank of capacitors of different sizes. As the current drops, it charges a subset of those capacitors, whose selection is determined by the solar cell’s voltage. Once again, when the capacitor fills, the switches in the inductor’s path are flipped.
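Here is a toy software model of that ‘electric hourglass’ logic, just to make the control idea concrete. Every value, threshold, and function name below is my own invention for illustration; the actual chip does all of this in analog hardware, and the news release does not even say which way the voltage-to-capacitor mapping runs.

```python
# Toy model of the timing scheme described above: pick a timing capacitor from a
# bank according to the solar cell's voltage, integrate charge onto it, and flip
# the inductor switches when it is "full". All values are invented; the real
# chip implements this in analog circuitry, not software.

CAP_BANK_PF = [1, 2, 4, 8, 16]  # hypothetical timing capacitors, in picofarads

def pick_capacitor(cell_voltage):
    """Map the measured solar-cell voltage onto one capacitor in the bank.
    (Which way this mapping should run is a guess on my part.)"""
    index = min(int(cell_voltage / 0.15), len(CAP_BANK_PF) - 1)
    return CAP_BANK_PF[index]

def time_until_flip(cap_pf, current_ua, v_full=1.0, dt_us=0.01):
    """Charge the chosen capacitor at a rate set by the inductor current and
    return the elapsed time (in µs) at which the switches would be flipped."""
    charge_pc, elapsed = 0.0, 0.0
    while charge_pc / cap_pf < v_full:   # V = Q/C, with Q in pC and C in pF
        charge_pc += current_ua * dt_us  # µA × µs = pC
        elapsed += dt_us
    return elapsed

if __name__ == "__main__":
    for v_cell in (0.2, 0.4, 0.6):
        cap = pick_capacitor(v_cell)
        t = time_until_flip(cap, current_ua=5.0)
        print(f"cell at {v_cell:.1f} V -> {cap} pF timer, flip after {t:.2f} µs")
```

The only point of the exercise is to show how a bank of capacitor sizes can stand in for a variable timer when the output voltage, and hence the current’s rate of decline, is pinned.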

“In this technology space, there’s usually a trend to lower efficiency as the power gets lower, because there’s a fixed amount of energy that’s consumed by doing the work,” says Brett Miwa, who leads a power conversion development project as a fellow at the chip manufacturer Maxim Integrated. “If you’re only coming in with a small amount, it’s hard to get most of it out, because you lose more as a percentage. [El-Damak’s] design is unusually efficient for how low a power level she’s at.”

“One of the things that’s most notable about it is that it’s really a fairly complete system,” he adds. “It’s really kind of a full system-on-a-chip for power management. And that makes it a little more complicated, a little bit larger, and a little bit more comprehensive than some of the other designs that might be reported in the literature. So for her to still achieve these high-performance specs in a much more sophisticated system is also noteworthy.”

I wonder how close they are to commercializing this chip (see below),

The MIT researchers’ prototype for a chip measuring 3 millimeters by 3 millimeters. The magnified detail shows the chip’s main control circuitry, including the startup electronics; the controller that determines whether to charge the battery, power a device, or both; and the array of switches that control current flow to an external inductor coil. This active area measures just 2.2 millimeters by 1.1 millimeters.
Courtesy: MIT

I sing the body cyber: two projects funded by the US National Science Foundation

Points to anyone who recognized the reference to Walt Whitman’s poem, “I sing the body electric,” from his classic collection, Leaves of Grass (1867 edition; h/t Wikipedia entry). I wonder if the cyber physical systems (CPS) work being funded by the US National Science Foundation (NSF) in the US will occasion poetry too.

More practically, a May 15, 2015 news item on Nanowerk, describes two cyber physical systems (CPS) research projects newly funded by the NSF,

Today [May 12, 2015] the National Science Foundation (NSF) announced two, five-year, center-scale awards totaling $8.75 million to advance the state-of-the-art in medical and cyber-physical systems (CPS).

One project will develop “Cyberheart”–a platform for virtual, patient-specific human heart models and associated device therapies that can be used to improve and accelerate medical-device development and testing. The other project will combine teams of microrobots with synthetic cells to perform functions that may one day lead to tissue and organ re-generation.

CPS are engineered systems that are built from, and depend upon, the seamless integration of computation and physical components. Often called the “Internet of Things,” CPS enable capabilities that go beyond the embedded systems of today.

“NSF has been a leader in supporting research in cyber-physical systems, which has provided a foundation for putting the ‘smart’ in health, transportation, energy and infrastructure systems,” said Jim Kurose, head of Computer & Information Science & Engineering at NSF. “We look forward to the results of these two new awards, which paint a new and compelling vision for what’s possible for smart health.”

Cyber-physical systems have the potential to benefit many sectors of our society, including healthcare. While advances in sensors and wearable devices have the capacity to improve aspects of medical care, from disease prevention to emergency response, and synthetic biology and robotics hold the promise of regenerating and maintaining the body in radical new ways, little is known about how advances in CPS can integrate these technologies to improve health outcomes.

These new NSF-funded projects will investigate two very different ways that CPS can be used in the biological and medical realms.

A May 12, 2015 NSF news release (also on EurekAlert), which originated the news item, describes the two CPS projects,

Bio-CPS for engineering living cells

A team of leading computer scientists, roboticists and biologists from Boston University, the University of Pennsylvania and MIT have come together to develop a system that combines the capabilities of nano-scale robots with specially designed synthetic organisms. Together, they believe this hybrid “bio-CPS” will be capable of performing heretofore impossible functions, from microscopic assembly to cell sensing within the body.

“We bring together synthetic biology and micron-scale robotics to engineer the emergence of desired behaviors in populations of bacterial and mammalian cells,” said Calin Belta, a professor of mechanical engineering, systems engineering and bioinformatics at Boston University and principal investigator on the project. “This project will impact several application areas ranging from tissue engineering to drug development.”

The project builds on previous research by each team member in diverse disciplines and early proof-of-concept designs of bio-CPS. According to the team, the research is also driven by recent advances in the emerging field of synthetic biology, in particular the ability to rapidly incorporate new capabilities into simple cells. Researchers so far have not been able to control and coordinate the behavior of synthetic cells in isolation, but the introduction of microrobots that can be externally controlled may be transformative.

In this new project, the team will focus on bio-CPS with the ability to sense, transport and work together. As a demonstration of their idea, they will develop teams of synthetic cell/microrobot hybrids capable of constructing a complex, fabric-like surface.

Vijay Kumar (University of Pennsylvania), Ron Weiss (MIT), and Douglas Densmore (BU) are co-investigators of the project.

Medical-CPS and the ‘Cyberheart’

CPS such as wearable sensors and implantable devices are already being used to assess health, improve quality of life, provide cost-effective care and potentially speed up disease diagnosis and prevention. [emphasis mine]

Extending these efforts, researchers from seven leading universities and centers are working together to develop far more realistic cardiac and device models than currently exist. This so-called “Cyberheart” platform can be used to test and validate medical devices faster and at a far lower cost than existing methods. CyberHeart also can be used to design safe, patient-specific device therapies, thereby lowering the risk to the patient.

“Innovative ‘virtual’ design methodologies for implantable cardiac medical devices will speed device development and yield safer, more effective devices and device-based therapies, than is currently possible,” said Scott Smolka, a professor of computer science at Stony Brook University and one of the principal investigators on the award.

The group’s approach combines patient-specific computational models of heart dynamics with advanced mathematical techniques for analyzing how these models interact with medical devices. The analytical techniques can be used to detect potential flaws in device behavior early on during the device-design phase, before animal and human trials begin. They also can be used in a clinical setting to optimize device settings on a patient-by-patient basis before devices are implanted.

“We believe that our coordinated, multi-disciplinary approach, which balances theoretical, experimental and practical concerns, will yield transformational results in medical-device design and foundations of cyber-physical system verification,” Smolka said.

The team will develop virtual device models which can be coupled together with virtual heart models to realize a full virtual development platform that can be subjected to computational analysis and simulation techniques. Moreover, they are working with experimentalists who will study the behavior of virtual and actual devices on animals’ hearts.

Co-investigators on the project include Edmund Clarke (Carnegie Mellon University), Elizabeth Cherry (Rochester Institute of Technology), W. Rance Cleaveland (University of Maryland), Flavio Fenton (Georgia Tech), Rahul Mangharam (University of Pennsylvania), Arnab Ray (Fraunhofer Center for Experimental Software Engineering [Germany]) and James Glimm and Radu Grosu (Stony Brook University). Richard A. Gray of the U.S. Food and Drug Administration is another key contributor.

It is fascinating to observe how the terminology is shifting from pacemakers and deep brain stimulators as implants to “CPS such as wearable sensors and implantable devices … .” A new category has been created, CPS, which conjoins medical devices with other sensing devices such as the wearable fitness monitors found in the consumer market. I imagine it’s an attempt to quell fears about injecting strange things into or adding strange things to your body—microrobots and nanorobots partially derived from synthetic biology research, which are “… capable of performing heretofore impossible functions, from microscopic assembly to cell sensing within the body.” They’ve also sneaked in a reference to synthetic biology, an area of research where some concerns have been expressed. From my March 19, 2013 post about a poll on synthetic biology concerns,

In our latest survey, conducted in January 2013, three-fourths of respondents say they have heard little or nothing about synthetic biology, a level consistent with that measured in 2010. While initial impressions about the science are largely undefined, these feelings do not necessarily become more positive as respondents learn more. The public has mixed reactions to specific synthetic biology applications, and almost one-third of respondents favor a ban “on synthetic biology research until we better understand its implications and risks,” while 61 percent think the science should move forward.

I imagine that for scientists, 61% in favour of more research is not particularly comforting given how easily and quickly public opinion can shift.

Sealing graphene’s defects to make a better filtration device

Making a graphene filter that allows water to pass through while screening out salt and/or noxious materials has been more challenging than one might think. According to a May 7, 2015 news item on Nanowerk, graphene filters can be ‘leaky’,

For faster, longer-lasting water filters, some scientists are looking to graphene — thin, strong sheets of carbon — to serve as ultrathin membranes, filtering out contaminants to quickly purify high volumes of water.

Graphene’s unique properties make it a potentially ideal membrane for water filtration or desalination. But there’s been one main drawback to its wider use: Making membranes in one-atom-thick layers of graphene is a meticulous process that can tear the thin material — creating defects through which contaminants can leak.

Now engineers at MIT [Massachusetts Institute of Technology], Oak Ridge National Laboratory, and King Fahd University of Petroleum and Minerals (KFUPM) have devised a process to repair these leaks, filling cracks and plugging holes using a combination of chemical deposition and polymerization techniques. The team then used a process it developed previously to create tiny, uniform pores in the material, small enough to allow only water to pass through.

A May 8, 2015 MIT news release (also on EurekAlert), which originated the news item, expands on the theme,

Combining these two techniques, the researchers were able to engineer a relatively large defect-free graphene membrane — about the size of a penny. The membrane’s size is significant: To be exploited as a filtration membrane, graphene would have to be manufactured at a scale of centimeters, or larger.

In experiments, the researchers pumped water through a graphene membrane treated with both defect-sealing and pore-producing processes, and found that water flowed through at rates comparable to current desalination membranes. The graphene was able to filter out most large-molecule contaminants, such as magnesium sulfate and dextran.

Rohit Karnik, an associate professor of mechanical engineering at MIT, says the group’s results, published in the journal Nano Letters, represent the first success in plugging graphene’s leaks.

“We’ve been able to seal defects, at least on the lab scale, to realize molecular filtration across a macroscopic area of graphene, which has not been possible before,” Karnik says. “If we have better process control, maybe in the future we don’t even need defect sealing. But I think it’s very unlikely that we’ll ever have perfect graphene — there will always be some need to control leakages. These two [techniques] are examples which enable filtration.”

Sean O’Hern, a former graduate research assistant at MIT, is the paper’s first author. Other contributors include MIT graduate student Doojoon Jang, former graduate student Suman Bose, and Professor Jing Kong.

A delicate transfer

“The current types of membranes that can produce freshwater from saltwater are fairly thick, on the order of 200 nanometers,” O’Hern says. “The benefit of a graphene membrane is, instead of being hundreds of nanometers thick, we’re on the order of three angstroms — 600 times thinner than existing membranes. This enables you to have a higher flow rate over the same area.”
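(Checking O’Hern’s arithmetic, for the curious: three angstroms is 0.3 nm, and

\[ \frac{200\ \mathrm{nm}}{0.3\ \mathrm{nm}} \approx 670, \]

so ‘about 600 times thinner’ is, if anything, slightly conservative. Three angstroms is also essentially the thickness of a single sheet of graphene.)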

O’Hern and Karnik have been investigating graphene’s potential as a filtration membrane for the past several years. In 2009, the group began fabricating membranes from graphene grown on copper — a metal that supports the growth of graphene across relatively large areas. However, copper is impermeable, requiring the group to transfer the graphene to a porous substrate following fabrication.

O’Hern noticed, however, that this transfer process would create tears in the graphene. What’s more, he observed intrinsic defects created during the growth process, resulting perhaps from impurities in the original material.

Plugging graphene’s leaks

To plug graphene’s leaks, the team came up with a technique to first tackle the smaller intrinsic defects, then the larger transfer-induced defects. For the intrinsic defects, the researchers used a process called “atomic layer deposition,” placing the graphene membrane in a vacuum chamber, then pulsing in a hafnium-containing chemical that does not normally interact with graphene. However, if the chemical comes in contact with a small opening in graphene, it will tend to stick to that opening, attracted by the area’s higher surface energy.

The team applied several rounds of atomic layer deposition, finding that the deposited hafnium oxide successfully filled in graphene’s nanometer-scale intrinsic defects. However, O’Hern realized that using the same process to fill in much larger holes and tears — on the order of hundreds of nanometers — would require too much time.

Instead, he and his colleagues came up with a second technique to fill in larger defects, using a process called “interfacial polymerization” that is often employed in membrane synthesis. After they filled in graphene’s intrinsic defects, the researchers submerged the membrane at the interface of two solutions: a water bath and an organic solvent that, like oil, does not mix with water.

In the two solutions, the researchers dissolved two different molecules that can react to form nylon. Once O’Hern placed the graphene membrane at the interface of the two solutions, he observed that nylon plugs formed only in tears and holes — regions where the two molecules could come in contact because of tears in the otherwise impermeable graphene — effectively sealing the remaining defects.

Using a technique they developed last year, the researchers then etched tiny, uniform holes in graphene — small enough to let water molecules through, but not larger contaminants. In experiments, the group tested the membrane with water containing several different molecules, including salt, and found that the membrane rejected up to 90 percent of larger molecules. However, it let salt through at a faster rate than water.

The preliminary tests suggest that graphene may be a viable alternative to existing filtration membranes, although Karnik says techniques to seal its defects and control its permeability will need further improvements.

“Water desalination and nanofiltration are big applications where, if things work out and this technology withstands the different demands of real-world tests, it would have a large impact,” Karnik says. “But one could also imagine applications for fine chemical- or biological-sample processing, where these membranes could be useful. And this is the first report of a centimeter-scale graphene membrane that does any kind of molecular filtration. That’s exciting.”

De-en Jiang, an assistant professor of chemistry at the University of California at Riverside, sees the defect-sealing technique as “a great advance toward making graphene filtration a reality.”

“The two-step technique is very smart: sealing the defects while preserving the desired pores for filtration,” says Jiang, who did not contribute to the research. “This would make the scale-up much easier. One can produce a large graphene membrane first, not worrying about the defects, which can be sealed later.”
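Two of the numbers quoted above are worth unpacking. O’Hern’s “600 times thinner” is simply the ratio of the two thicknesses he cites (200 nanometers versus roughly 3 angstroms), and the “up to 90 percent” figure is what membrane researchers call solute rejection. Here is a minimal back-of-the-envelope sketch of both calculations in Python; the concentrations are invented purely for illustration and are not data from the Nano Letters paper, and the idea that flow scales inversely with thickness is a simplification on my part, not a claim by the MIT team.

# Back-of-the-envelope numbers for the quoted claims above.
# Assumptions: the concentrations are hypothetical, for illustration only,
# and "thinner membrane -> higher flow" is treated as a rough 1/thickness scaling.

# 1) "600 times thinner": conventional membrane vs. monolayer graphene
conventional_nm = 200.0   # "on the order of 200 nanometers"
graphene_nm = 0.3         # 3 angstroms = 0.3 nanometers
print(f"thickness ratio: ~{conventional_nm / graphene_nm:.0f}x")  # ~667x, i.e. the quoted "600 times"

# 2) Solute rejection, as commonly defined: R = 1 - C_permeate / C_feed
def rejection(feed_conc: float, permeate_conc: float) -> float:
    """Fractional rejection of a solute by the membrane."""
    return 1.0 - permeate_conc / feed_conc

# A large molecule such as dextran, mostly blocked (~90 percent rejection)
print(f"large molecule: {rejection(10.0, 1.0):.0%} rejected")

# Salt passing through faster than water shows up as low, or even negative, rejection
print(f"salt: {rejection(10.0, 10.5):.0%} rejected")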

I have featured graphene and water desalination work from these researchers at MIT before, in a Feb. 27, 2014 posting. Interestingly, there was no mention of problems with defects in the news release highlighting that previous work.

Here’s a link to and a citation for the latest paper,

Nanofiltration across Defect-Sealed Nanoporous Monolayer Graphene by Sean C. O’Hern, Doojoon Jang, Suman Bose, Juan-Carlos Idrobo, Yi Song, Tahar Laoui, Jing Kong, and Rohit Karnik. Nano Lett., Article ASAP DOI: 10.1021/acs.nanolett.5b00456 Publication Date (Web): April 27, 2015

Copyright © 2015 American Chemical Society

This paper is behind a paywall.

Carbon nanotubes sense spoiled food

[Animated .gif: carbon nanotube food-spoilage sensor]

Courtesy: MIT (Massachusetts Institute of Technology)

I love this .gif; it says a lot without a word. However for details, you need words and here’s what an April 15, 2015 news item on Nanowerk has to say about the research illustrated by the .gif,

MIT [Massachusetts Institute of Technology] chemists have devised an inexpensive, portable sensor that can detect gases emitted by rotting meat, allowing consumers to determine whether the meat in their grocery store or refrigerator is safe to eat.

The sensor, which consists of chemically modified carbon nanotubes, could be deployed in “smart packaging” that would offer much more accurate safety information than the expiration date on the package, says Timothy Swager, the John D. MacArthur Professor of Chemistry at MIT.

An April 14, 2015 MIT news release (also on EurekAlert), which originated the news item, offers more from Dr. Swager,

It could also cut down on food waste, he adds. “People are constantly throwing things out that probably aren’t bad,” says Swager, who is the senior author of a paper describing the new sensor this week in the journal Angewandte Chemie.

This latest study builds on previous work from Swager’s lab (Note: Links have been removed),

The sensor is similar to other carbon nanotube devices that Swager’s lab has developed in recent years, including one that detects the ripeness of fruit. All of these devices work on the same principle: Carbon nanotubes can be chemically modified so that their ability to carry an electric current changes in the presence of a particular gas.

In this case, the researchers modified the carbon nanotubes with metal-containing compounds called metalloporphyrins, which contain a central metal atom bound to several nitrogen-containing rings. Hemoglobin, which carries oxygen in the blood, is a metalloporphyrin with iron as the central atom.

For this sensor, the researchers used a metalloporphyrin with cobalt at its center. Metalloporphyrins are very good at binding to nitrogen-containing compounds called amines. Of particular interest to the researchers were the so-called biogenic amines, such as putrescine and cadaverine, which are produced by decaying meat.

When the cobalt-containing porphyrin binds to any of these amines, it increases the electrical resistance of the carbon nanotube, which can be easily measured.

“We use these porphyrins to fabricate a very simple device where we apply a potential across the device and then monitor the current. When the device encounters amines, which are markers of decaying meat, the current of the device will become lower,” says [first author Sophie] Liu.

In this study, the researchers tested the sensor on four types of meat: pork, chicken, cod, and salmon. They found that when refrigerated, all four types stayed fresh over four days. Left unrefrigerated, the samples all decayed, but at varying rates.

There are other sensors that can detect the signs of decaying meat, but they are usually large and expensive instruments that require expertise to operate. “The advantage we have is these are the cheapest, smallest, easiest-to-manufacture sensors,” Swager says.

“There are several potential advantages in having an inexpensive sensor for measuring, in real time, the freshness of meat and fish products, including preventing foodborne illness, increasing overall customer satisfaction, and reducing food waste at grocery stores and in consumers’ homes,” says Roberto Forloni, a senior science fellow at Sealed Air, a major supplier of food packaging, who was not part of the research team.

The new device also requires very little power and could be incorporated into a wireless platform Swager’s lab recently developed that allows a regular smartphone to read output from carbon nanotube sensors such as this one.
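What Liu describes earlier, applying a potential and monitoring the current, is a chemiresistive readout: hold a fixed voltage across the nanotube film, watch the current, and treat a drop in current as a rise in resistance caused by amines binding to the porphyrins. Here is a minimal sketch of that conversion using nothing more than Ohm’s law; the currents and the alert threshold are numbers I have invented for illustration and are not values from the Angewandte Chemie paper.

# Chemiresistive readout sketch: apply a fixed potential, monitor the current,
# and convert a current drop into a resistance increase via Ohm's law.
# All numbers below are hypothetical, for illustration only.

APPLIED_VOLTAGE = 1.0  # volts, held constant across the device

def resistance_ohms(current_amps: float) -> float:
    return APPLIED_VOLTAGE / current_amps

baseline_current = 1.0e-6  # amps, fresh sample (made-up value)
exposed_current = 0.8e-6   # amps, after amines lower the current (made-up value)

baseline_r = resistance_ohms(baseline_current)
exposed_r = resistance_ohms(exposed_current)
relative_rise = (exposed_r - baseline_r) / baseline_r

print(f"resistance rose by {relative_rise:.0%}")  # amines bind -> current falls -> resistance rises

if relative_rise > 0.10:  # arbitrary illustrative threshold
    print("possible spoilage: amine markers detected")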

The funding sources are interesting, something I am noticing with increasing frequency these days (from the news release),

The researchers have filed for a patent on the technology and hope to license it for commercial development. The research was funded by the National Science Foundation and the Army Research Office through MIT’s Institute for Soldier Nanotechnologies.

Here’s a link to and a citation for the paper,

Single-Walled Carbon Nanotube/Metalloporphyrin Composites for the Chemiresistive Detection of Amines and Meat Spoilage by Sophie F. Liu, Alexander R. Petty, Graham T. Sazama, and Timothy M. Swager. Angewandte Chemie International Edition DOI: 10.1002/anie.201501434 Article first published online: 13 APR 2015

© 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim

This article is behind a paywall.

There are other posts here about the quest to create food sensors, including a Sept. 26, 2013 piece featuring a critique (by another blogger) of food sensors that could end up costing more than the items they are meant to protect, a problem Swager claims to have overcome according to an April 17, 2015 article by Ben Schiller for Fast Company (Note: Links have been removed),

Swager has set up a company to commercialize the technology and he expects to do the first demonstrations to interested clients this summer. The first applications are likely to be for food workers working with meat and fish, but there’s no reason why consumers shouldn’t get their own devices in due time.

There are efforts to create visual clues for food status. But Swager says his method is better because it doesn’t rely on perception: it produces hard data that can be logged and tracked. And it also has potential to be very cheap.

“The resistance method is a game-changer because it’s two to three orders of magnitude cheaper than other technology. It’s hard to imagine doing this cheaper,” he says.