
Nanomaterials and UV (ultraviolet) light for environmental cleanups

I think this is the first time I’ve seen anything about a technology that removes toxic materials from both water and soil; it’s usually one or the other. A July 22, 2015 news item on Nanowerk makes the announcement (Note: A link has been removed),

Many human-made pollutants in the environment resist degradation through natural processes, and disrupt hormonal and other systems in mammals and other animals. Removing these toxic materials — which include pesticides and endocrine disruptors such as bisphenol A (BPA) — with existing methods is often expensive and time-consuming.

In a new paper published this week in Nature Communications (“Nanoparticles with photoinduced precipitation for the extraction of pollutants from water and soil”), researchers from MIT [Massachusetts Institute of Technology] and the Federal University of Goiás in Brazil demonstrate a novel method for using nanoparticles and ultraviolet (UV) light to quickly isolate and extract a variety of contaminants from soil and water.

A July 21, 2015 MIT news release by Jonathan Mingle, which originated the news item, describes the inspiration and the research in more detail,

Ferdinand Brandl and Nicolas Bertrand, the two lead authors, are former postdocs in the laboratory of Robert Langer, the David H. Koch Institute Professor at MIT’s Koch Institute for Integrative Cancer Research. (Eliana Martins Lima, of the Federal University of Goiás, is the other co-author.) Both Brandl and Bertrand are trained as pharmacists, and describe their discovery as a happy accident: They initially sought to develop nanoparticles that could be used to deliver drugs to cancer cells.

Brandl had previously synthesized polymers that could be cleaved apart by exposure to UV light. But he and Bertrand came to question their suitability for drug delivery, since UV light can be damaging to tissue and cells, and doesn’t penetrate through the skin. When they learned that UV light was used to disinfect water in certain treatment plants, they began to ask a different question.

“We thought if they are already using UV light, maybe they could use our particles as well,” Brandl says. “Then we came up with the idea to use our particles to remove toxic chemicals, pollutants, or hormones from water, because we saw that the particles aggregate once you irradiate them with UV light.”

A trap for ‘water-fearing’ pollution

The researchers synthesized polymers from polyethylene glycol, a widely used compound found in laxatives, toothpaste, and eye drops and approved by the Food and Drug Administration as a food additive, and polylactic acid, a biodegradable plastic used in compostable cups and glassware.

Nanoparticles made from these polymers have a hydrophobic core and a hydrophilic shell. Due to molecular-scale forces, in a solution hydrophobic pollutant molecules move toward the hydrophobic nanoparticles, and adsorb onto their surface, where they effectively become “trapped.” This same phenomenon is at work when spaghetti sauce stains the surface of plastic containers, turning them red: In that case, both the plastic and the oil-based sauce are hydrophobic and interact together.

If left alone, these nanomaterials would remain suspended and dispersed evenly in water. But when exposed to UV light, the stabilizing outer shell of the particles is shed, and — now “enriched” by the pollutants — they form larger aggregates that can then be removed through filtration, sedimentation, or other methods.

The researchers used the method to extract phthalates, hormone-disrupting chemicals used to soften plastics, from wastewater; BPA, another endocrine-disrupting synthetic compound widely used in plastic bottles and other resinous consumer goods, from thermal printing paper samples; and polycyclic aromatic hydrocarbons, carcinogenic compounds formed from incomplete combustion of fuels, from contaminated soil.

The process is irreversible and the polymers are biodegradable, minimizing the risks of leaving toxic secondary products to persist in, say, a body of water. “Once they switch to this macro situation where they’re big clumps,” Bertrand says, “you won’t be able to bring them back to the nano state again.”

The fundamental breakthrough, according to the researchers, was confirming that small molecules do indeed adsorb passively onto the surface of nanoparticles.

“To the best of our knowledge, it is the first time that the interactions of small molecules with pre-formed nanoparticles can be directly measured,” they write in Nature Communications.

Nano cleansing

Even more exciting, they say, is the wide range of potential uses, from environmental remediation to medical analysis.

The polymers are synthesized at room temperature, and don’t need to be specially prepared to target specific compounds; they are broadly applicable to all kinds of hydrophobic chemicals and molecules.

“The interactions we exploit to remove the pollutants are non-specific,” Brandl says. “We can remove hormones, BPA, and pesticides that are all present in the same sample, and we can do this in one step.”

And the nanoparticles’ high surface-area-to-volume ratio means that only a small amount is needed to remove a relatively large quantity of pollutants. The technique could thus offer potential for the cost-effective cleanup of contaminated water and soil on a wider scale.
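The surface-area-to-volume argument is easy to quantify: for spheres, surface area per unit volume scales as 3/r, so dividing a fixed amount of polymer into ever-smaller particles multiplies the adsorbing surface enormously. A rough back-of-envelope sketch in Python (the volumes and radii are illustrative, not taken from the paper):

```python
def total_surface_area(total_volume_m3, particle_radius_m):
    """Total surface area when a fixed volume of material is split into
    identical spheres of radius r; for a sphere, surface/volume = 3/r."""
    return 3.0 * total_volume_m3 / particle_radius_m

volume = 1e-6  # one cubic centimetre of polymer, an illustrative amount
for radius in (1e-3, 1e-6, 50e-9):  # a 1 mm bead, a 1 um particle, a 50 nm nanoparticle
    print(f"radius {radius:.0e} m -> total surface {total_surface_area(volume, radius):.2e} m^2")
```

At a 50 nm radius, a single cubic centimetre of polymer presents on the order of 60 square meters of adsorbing surface, which is why "only a small amount is needed."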

“From the applied perspective, we showed in a system that the adsorption of small molecules on the surface of the nanoparticles can be used for extraction of any kind,” Bertrand says. “It opens the door for many other applications down the line.”

This approach could possibly be further developed, he speculates, to replace the widespread use of organic solvents for everything from decaffeinating coffee to making paint thinners. Bertrand cites DDT, banned for use as a pesticide in the U.S. since 1972 but still widely used in other parts of the world, as another example of a persistent pollutant that could potentially be remediated using these nanomaterials. “And for analytical applications where you don’t need as much volume to purify or concentrate, this might be interesting,” Bertrand says, offering the example of a cheap testing kit for urine analysis of medical patients.

The study also suggests the broader potential for adapting nanoscale drug-delivery techniques, originally developed for medicine, for use in environmental remediation.

“That we can apply some of the highly sophisticated, high-precision tools developed for the pharmaceutical industry, and now look at the use of these technologies in broader terms, is phenomenal,” says Frank Gu, an assistant professor of chemical engineering at the University of Waterloo in Canada, and an expert in nanoengineering for health care and medical applications.

“When you think about field deployment, that’s far down the road, but this paper offers a really exciting opportunity to crack a problem that is persistently present,” says Gu, who was not involved in the research. “If you take the normal conventional civil engineering or chemical engineering approach to treating it, it just won’t touch it. That’s where the most exciting part is.”

The researchers have made this illustration of their work available,

Nanoparticles that lose their stability upon irradiation with light have been designed to extract endocrine disruptors, pesticides, and other contaminants from water and soils. The system exploits the large surface-to-volume ratio of nanoparticles, while the photoinduced precipitation ensures nanomaterials are not released in the environment. Image: Nicolas Bertrand Courtesy: MIT


Here’s a link to and a citation for the paper,

Nanoparticles with photoinduced precipitation for the extraction of pollutants from water and soil by Ferdinand Brandl, Nicolas Bertrand, Eliana Martins Lima & Robert Langer. Nature Communications 6, Article number: 7765. doi:10.1038/ncomms8765. Published 21 July 2015.

This paper is open access.

IBM and its working 7nm test chip

I wrote about IBM and its plans for a 7nm computer chip last year in a July 11, 2014 posting, which also mentioned HP Labs’ and other companies’ plans for shrinking their computer chips. Almost one year later, IBM has announced, in a July 9, 2015 IBM news release on PRNewswire.com, the accomplishment of a working 7nm test chip,

An alliance led by IBM Research (NYSE: IBM) today announced that it has produced the semiconductor industry’s first 7nm (nanometer) node test chips with functioning transistors.  The breakthrough, accomplished in partnership with GLOBALFOUNDRIES and Samsung at SUNY Polytechnic Institute’s Colleges of Nanoscale Science and Engineering (SUNY Poly CNSE), could result in the ability to place more than 20 billion tiny switches — transistors — on the fingernail-sized chips that power everything from smartphones to spacecraft.

To achieve the higher performance, lower power and scaling benefits promised by 7nm technology, researchers had to bypass conventional semiconductor manufacturing approaches. Among the novel processes and techniques pioneered by the IBM Research alliance were a number of industry-first innovations, most notably Silicon Germanium (SiGe) channel transistors and Extreme Ultraviolet (EUV) lithography integration at multiple levels.

Industry experts consider 7nm technology crucial to meeting the anticipated demands of future cloud computing and Big Data systems, cognitive computing, mobile products and other emerging technologies. Part of IBM’s $3 billion, five-year investment in chip R&D (announced in 2014), this accomplishment was made possible through a unique public-private partnership with New York State and joint development alliance with GLOBALFOUNDRIES, Samsung and equipment suppliers. The team is based at SUNY Poly’s NanoTech Complex in Albany [New York state].

“For business and society to get the most out of tomorrow’s computers and devices, scaling to 7nm and beyond is essential,” said Arvind Krishna, senior vice president and director of IBM Research. “That’s why IBM has remained committed to an aggressive basic research agenda that continually pushes the limits of semiconductor technology. Working with our partners, this milestone builds on decades of research that has set the pace for the microelectronics industry, and positions us to advance our leadership for years to come.”

Microprocessors utilizing 22nm and 14nm technology power today’s servers, cloud data centers and mobile devices, and 10nm technology is well on the way to becoming a mature technology. The IBM Research-led alliance achieved close to 50 percent area scaling improvements over today’s most advanced technology, introduced SiGe channel material for transistor performance enhancement at 7nm node geometries, developed process innovations to stack transistors below 30nm pitch, and fully integrated EUV lithography at multiple levels. These techniques and scaling could result in at least a 50 percent power/performance improvement for next generation mainframe and POWER systems that will power the Big Data, cloud and mobile era.

“Governor Andrew Cuomo’s trailblazing public-private partnership model is catalyzing historic innovation and advancement. Today’s [July 8, 2015] announcement is just one example of our collaboration with IBM, which furthers New York State’s global leadership in developing next generation technologies,” said Dr. Michael Liehr, SUNY Poly Executive Vice President of Innovation and Technology and Vice President of Research.  “Enabling the first 7nm node transistors is a significant milestone for the entire semiconductor industry as we continue to push beyond the limitations of our current capabilities.”

“Today’s announcement marks the latest achievement in our long history of collaboration to accelerate development of next-generation technology,” said Gary Patton, CTO and Head of Worldwide R&D at GLOBALFOUNDRIES. “Through this joint collaborative program based at the Albany NanoTech Complex, we are able to maintain our focus on technology leadership for our clients and partners by helping to address the development challenges central to producing a smaller, faster, more cost efficient generation of semiconductors.”

The 7nm node milestone continues IBM’s legacy of historic contributions to silicon and semiconductor innovation. They include the invention or first implementation of the single cell DRAM, the Dennard Scaling Laws, chemically amplified photoresists, copper interconnect wiring, Silicon on Insulator, strained engineering, multi core microprocessors, immersion lithography, high speed SiGe, High-k gate dielectrics, embedded DRAM, 3D chip stacking and Air gap insulators.

In 2014, they were talking about carbon nanotubes with regard to the 7nm chip; this shift to silicon germanium is interesting.
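For what it’s worth, the release’s “close to 50 percent area scaling” figure is consistent with simple geometry: a full node step conventionally shrinks each linear dimension by roughly 0.7x, and chip area scales as the square of the linear shrink. A quick sketch (the 0.7 ratio is the industry rule of thumb, not a number from IBM’s release):

```python
# Shrinking every linear dimension by a factor s shrinks circuit area by s**2.
linear_shrink = 0.7              # conventional node-to-node rule of thumb
area_ratio = linear_shrink ** 2  # ~0.49, i.e. roughly half the area
print(f"a {linear_shrink:.1f}x linear shrink leaves {area_ratio:.0%} of the area")
```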

Sebastian Anthony in a July 9, 2015 article for Ars Technica offers some intriguing insight into the accomplishment and the technology (Note: A link has been removed),

… While it should be stressed that commercial 7nm chips remain at least two years away, this test chip from IBM and its partners is extremely significant for three reasons: it’s a working sub-10nm chip (this is pretty significant in itself); it’s the first commercially viable sub-10nm FinFET logic chip that uses silicon-germanium as the channel material; and it appears to be the first commercially viable design produced with extreme ultraviolet (EUV) lithography.

Technologically, SiGe and EUV are both very significant. SiGe has higher electron mobility than pure silicon, which makes it better suited for smaller transistors. The gap between two silicon nuclei is about 0.5nm; as the gate width gets ever smaller (about 7nm in this case), the channel becomes so small that the handful of silicon atoms can’t carry enough current. By mixing some germanium into the channel, electron mobility increases, and adequate current can flow. Silicon generally runs into problems at sub-10nm nodes, and we can expect Intel and TSMC to follow a similar path to IBM, GlobalFoundries, and Samsung (aka the Common Platform alliance).
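Anthony’s numbers can be sanity-checked with trivial arithmetic: at roughly 0.5nm between neighbouring silicon nuclei, a 7nm gate spans only about 14 atoms, which makes the “handful of atoms” point concrete:

```python
# Back-of-envelope check of the article's figures (both values are
# the approximations quoted in the text, not precise lattice constants):
si_spacing_nm = 0.5  # approximate gap between neighbouring silicon nuclei
gate_width_nm = 7.0  # nominal gate width at the 7nm node

atoms_across = gate_width_nm / si_spacing_nm
print(f"roughly {atoms_across:.0f} silicon atoms span the channel")
```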

EUV lithography is a more interesting innovation. Basically, as chip features get smaller, you need a narrower beam of light to etch those features accurately, or you need to use multiple patterning (which we won’t go into here). The current state of the art for lithography is a 193nm ArF (argon fluoride) laser; that is, light with a wavelength of 193nm. Complex optics and multiple painstaking steps are required to etch 14nm features using a 193nm light source. EUV has a wavelength of just 13.5nm, which will handily take us down into the sub-10nm realm, but so far it has proven very difficult and expensive to deploy commercially (it has been just around the corner for quite a few years now).
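The link between wavelength and printable feature size is usually expressed through the Rayleigh criterion, R = k1 * wavelength / NA, where NA is the numerical aperture of the optics and k1 is a process-dependent factor. The tool parameters below (NA 1.35 for immersion ArF, NA 0.33 for EUV, k1 at its theoretical limit of 0.25) are typical published values, not figures from Anthony’s article:

```python
def min_feature_nm(wavelength_nm, numerical_aperture, k1=0.25):
    """Rayleigh criterion: smallest half-pitch a lithography tool can resolve."""
    return k1 * wavelength_nm / numerical_aperture

print(f"ArF immersion (193 nm, NA 1.35): {min_feature_nm(193, 1.35):.1f} nm half-pitch")
print(f"EUV (13.5 nm, NA 0.33): {min_feature_nm(13.5, 0.33):.1f} nm half-pitch")
```

The single-exposure limit of immersion ArF sits in the mid-30nm range, which is why 14nm and below require multiple patterning, while EUV resolves sub-10nm half-pitches directly.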

If you’re interested in the nuances, I recommend reading Anthony’s article in its entirety.

One final comment: there was no discussion of electrodes or other metallic components associated with computer chips. The metallic components are a topic of some interest to me (anyway), given some research published by scientists at the Massachusetts Institute of Technology (MIT) last year. From my Oct. 14, 2014 posting,

Research from the Massachusetts Institute of Technology (MIT) has revealed a new property of metal nanoparticles, in this case, silver. From an Oct. 12, 2014 news item on ScienceDaily,

A surprising phenomenon has been found in metal nanoparticles: They appear, from the outside, to be liquid droplets, wobbling and readily changing shape, while their interiors retain a perfectly stable crystal configuration.

The research team behind the finding, led by MIT professor Ju Li, says the work could have important implications for the design of components in nanotechnology, such as metal contacts for molecular electronic circuits. [my emphasis added]

This discovery and others regarding materials and phase changes at ever diminishing sizes hint that a computer with a functioning 7nm chip might be a bit further off than IBM is suggesting.

Yarns of niobium nanowire for small electronic device boost at the University of British Columbia (Canada) and Massachusetts Institute of Technology (US)

It turns out that this research concerning supercapacitors is a collaboration between the University of British Columbia (Canada) and the Massachusetts Institute of Technology (MIT). From a July 7, 2015 news item by Stuart Milne for Azonano,

A team of researchers from MIT and University of British Columbia has discovered an innovative method to deliver short bursts of high power required by wearable electronic devices.

Such devices are used for monitoring health and fitness, and as such are a rapidly growing segment of the consumer electronics industry. However, a major drawback of these devices is that they rely on small batteries, which cannot deliver the amount of power required for data transmission.

According to the research team, one way to resolve this issue is to develop supercapacitors, which are capable of storing and releasing the short bursts of electrical power required to transmit data from smartphones, computers, heart-rate monitors, and other wearable devices. Supercapacitors can also prove useful for other applications where short bursts of high power are required, for instance autonomous microrobots.

A July 7, 2015 MIT news release provides more detail about the research,

The new approach uses yarns, made from nanowires of the element niobium, as the electrodes in tiny supercapacitors (which are essentially pairs of electrically conducting fibers with an insulator between). The concept is described in a paper in the journal ACS Applied Materials and Interfaces by MIT professor of mechanical engineering Ian W. Hunter, doctoral student Seyed M. Mirvakili, and three others at the University of British Columbia.

Nanotechnology researchers have been working to increase the performance of supercapacitors for the past decade. Among nanomaterials, carbon-based nanoparticles — such as carbon nanotubes and graphene — have shown promising results, but they suffer from relatively low electrical conductivity, Mirvakili says.

In this new work, he and his colleagues have shown that desirable characteristics for such devices, such as high power density, are not unique to carbon-based nanoparticles, and that niobium nanowire yarn is a promising alternative.

“Imagine you’ve got some kind of wearable health-monitoring system,” Hunter says, “and it needs to broadcast data, for example using Wi-Fi, over a long distance.” At the moment, the coin-sized batteries used in many small electronic devices have very limited ability to deliver a lot of power at once, which is what such data transmissions need.

“Long-distance Wi-Fi requires a fair amount of power,” says Hunter, the George N. Hatsopoulos Professor in Thermodynamics in MIT’s Department of Mechanical Engineering, “but it may not be needed for very long.” Small batteries are generally poorly suited for such power needs, he adds.

“We know it’s a problem experienced by a number of companies in the health-monitoring or exercise-monitoring space. So an alternative is to go to a combination of a battery and a capacitor,” Hunter says: the battery for long-term, low-power functions, and the capacitor for short bursts of high power. Such a combination should be able to either increase the range of the device, or — perhaps more important in the marketplace — to significantly reduce size requirements.

The new nanowire-based supercapacitor exceeds the performance of existing batteries, while occupying a very small volume. “If you’ve got an Apple Watch and I shave 30 percent off the mass, you may not even notice,” Hunter says. “But if you reduce the volume by 30 percent, that would be a big deal,” he says: Consumers are very sensitive to the size of wearable devices.

The innovation is especially significant for small devices, Hunter says, because other energy-storage technologies — such as fuel cells, batteries, and flywheels — tend to be less efficient, or simply too complex to be practical when reduced to very small sizes. “We are in a sweet spot,” he says, with a technology that can deliver big bursts of power from a very small device.

Ideally, Hunter says, it would be desirable to have a high volumetric power density (the amount of power stored in a given volume) and high volumetric energy density (the amount of energy in a given volume). “Nobody’s figured out how to do that,” he says. However, with the new device, “We have fairly high volumetric power density, medium energy density, and a low cost,” a combination that could be well suited for many applications.
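Hunter’s distinction between power density and energy density can be made concrete with the standard capacitor energy formula E = 1/2 * C * V^2: the stored energy sets how long a burst can last at a given power. The numbers below are hypothetical wearable-scale values, not figures from the paper:

```python
def capacitor_energy_j(capacitance_f, voltage_v):
    """Energy stored in a capacitor: E = 0.5 * C * V**2 (joules)."""
    return 0.5 * capacitance_f * voltage_v ** 2

# Hypothetical wearable-scale numbers, purely illustrative:
capacitance = 0.5   # farads
voltage = 2.7       # volts, a common supercapacitor rating
burst_power = 1.0   # watts drawn during a radio transmission burst

energy = capacitor_energy_j(capacitance, voltage)
print(f"stored energy: {energy:.2f} J")
print(f"burst sustained for {energy / burst_power:.1f} s at {burst_power} W")
```

A couple of joules is ample for a brief transmission burst but negligible next to a battery’s total capacity, which is exactly the battery-plus-capacitor division of labour Hunter describes.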

Niobium is a fairly abundant and widely used material, Mirvakili says, so the whole system should be inexpensive and easy to produce. “The fabrication cost is cheap,” he says. Other groups have made similar supercapacitors using carbon nanotubes or other materials, but the niobium yarns are stronger and 100 times more conductive. Overall, niobium-based supercapacitors can store up to five times as much power in a given volume as carbon nanotube versions.

Niobium also has a very high melting point — nearly 2,500 degrees Celsius — so devices made from these nanowires could potentially be suitable for use in high-temperature applications.

In addition, the material is highly flexible and could be woven into fabrics, enabling wearable forms; individual niobium nanowires are just 140 nanometers in diameter — 140 billionths of a meter across, or about one-thousandth the width of a human hair.

So far, the material has been produced only in lab-scale devices. The next step, already under way, is to figure out how to design a practical, easily manufactured version, the researchers say.

“The work is very significant in the development of smart fabrics and future wearable technologies,” says Geoff Spinks, a professor of engineering at the University of Wollongong, in Australia, who was not associated with this research. This paper, he adds, “convincingly demonstrates the impressive performance of niobium-based fiber supercapacitors.”

Here’s a link to and a citation for the paper,

High-Performance Supercapacitors from Niobium Nanowire Yarns by Seyed M. Mirvakili, Mehr Negar Mirvakili, Peter Englezos, John D. W. Madden, and Ian W. Hunter. ACS Appl. Mater. Interfaces, 2015, 7 (25), pp 13882–13888. DOI: 10.1021/acsami.5b02327. Publication date (web): June 12, 2015.

Copyright © 2015 American Chemical Society

This paper is behind a paywall.

LiquiGlide, a nanotechnology-enabled coating for food packaging and oil and gas pipelines

Getting condiments out of their bottles should be a lot easier in several European countries in the near future. A June 30, 2015 news item on Nanowerk describes the technology and the business deal (Note: A link has been removed),

The days of wasting condiments — and other products — that stick stubbornly to the sides of their bottles may be gone, thanks to MIT [Massachusetts Institute of Technology] spinout LiquiGlide, which has licensed its nonstick coating to a major consumer-goods company.

Developed in 2009 by MIT’s Kripa Varanasi and David Smith, LiquiGlide is a liquid-impregnated coating that acts as a slippery barrier between a surface and a viscous liquid. Applied inside a condiment bottle, for instance, the coating clings permanently to its sides, while allowing the condiment to glide off completely, with no residue.

In 2012, amidst a flurry of media attention following LiquiGlide’s entry in MIT’s $100K Entrepreneurship Competition, Smith and Varanasi founded the startup — with help from the Institute — to commercialize the coating.

Today [June 30, 2015], Norwegian consumer-goods producer Orkla has signed a licensing agreement to use LiquiGlide’s coating for mayonnaise products sold in Germany, Scandinavia, and several other European nations. This comes on the heels of another licensing deal, with Elmer’s [Elmer’s Glue & Adhesives], announced in March [2015].

A June 30, 2015 MIT news release, which originated the news item, provides more details about the researcher/entrepreneurs’ plans,

But this is only the beginning, says Varanasi, an associate professor of mechanical engineering who is now on LiquiGlide’s board of directors and chief science advisor. The startup, which just entered the consumer-goods market, is courting deals with numerous producers of foods, beauty supplies, and household products. “Our coatings can work with a whole range of products, because we can tailor each coating to meet the specific requirements of each application,” Varanasi says.

Apart from providing savings and convenience, LiquiGlide aims to reduce the surprising amount of wasted products — especially food — that stick to container sides and get tossed. For instance, in 2009 Consumer Reports found that up to 15 percent of bottled condiments are ultimately thrown away. Keeping bottles clean, Varanasi adds, could also drastically cut the use of water and energy, as well as the costs associated with rinsing bottles before recycling. “It has huge potential in terms of critical sustainability,” he says.

Varanasi says LiquiGlide aims next to tackle buildup in oil and gas pipelines, which can cause corrosion and clogs that reduce flow. [emphasis mine] Future uses, he adds, could include coatings for medical devices such as catheters, deicing roofs and airplane wings, and improving manufacturing and process efficiency. “Interfaces are ubiquitous,” he says. “We want to be everywhere.”

The news release goes on to describe the research process in more detail and offers a plug for MIT’s innovation efforts,

LiquiGlide was originally developed while Smith worked on his graduate research in Varanasi’s research group. Smith and Varanasi were interested in preventing ice buildup on airplane surfaces and methane hydrate buildup in oil and gas pipelines.

Some initial work was on superhydrophobic surfaces, which trap pockets of air and naturally repel water. But both researchers found that these surfaces don’t, in fact, shed every bit of liquid. During phase transitions — when vapor turns to liquid, for instance — water droplets condense within microscopic gaps on surfaces, and steadily accumulate. This leads to loss of anti-icing properties of the surface. “Something that is nonwetting to macroscopic drops does not remain nonwetting for microscopic drops,” Varanasi says.

Inspired by the work of researcher David Quéré, of ESPCI in Paris, on slippery “hemisolid-hemiliquid” surfaces, Varanasi and Smith invented permanently wet “liquid-impregnated surfaces” — coatings that don’t have such microscopic gaps. The coatings consist of textured solid material that traps a liquid lubricant through capillary and intermolecular forces. The coating wicks through the textured solid surface, clinging permanently under the product, allowing the product to slide off the surface easily; other materials can’t enter the gaps or displace the coating. “One can say that it’s a self-lubricating surface,” Varanasi says.

Mixing and matching the materials, however, is a complicated process, Varanasi says. Liquid components of the coating, for instance, must be compatible with the chemical and physical properties of the sticky product, and generally immiscible. The solid material must form a textured structure while adhering to the container. And the coating can’t spoil the contents: Foodstuffs, for instance, require safe, edible materials, such as plants and insoluble fibers.

To help choose ingredients, Smith and Varanasi developed basic scientific principles and algorithms that calculate how the liquid and solid coating materials, the product, and the geometry of the surface structures will all interact, in order to find the optimal “recipe.”
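The release doesn’t spell out LiquiGlide’s actual algorithms, but a standard screening quantity from wetting theory, the spreading coefficient S = gamma_solid - gamma_liquid - gamma_interface, captures the basic idea: a lubricant spreads over (and wicks into) a textured solid only when S is non-negative. A minimal sketch with made-up surface tensions:

```python
def spreading_coefficient(gamma_solid, gamma_liquid, gamma_interface):
    """Spreading coefficient S = gamma_s - gamma_l - gamma_sl (mN/m).
    S >= 0: the liquid spreads and can wick into the solid's texture.
    S <  0: the liquid beads up instead of coating the surface."""
    return gamma_solid - gamma_liquid - gamma_interface

# Made-up surface tensions for a hypothetical solid/lubricant pair (mN/m):
S = spreading_coefficient(gamma_solid=40.0, gamma_liquid=20.0, gamma_interface=5.0)
print(f"S = {S:.1f} mN/m -> {'spreads' if S >= 0 else 'beads up'}")
```

A real recipe engine would also have to check immiscibility with the product and food-safety constraints, as the paragraph above notes; this sketch covers only the wetting side.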

Today, LiquiGlide develops coatings for clients and licenses the recipes to them. Included are instructions that detail the materials, equipment, and process required to create and apply the coating for their specific needs. “The state of the coating we end up with depends entirely on the properties of the product you want to slide over the surface,” says Smith, now LiquiGlide’s CEO.

Having researched materials for hundreds of different viscous liquids over the years — from peanut butter to crude oil to blood — LiquiGlide also has a database of optimal ingredients for its algorithms to pull from when customizing recipes. “Given any new product you want LiquiGlide for, we can zero in on a solution that meets all requirements necessary,” Varanasi says.

MIT: A lab for entrepreneurs

For years, Smith and Varanasi toyed around with commercial applications for LiquiGlide. But in 2012, with help from MIT’s entrepreneurial ecosystem, LiquiGlide went from lab to market in a matter of months.

Initially the idea was to bring coatings to the oil and gas industry. But one day, in early 2012, Varanasi saw his wife struggling to pour honey from its container. “And I thought, ‘We have a solution for that,’” Varanasi says.

The focus then became consumer packaging. Smith and Varanasi took the idea through several entrepreneurship classes — such as 6.933 (Entrepreneurship in Engineering: The Founder’s Journey) — and MIT’s Venture Mentoring Service and Innovation Teams, where student teams research the commercial potential of MIT technologies.

“I did pretty much every last thing you could do,” Smith says. “Because we have such a brilliant network here at MIT, I thought I should take advantage of it.”

That May [2012], Smith, Varanasi, and several MIT students entered LiquiGlide in the MIT $100K Entrepreneurship Competition, earning the Audience Choice Award — and the national spotlight. A video of ketchup sliding out of a LiquiGlide-coated bottle went viral. Numerous media outlets picked up the story, while hundreds of companies reached out to Varanasi to buy the coating. “My phone didn’t stop ringing, my website crashed for a month,” Varanasi says. “It just went crazy.”

That summer [2012], Smith and Varanasi took their startup idea to MIT’s Global Founders’ Skills Accelerator program, which introduced them to a robust network of local investors and helped them build a solid business plan. Soon after, they raised money from family and friends, and won $100,000 at the MassChallenge Entrepreneurship Competition.

When LiquiGlide Inc. launched in August 2012, clients were already knocking down the door. The startup chose a select number to pay for the development and testing of the coating for its products. Within a year, LiquiGlide was cash-flow positive, and had grown from three to 18 employees in its current Cambridge headquarters.

Looking back, Varanasi attributes much of LiquiGlide’s success to MIT’s innovation-based ecosystem, which promotes rapid prototyping for the marketplace through experimentation and collaboration. This ecosystem includes the Deshpande Center for Technological Innovation, the Martin Trust Center for MIT Entrepreneurship, the Venture Mentoring Service, and the Technology Licensing Office, among other initiatives. “Having a lab where we could think about … translating the technology to real-world applications, and having this ability to meet people, and bounce ideas … that whole MIT ecosystem was key,” Varanasi says.

Here’s the latest LiquiGlide video,


Video: Melanie Gonick/MIT
Additional footage courtesy of LiquiGlide™
Music sampled from “Candlepower” by Chris Zabriskie

I had thought the EU (European Union) offered more roadblocks to marketing nanotechnology-enabled products used in food packaging than the US. If anyone knows why a US company would market its products in Europe first I would love to find out.

Solar-powered sensors to power the Internet of Things?

As a June 23, 2015 news item on Nanowerk notes, the ‘Internet of things’ will need lots and lots of power,

The latest buzz in the information technology industry regards “the Internet of things” — the idea that vehicles, appliances, civil-engineering structures, manufacturing equipment, and even livestock would have their own embedded sensors that report information directly to networked servers, aiding with maintenance and the coordination of tasks.

Realizing that vision, however, will require extremely low-power sensors that can run for months without battery changes — or, even better, that can extract energy from the environment to recharge.

Last week, at the Symposia on VLSI Technology and Circuits, MIT [Massachusetts Institute of Technology] researchers presented a new power converter chip that can harvest more than 80 percent of the energy trickling into it, even at the extremely low power levels characteristic of tiny solar cells. [emphasis mine] Previous experimental ultralow-power converters had efficiencies of only 40 or 50 percent.

A June 22, 2015 MIT news release (also on EurekAlert), which originated the news item, describes some additional capabilities,

Moreover, the researchers’ chip achieves those efficiency improvements while assuming additional responsibilities. Where its predecessors could use a solar cell to either charge a battery or directly power a device, this new chip can do both, and it can power the device directly from the battery.

All of those operations also share a single inductor — the chip’s main electrical component — which saves on circuit board space but increases the circuit complexity even further. Nonetheless, the chip’s power consumption remains low.

“We still want to have battery-charging capability, and we still want to provide a regulated output voltage,” says Dina Reda El-Damak, an MIT graduate student in electrical engineering and computer science and first author on the new paper. “We need to regulate the input to extract the maximum power, and we really want to do all these tasks with inductor sharing and see which operational mode is the best. And we want to do it without compromising the performance, at very limited input power levels — 10 nanowatts to 1 microwatt — for the Internet of things.”

The prototype chip was manufactured through the Taiwan Semiconductor Manufacturing Company’s University Shuttle Program.

The MIT news release goes on to describe chip specifics,

The circuit’s chief function is to regulate the voltages between the solar cell, the battery, and the device the cell is powering. If the battery operates for too long at a voltage that’s either too high or too low, for instance, its chemical reactants break down, and it loses the ability to hold a charge.

To control the current flow across their chip, El-Damak and her advisor, Anantha Chandrakasan, the Joseph F. and Nancy P. Keithley Professor in Electrical Engineering, use an inductor, which is a wire wound into a coil. When a current passes through an inductor, it generates a magnetic field, which in turn resists any change in the current.

Throwing switches in the inductor’s path causes it to alternately charge and discharge, so that the current flowing through it continuously ramps up and then drops back down to zero. Keeping a lid on the current improves the circuit’s efficiency, since the rate at which it dissipates energy as heat is proportional to the square of the current.
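The square-law relationship mentioned above is worth making concrete: because dissipation goes as I^2 times R, two current waveforms with the same average can waste very different amounts of energy as heat. Here is a minimal Python sketch, using a hypothetical 1-ohm parasitic resistance and made-up waveforms, not the chip's actual values:

```python
# Conduction loss scales with the square of the current (P = I^2 * R),
# so delivering the same average current with a lower, flatter peak
# wastes less energy as heat. Illustrative numbers only.

def conduction_loss(current_samples, resistance_ohms):
    """Average I^2 * R loss over a sampled current waveform, in watts."""
    n = len(current_samples)
    return resistance_ohms * sum(i * i for i in current_samples) / n

R = 1.0  # ohms (hypothetical parasitic resistance)

# Two waveforms with the same 0.5 A average current:
flat = [0.5] * 100        # well-controlled ramp, low ripple
spiky = [1.0, 0.0] * 50   # same average, but twice the peak

print(conduction_loss(flat, R))   # 0.25 W
print(conduction_loss(spiky, R))  # 0.5 W, double the loss
```

Keeping a lid on the peak current, as the circuit does, is exactly the flat case here.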

Once the current drops to zero, however, the switches in the inductor’s path need to be thrown immediately; otherwise, current could begin to flow through the circuit in the wrong direction, which would drastically diminish its efficiency. The complication is that the rate at which the current rises and falls depends on the voltage generated by the solar cell, which is highly variable. So the timing of the switch throws has to vary, too.

Electric hourglass

To control the switches’ timing, El-Damak and Chandrakasan use an electrical component called a capacitor, which can store electrical charge. The higher the current, the more rapidly the capacitor fills. When it’s full, the circuit stops charging the inductor.

The rate at which the current drops off, however, depends on the output voltage, whose regulation is the very purpose of the chip. Since that voltage is fixed, the variation in timing has to come from variation in capacitance. El-Damak and Chandrakasan thus equip their chip with a bank of capacitors of different sizes. As the current drops, it charges a subset of those capacitors, whose selection is determined by the solar cell’s voltage. Once again, when the capacitor fills, the switches in the inductor’s path are flipped.
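The "electric hourglass" scheme can be sketched numerically. This is a loose model, not the chip's actual circuit: conveniently, the inductance cancels out of the fall-time expression, and every constant below (thresholds, bank values, charging current) is a made-up placeholder:

```python
# Hedged sketch of the timing idea above: the falling inductor current
# should hit zero exactly when the timing capacitor fills. The output
# voltage is fixed, so the controller varies which bank capacitor it
# charges, selected from the solar cell's voltage. All values here are
# illustrative assumptions, not figures from the MIT design.

T_CHARGE = 1e-6   # fixed inductor-charging interval, seconds
V_OUT = 1.8       # regulated output voltage, volts
V_TH = 0.5        # voltage at which a timing capacitor counts as "full"
I_TIMING = 1e-6   # current charging the timing capacitor, amperes

CAP_BANK = [1e-13, 2e-13, 4e-13, 8e-13]  # available capacitors, farads

def fall_time(v_solar):
    """Time for the inductor current to ramp back down to zero.

    I_peak = v_solar * T_CHARGE / L, and the down-ramp rate is V_OUT / L,
    so the inductance cancels: t_fall = v_solar * T_CHARGE / V_OUT.
    """
    return v_solar * T_CHARGE / V_OUT

def fill_time(capacitance):
    """Time for a bank capacitor to charge up to the V_TH threshold."""
    return capacitance * V_TH / I_TIMING

def pick_capacitor(v_solar):
    """Select the capacitor whose fill time best matches the down-ramp."""
    target = fall_time(v_solar)
    return min(CAP_BANK, key=lambda c: abs(fill_time(c) - target))
```

A higher solar-cell voltage means a higher peak current and a longer down-ramp, so `pick_capacitor` returns a larger capacitance, flipping the switches later.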

“In this technology space, there’s usually a trend to lower efficiency as the power gets lower, because there’s a fixed amount of energy that’s consumed by doing the work,” says Brett Miwa, who leads a power conversion development project as a fellow at the chip manufacturer Maxim Integrated. “If you’re only coming in with a small amount, it’s hard to get most of it out, because you lose more as a percentage. [El-Damak’s] design is unusually efficient for how low a power level she’s at.”

“One of the things that’s most notable about it is that it’s really a fairly complete system,” he adds. “It’s really kind of a full system-on-a-chip for power management. And that makes it a little more complicated, a little bit larger, and a little bit more comprehensive than some of the other designs that might be reported in the literature. So for her to still achieve these high-performance specs in a much more sophisticated system is also noteworthy.”

I wonder how close they are to commercializing this chip (see below),


The MIT researchers’ prototype for a chip measuring 3 millimeters by 3 millimeters. The magnified detail shows the chip’s main control circuitry, including the startup electronics; the controller that determines whether to charge the battery, power a device, or both; and the array of switches that control current flow to an external inductor coil. This active area measures just 2.2 millimeters by 1.1 millimeters.
Courtesy: MIT

I sing the body cyber: two projects funded by the US National Science Foundation

Points to anyone who recognized the reference to Walt Whitman’s poem, “I sing the body electric,” from his classic collection, Leaves of Grass (1867 edition; h/t Wikipedia entry). I wonder if the cyber-physical systems (CPS) work being funded by the US National Science Foundation (NSF) will occasion poetry too.

More practically, a May 15, 2015 news item on Nanowerk describes two cyber-physical systems (CPS) research projects newly funded by the NSF,

Today [May 12, 2015] the National Science Foundation (NSF) announced two, five-year, center-scale awards totaling $8.75 million to advance the state-of-the-art in medical and cyber-physical systems (CPS).

One project will develop “Cyberheart” — a platform for virtual, patient-specific human heart models and associated device therapies that can be used to improve and accelerate medical-device development and testing. The other project will combine teams of microrobots with synthetic cells to perform functions that may one day lead to tissue and organ regeneration.

CPS are engineered systems that are built from, and depend upon, the seamless integration of computation and physical components. Often called the “Internet of Things,” CPS enable capabilities that go beyond the embedded systems of today.

“NSF has been a leader in supporting research in cyber-physical systems, which has provided a foundation for putting the ‘smart’ in health, transportation, energy and infrastructure systems,” said Jim Kurose, head of Computer & Information Science & Engineering at NSF. “We look forward to the results of these two new awards, which paint a new and compelling vision for what’s possible for smart health.”

Cyber-physical systems have the potential to benefit many sectors of our society, including healthcare. While advances in sensors and wearable devices have the capacity to improve aspects of medical care, from disease prevention to emergency response, and synthetic biology and robotics hold the promise of regenerating and maintaining the body in radical new ways, little is known about how advances in CPS can integrate these technologies to improve health outcomes.

These new NSF-funded projects will investigate two very different ways that CPS can be used in the biological and medical realms.

A May 12, 2015 NSF news release (also on EurekAlert), which originated the news item, describes the two CPS projects,

Bio-CPS for engineering living cells

A team of leading computer scientists, roboticists and biologists from Boston University, the University of Pennsylvania and MIT have come together to develop a system that combines the capabilities of nano-scale robots with specially designed synthetic organisms. Together, they believe this hybrid “bio-CPS” will be capable of performing heretofore impossible functions, from microscopic assembly to cell sensing within the body.

“We bring together synthetic biology and micron-scale robotics to engineer the emergence of desired behaviors in populations of bacterial and mammalian cells,” said Calin Belta, a professor of mechanical engineering, systems engineering and bioinformatics at Boston University and principal investigator on the project. “This project will impact several application areas ranging from tissue engineering to drug development.”

The project builds on previous research by each team member in diverse disciplines and early proof-of-concept designs of bio-CPS. According to the team, the research is also driven by recent advances in the emerging field of synthetic biology, in particular the ability to rapidly incorporate new capabilities into simple cells. Researchers so far have not been able to control and coordinate the behavior of synthetic cells in isolation, but the introduction of microrobots that can be externally controlled may be transformative.

In this new project, the team will focus on bio-CPS with the ability to sense, transport and work together. As a demonstration of their idea, they will develop teams of synthetic cell/microrobot hybrids capable of constructing a complex, fabric-like surface.

Vijay Kumar (University of Pennsylvania), Ron Weiss (MIT), and Douglas Densmore (BU) are co-investigators of the project.

Medical-CPS and the ‘Cyberheart’

CPS such as wearable sensors and implantable devices are already being used to assess health, improve quality of life, provide cost-effective care and potentially speed up disease diagnosis and prevention. [emphasis mine]

Extending these efforts, researchers from seven leading universities and centers are working together to develop far more realistic cardiac and device models than currently exist. This so-called “Cyberheart” platform can be used to test and validate medical devices faster and at a far lower cost than existing methods. CyberHeart also can be used to design safe, patient-specific device therapies, thereby lowering the risk to the patient.

“Innovative ‘virtual’ design methodologies for implantable cardiac medical devices will speed device development and yield safer, more effective devices and device-based therapies, than is currently possible,” said Scott Smolka, a professor of computer science at Stony Brook University and one of the principal investigators on the award.

The group’s approach combines patient-specific computational models of heart dynamics with advanced mathematical techniques for analyzing how these models interact with medical devices. The analytical techniques can be used to detect potential flaws in device behavior early on during the device-design phase, before animal and human trials begin. They also can be used in a clinical setting to optimize device settings on a patient-by-patient basis before devices are implanted.

“We believe that our coordinated, multi-disciplinary approach, which balances theoretical, experimental and practical concerns, will yield transformational results in medical-device design and foundations of cyber-physical system verification,” Smolka said.

The team will develop virtual device models which can be coupled together with virtual heart models to realize a full virtual development platform that can be subjected to computational analysis and simulation techniques. Moreover, they are working with experimentalists who will study the behavior of virtual and actual devices on animals’ hearts.

Co-investigators on the project include Edmund Clarke (Carnegie Mellon University), Elizabeth Cherry (Rochester Institute of Technology), W. Rance Cleaveland (University of Maryland), Flavio Fenton (Georgia Tech), Rahul Mangharam (University of Pennsylvania), Arnab Ray (Fraunhofer Center for Experimental Software Engineering [Germany]) and James Glimm and Radu Grosu (Stony Brook University). Richard A. Gray of the U.S. Food and Drug Administration is another key contributor.

It is fascinating to observe how terminology is shifting from pacemakers and deep brain stimulators as implants to “CPS such as wearable sensors and implantable devices … .” A new category has been created, CPS, which conjoins medical devices with other sensing devices such as wearable fitness monitors found in the consumer market. I imagine it’s an attempt to quell fears about injecting strange things into or adding strange things to your body—microrobots and nanorobots partially derived from synthetic biology research which are “… capable of performing heretofore impossible functions, from microscopic assembly to cell sensing within the body.” They’ve also sneaked in a reference to synthetic biology, an area of research where some concerns have been expressed, from my March 19, 2013 post about a poll and synthetic biology concerns,

In our latest survey, conducted in January 2013, three-fourths of respondents say they have heard little or nothing about synthetic biology, a level consistent with that measured in 2010. While initial impressions about the science are largely undefined, these feelings do not necessarily become more positive as respondents learn more. The public has mixed reactions to specific synthetic biology applications, and almost one-third of respondents favor a ban “on synthetic biology research until we better understand its implications and risks,” while 61 percent think the science should move forward.

I imagine that for scientists, 61% in favour of more research is not particularly comforting given how easily and quickly public opinion can shift.

Sealing graphene’s defects to make a better filtration device

Making a graphene filter that allows water to pass through while screening out salt and/or noxious materials has been more challenging than one might think. According to a May 7, 2015 news item on Nanowerk, graphene filters can be ‘leaky’,

For faster, longer-lasting water filters, some scientists are looking to graphene — thin, strong sheets of carbon — to serve as ultrathin membranes, filtering out contaminants to quickly purify high volumes of water.

Graphene’s unique properties make it a potentially ideal membrane for water filtration or desalination. But there’s been one main drawback to its wider use: Making membranes in one-atom-thick layers of graphene is a meticulous process that can tear the thin material — creating defects through which contaminants can leak.

Now engineers at MIT [Massachusetts Institute of Technology], Oak Ridge National Laboratory, and King Fahd University of Petroleum and Minerals (KFUPM) have devised a process to repair these leaks, filling cracks and plugging holes using a combination of chemical deposition and polymerization techniques. The team then used a process it developed previously to create tiny, uniform pores in the material, small enough to allow only water to pass through.

A May 8, 2015 MIT news release (also on EurekAlert), which originated the news item, expands on the theme,

Combining these two techniques, the researchers were able to engineer a relatively large defect-free graphene membrane — about the size of a penny. The membrane’s size is significant: To be exploited as a filtration membrane, graphene would have to be manufactured at a scale of centimeters, or larger.

In experiments, the researchers pumped water through a graphene membrane treated with both defect-sealing and pore-producing processes, and found that water flowed through at rates comparable to current desalination membranes. The graphene was able to filter out most large-molecule contaminants, such as magnesium sulfate and dextran.

Rohit Karnik, an associate professor of mechanical engineering at MIT, says the group’s results, published in the journal Nano Letters, represent the first success in plugging graphene’s leaks.

“We’ve been able to seal defects, at least on the lab scale, to realize molecular filtration across a macroscopic area of graphene, which has not been possible before,” Karnik says. “If we have better process control, maybe in the future we don’t even need defect sealing. But I think it’s very unlikely that we’ll ever have perfect graphene — there will always be some need to control leakages. These two [techniques] are examples which enable filtration.”

Sean O’Hern, a former graduate research assistant at MIT, is the paper’s first author. Other contributors include MIT graduate student Doojoon Jang, former graduate student Suman Bose, and Professor Jing Kong.

A delicate transfer

“The current types of membranes that can produce freshwater from saltwater are fairly thick, on the order of 200 nanometers,” O’Hern says. “The benefit of a graphene membrane is, instead of being hundreds of nanometers thick, we’re on the order of three angstroms — 600 times thinner than existing membranes. This enables you to have a higher flow rate over the same area.”
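O’Hern’s numbers check out: to a first approximation, flux through a membrane scales inversely with its thickness, so the expected speed-up tracks the thickness ratio. A quick sanity check in Python:

```python
# Flux through a membrane scales roughly inversely with thickness
# (all else equal), so a ~3-angstrom graphene sheet should outpace a
# ~200-nanometer membrane by the factor it is thinner. Rough estimate.

conventional_nm = 200.0   # conventional desalination membrane thickness
graphene_nm = 0.3         # three angstroms, expressed in nanometers

ratio = conventional_nm / graphene_nm
print(round(ratio))  # 667, consistent with the quoted "600 times thinner"
```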

O’Hern and Karnik have been investigating graphene’s potential as a filtration membrane for the past several years. In 2009, the group began fabricating membranes from graphene grown on copper — a metal that supports the growth of graphene across relatively large areas. However, copper is impermeable, requiring the group to transfer the graphene to a porous substrate following fabrication.

However, O’Hern noticed that this transfer process would create tears in graphene. What’s more, he observed intrinsic defects created during the growth process, resulting perhaps from impurities in the original material.

Plugging graphene’s leaks

To plug graphene’s leaks, the team came up with a technique to first tackle the smaller intrinsic defects, then the larger transfer-induced defects. For the intrinsic defects, the researchers used a process called “atomic layer deposition,” placing the graphene membrane in a vacuum chamber, then pulsing in a hafnium-containing chemical that does not normally interact with graphene. However, if the chemical comes in contact with a small opening in graphene, it will tend to stick to that opening, attracted by the area’s higher surface energy.

The team applied several rounds of atomic layer deposition, finding that the deposited hafnium oxide successfully filled in graphene’s nanometer-scale intrinsic defects. However, O’Hern realized that using the same process to fill in much larger holes and tears — on the order of hundreds of nanometers — would require too much time.

Instead, he and his colleagues came up with a second technique to fill in larger defects, using a process called “interfacial polymerization” that is often employed in membrane synthesis. After they filled in graphene’s intrinsic defects, the researchers submerged the membrane at the interface of two solutions: a water bath and an organic solvent that, like oil, does not mix with water.

In the two solutions, the researchers dissolved two different molecules that can react to form nylon. Once O’Hern placed the graphene membrane at the interface of the two solutions, he observed that nylon plugs formed only in tears and holes — regions where the two molecules could come in contact because of tears in the otherwise impermeable graphene — effectively sealing the remaining defects.

Using a technique they developed last year, the researchers then etched tiny, uniform holes in graphene — small enough to let water molecules through, but not larger contaminants. In experiments, the group tested the membrane with water containing several different molecules, including salt, and found that the membrane rejected up to 90 percent of larger molecules. However, it let salt through at a faster rate than water.
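The 90 percent figure is a rejection coefficient: the fraction of a solute the membrane holds back, computed from feed and permeate concentrations. A tiny sketch with illustrative concentrations (not the paper's data):

```python
# Rejection coefficient: 1 minus the ratio of permeate concentration
# to feed concentration. Values below are illustrative only.

def rejection(feed_conc, permeate_conc):
    """Fraction of a solute held back by the membrane."""
    return 1.0 - permeate_conc / feed_conc

# A large molecule mostly held back vs. salt passing almost freely:
print(rejection(1.0, 0.10))  # 0.9, i.e. 90 percent rejection
print(rejection(1.0, 0.95))  # roughly 0.05, salt leaks through
```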

The preliminary tests suggest that graphene may be a viable alternative to existing filtration membranes, although Karnik says techniques to seal its defects and control its permeability will need further improvements.

“Water desalination and nanofiltration are big applications where, if things work out and this technology withstands the different demands of real-world tests, it would have a large impact,” Karnik says. “But one could also imagine applications for fine chemical- or biological-sample processing, where these membranes could be useful. And this is the first report of a centimeter-scale graphene membrane that does any kind of molecular filtration. That’s exciting.”

De-en Jiang, an assistant professor of chemistry at the University of California at Riverside, sees the defect-sealing technique as “a great advance toward making graphene filtration a reality.”

“The two-step technique is very smart: sealing the defects while preserving the desired pores for filtration,” says Jiang, who did not contribute to the research. “This would make the scale-up much easier. One can produce a large graphene membrane first, not worrying about the defects, which can be sealed later.”

I have featured graphene and water desalination work from these researchers at MIT before, in a Feb. 27, 2014 posting. Interestingly, there was no mention of problems with defects in the news release highlighting this previous work.

Here’s a link to and a citation for the latest paper,

Nanofiltration across Defect-Sealed Nanoporous Monolayer Graphene by Sean C. O’Hern, Doojoon Jang, Suman Bose, Juan-Carlos Idrobo, Yi Song, Tahar Laoui, Jing Kong, and Rohit Karnik. Nano Lett., Article ASAP. DOI: 10.1021/acs.nanolett.5b00456. Publication Date (Web): April 27, 2015

Copyright © 2015 American Chemical Society

This paper is behind a paywall.

Carbon nanotubes sense spoiled food


Courtesy: MIT (Massachusetts Institute of Technology)

I love this .gif; it says a lot without a word. However for details, you need words and here’s what an April 15, 2015 news item on Nanowerk has to say about the research illustrated by the .gif,

MIT [Massachusetts Institute of Technology] chemists have devised an inexpensive, portable sensor that can detect gases emitted by rotting meat, allowing consumers to determine whether the meat in their grocery store or refrigerator is safe to eat.

The sensor, which consists of chemically modified carbon nanotubes, could be deployed in “smart packaging” that would offer much more accurate safety information than the expiration date on the package, says Timothy Swager, the John D. MacArthur Professor of Chemistry at MIT.

An April 14, 2015 MIT news release (also on EurekAlert), which originated the news item, offers more from Dr. Swager,

It could also cut down on food waste, he adds. “People are constantly throwing things out that probably aren’t bad,” says Swager, who is the senior author of a paper describing the new sensor this week in the journal Angewandte Chemie.

This latest study builds on previous work at Swager’s lab (Note: Links have been removed),

The sensor is similar to other carbon nanotube devices that Swager’s lab has developed in recent years, including one that detects the ripeness of fruit. All of these devices work on the same principle: Carbon nanotubes can be chemically modified so that their ability to carry an electric current changes in the presence of a particular gas.

In this case, the researchers modified the carbon nanotubes with metal-containing compounds called metalloporphyrins, which contain a central metal atom bound to several nitrogen-containing rings. Hemoglobin, which carries oxygen in the blood, is a metalloporphyrin with iron as the central atom.

For this sensor, the researchers used a metalloporphyrin with cobalt at its center. Metalloporphyrins are very good at binding to nitrogen-containing compounds called amines. Of particular interest to the researchers were the so-called biogenic amines, such as putrescine and cadaverine, which are produced by decaying meat.

When the cobalt-containing porphyrin binds to any of these amines, it increases the electrical resistance of the carbon nanotube, which can be easily measured.

“We use these porphyrins to fabricate a very simple device where we apply a potential across the device and then monitor the current. When the device encounters amines, which are markers of decaying meat, the current of the device will become lower,” Liu says.
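The readout Liu describes can be sketched in a few lines: hold a fixed bias across the film, convert the measured resistance to a current, and flag spoilage when the current has dropped past a threshold. The bias, baseline resistance, and threshold below are hypothetical, not values from the paper:

```python
# Hedged sketch of a chemiresistive readout: amines binding to the
# porphyrin-modified nanotubes raise the film's resistance, so at a
# fixed potential the measured current falls. All numbers illustrative.

V_BIAS = 0.1          # applied potential, volts
R_FRESH = 10_000.0    # baseline film resistance, ohms (hypothetical)
THRESHOLD = 0.20      # flag if current falls more than 20% from baseline

def current(resistance_ohms):
    return V_BIAS / resistance_ohms

def is_spoiled(measured_resistance):
    """True when the current drop from baseline exceeds the threshold."""
    baseline = current(R_FRESH)
    drop = (baseline - current(measured_resistance)) / baseline
    return drop > THRESHOLD

print(is_spoiled(10_500.0))  # False: small drift, meat still fresh
print(is_spoiled(15_000.0))  # True: amines have raised the resistance
```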

In this study, the researchers tested the sensor on four types of meat: pork, chicken, cod, and salmon. They found that when refrigerated, all four types stayed fresh over four days. Left unrefrigerated, the samples all decayed, but at varying rates.

There are other sensors that can detect the signs of decaying meat, but they are usually large and expensive instruments that require expertise to operate. “The advantage we have is these are the cheapest, smallest, easiest-to-manufacture sensors,” Swager says.

“There are several potential advantages in having an inexpensive sensor for measuring, in real time, the freshness of meat and fish products, including preventing foodborne illness, increasing overall customer satisfaction, and reducing food waste at grocery stores and in consumers’ homes,” says Roberto Forloni, a senior science fellow at Sealed Air, a major supplier of food packaging, who was not part of the research team.

The new device also requires very little power and could be incorporated into a wireless platform Swager’s lab recently developed that allows a regular smartphone to read output from carbon nanotube sensors such as this one.

The funding sources are interesting, as I am appreciating with increasing frequency these days (from the news release),

The researchers have filed for a patent on the technology and hope to license it for commercial development. The research was funded by the National Science Foundation and the Army Research Office through MIT’s Institute for Soldier Nanotechnologies.

Here’s a link to and a citation for the paper,

Single-Walled Carbon Nanotube/Metalloporphyrin Composites for the Chemiresistive Detection of Amines and Meat Spoilage by Sophie F. Liu, Alexander R. Petty, Graham T. Sazama, and Timothy M. Swager. Angewandte Chemie International Edition. DOI: 10.1002/anie.201501434. Article first published online: 13 APR 2015

© 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim

This article is behind a paywall.

There are other posts here about the quest to create food sensors, including a Sept. 26, 2013 piece which features a critique (by another blogger) of attempts to create food sensors that may be more expensive than the items they are meant to protect, a problem Swager claims to have overcome in an April 17, 2015 article by Ben Schiller for Fast Company (Note: Links have been removed),

Swager has set up a company to commercialize the technology and he expects to do the first demonstrations to interested clients this summer. The first applications are likely to be for food workers working with meat and fish, but there’s no reason why consumers shouldn’t get their own devices in due time.

There are efforts to create visual clues for food status. But Swager says his method is better because it doesn’t rely on perception: it produces hard data that can be logged and tracked. And it also has potential to be very cheap.

“The resistance method is a game-changer because it’s two to three orders of magnitude cheaper than other technology. It’s hard to imagine doing this cheaper,” he says.

Taking the baking out of aircraft manufacture

It seems that ovens are an essential piece of equipment when manufacturing aircraft parts but that may change if research from MIT (Massachusetts Institute of Technology) proves successful. An April 14, 2015 news item on ScienceDaily describes the current process and the MIT research,

Composite materials used in aircraft wings and fuselages are typically manufactured in large, industrial-sized ovens: Multiple polymer layers are blasted with temperatures up to 750 degrees Fahrenheit, and solidified to form a solid, resilient material. Using this approach, considerable energy is required first to heat the oven, then the gas around it, and finally the actual composite.

Aerospace engineers at MIT have now developed a carbon nanotube (CNT) film that can heat and solidify a composite without the need for massive ovens. When connected to an electrical power source, and wrapped over a multilayer polymer composite, the heated film stimulates the polymer to solidify.

The group tested the film on a common carbon-fiber material used in aircraft components, and found that the film created a composite as strong as that manufactured in conventional ovens — while using only 1 percent of the energy.

The new “out-of-oven” approach may offer a more direct, energy-saving method for manufacturing virtually any industrial composite, says Brian L. Wardle, an associate professor of aeronautics and astronautics at MIT.

“Typically, if you’re going to cook a fuselage for an Airbus A350 or Boeing 787, you’ve got about a four-story oven that’s tens of millions of dollars in infrastructure that you don’t need,” Wardle says. “Our technique puts the heat where it is needed, in direct contact with the part being assembled. Think of it as a self-heating pizza. … Instead of an oven, you just plug the pizza into the wall and it cooks itself.”

Wardle says the carbon nanotube film is also incredibly lightweight: After it has fused the underlying polymer layers, the film itself — a fraction of a human hair’s diameter — meshes with the composite, adding negligible weight.

An April 14, 2015 MIT news release, which originated the news item, describes the origins of the team’s latest research, the findings, and the implications,

Carbon nanotube deicers

Wardle and his colleagues have experimented with CNT films in recent years, mainly for deicing airplane wings. The team recognized that in addition to their negligible weight, carbon nanotubes heat efficiently when exposed to an electric current.

The group first developed a technique to create a film of aligned carbon nanotubes composed of tiny tubes of crystalline carbon, standing upright like trees in a forest. The researchers used a rod to roll the “forest” flat, creating a dense film of aligned carbon nanotubes.

In experiments, Wardle and his team integrated the film into airplane wings via conventional, oven-based curing methods, showing that when voltage was applied, the film generated heat, preventing ice from forming.

The deicing tests inspired a question: If the CNT film could generate heat, why not use it to make the composite itself?

How hot can you go?

In initial experiments, the researchers investigated the film’s potential to fuse two types of aerospace-grade composite typically used in aircraft wings and fuselages. Normally the material, composed of about 16 layers, is solidified, or cross-linked, in a high-temperature industrial oven.

The researchers manufactured a CNT film about the size of a Post-It note, and placed the film over a square of Cycom 5320-1. They connected electrodes to the film, then applied a current to heat both the film and the underlying polymer in the Cycom composite layers.

The team measured the energy required to solidify, or cross-link, the polymer and carbon fiber layers, finding that the CNT film used one-hundredth the electricity required for traditional oven-based methods to cure the composite. Both methods generated composites with similar properties, such as cross-linking density.

Wardle says the results pushed the group to test the CNT film further: As different composites require different temperatures in order to fuse, the researchers looked to see whether the CNT film could, quite literally, take the heat.

“At some point, heaters fry out,” Wardle says. “They oxidize, or have different ways in which they fail. What we wanted to see was how hot could this material go.”

To do this, the group tested the film’s ability to generate higher and higher temperatures, and found it topped out at over 1,000 F. In comparison, some of the highest-temperature aerospace polymers require temperatures up to 750 F in order to solidify.

“We can process at those temperatures, which means there’s no composite we can’t process,” Wardle says. “This really opens up all polymeric materials to this technology.”

The team is working with industrial partners to find ways to scale up the technology to manufacture composites large enough to make airplane fuselages and wings.

“There needs to be some thought given to electroding, and how you’re going to actually make the electrical contact efficiently over very large areas,” Wardle says. “You’d need much less power than you are currently putting into your oven. I don’t think it’s a challenge, but it has to be done.”

Gregory Odegard, a professor of computational mechanics at Michigan Technological University, says the group’s carbon nanotube film may go toward improving the quality and efficiency of fabrication processes for large composites, such as wings on commercial aircraft. The new technique may also open the door to smaller firms that lack access to large industrial ovens.

“Smaller companies that want to fabricate composite parts may be able to do so without investing in large ovens or outsourcing,” says Odegard, who was not involved in the research. “This could lead to more innovation in the composites sector, and perhaps improvements in the performance and usage of composite materials.”

It can be interesting to find out who funds the research (from the news release),

This research was funded in part by Airbus Group, Boeing, Embraer, Lockheed Martin, Saab AB, TohoTenax, ANSYS Inc., the Air Force Research Laboratory at Wright-Patterson Air Force Base, and the U.S. Army Research Office.

Here’s a link to and citation for the research paper,

Impact of carbon nanotube length on electron transport in aligned carbon nanotube networks by Jeonyoon Lee, Itai Y. Stein, Mackenzie E. Devoe, Diana J. Lewis, Noa Lachman, Seth S. Kessler, Samuel T. Buschhorn, and Brian L. Wardle. Appl. Phys. Lett. 106, 053110 (2015); http://dx.doi.org/10.1063/1.4907608

This paper is behind a paywall.

Entangling thousands of atoms

Quantum entanglement as an idea seems extraordinary to me, like something from a fevered imagination made possible only with certain kinds of hallucinogens. I suppose you could call theoretical physicists who’ve conceptualized entanglement a different breed, as they don’t seem to need chemical assistance for their flights of fancy, which turn out to be reality. Researchers at MIT (Massachusetts Institute of Technology) and the University of Belgrade (Serbia) have entangled thousands of atoms with a single photon according to a March 26, 2015 news item on Nanotechnology Now,

Physicists from MIT and the University of Belgrade have developed a new technique that can successfully entangle 3,000 atoms using only a single photon. The results, published today in the journal Nature, represent the largest number of particles that have ever been mutually entangled experimentally.

The researchers say the technique provides a realistic method to generate large ensembles of entangled atoms, which are key components for realizing more-precise atomic clocks.

“You can make the argument that a single photon cannot possibly change the state of 3,000 atoms, but this one photon does — it builds up correlations that you didn’t have before,” says Vladan Vuletic, the Lester Wolfe Professor in MIT’s Department of Physics, and the paper’s senior author. “We have basically opened up a new class of entangled states we can make, but there are many more new classes to be explored.”

A March 26, 2015 MIT news release by Jennifer Chu (also on EurekAlert but dated March 25, 2015), which originated the news item, describes entanglement with particular attention to how it relates to atomic timekeeping,

Entanglement is a curious phenomenon: As the theory goes, two or more particles may be correlated in such a way that any change to one will simultaneously change the other, no matter how far apart they may be. For instance, if one atom in an entangled pair were somehow made to spin clockwise, the other atom would instantly be known to spin counterclockwise, even though the two may be physically separated by thousands of miles.

The phenomenon of entanglement, which physicist Albert Einstein once famously dismissed as “spooky action at a distance,” is described not by the laws of classical physics, but by quantum mechanics, which explains the interactions of particles at the nanoscale. At such minuscule scales, particles such as atoms are known to behave differently from matter at the macroscale.

Scientists have been searching for ways to entangle not just pairs, but large numbers of atoms; such ensembles could be the basis for powerful quantum computers and more-precise atomic clocks. The latter is a motivation for Vuletic’s group.

Today’s best atomic clocks are based on the natural oscillations within a cloud of trapped atoms. As the atoms oscillate, they act as a pendulum, keeping steady time. A laser beam within the clock, directed through the cloud of atoms, can detect the atoms’ vibrations, which ultimately determine the length of a single second.

“Today’s clocks are really amazing,” Vuletic says. “They would be less than a minute off if they ran since the Big Bang — that’s the stability of the best clocks that exist today. We’re hoping to get even further.”

The accuracy of atomic clocks improves as more and more atoms oscillate in a cloud. Conventional atomic clocks’ precision is proportional to the square root of the number of atoms: For example, a clock with nine times more atoms would only be three times as accurate. If these same atoms were entangled, a clock’s precision could be directly proportional to the number of atoms — in this case, nine times as accurate. The larger the number of entangled particles, then, the better an atomic clock’s timekeeping.
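The scaling described above can be made concrete in a few lines. For N unentangled atoms, precision improves as the square root of N (the standard quantum limit); for fully entangled atoms it can improve in direct proportion to N (the Heisenberg limit). A minimal sketch:

```python
import math

def sql_gain(n_atoms):
    """Standard quantum limit: precision improves as sqrt(N) for unentangled atoms."""
    return math.sqrt(n_atoms)

def heisenberg_gain(n_atoms):
    """Heisenberg limit: precision can improve as N for fully entangled atoms."""
    return n_atoms

# Nine times more atoms: an unentangled clock is only 3x more precise,
# while a fully entangled one could be 9x more precise.
print(sql_gain(9), heisenberg_gain(9))  # 3.0 9
```

This gap between sqrt(N) and N is why entangling thousands of atoms, rather than pairs, matters for timekeeping.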

It seems weak lasers make big entanglements possible (from the news release),

Scientists have so far been able to entangle large groups of atoms, although most attempts have only generated entanglement between pairs in a group. Only one team has successfully entangled 100 atoms — the largest mutual entanglement to date, and only a small fraction of the whole atomic ensemble.

Now Vuletic and his colleagues have successfully created a mutual entanglement among 3,000 atoms, virtually all the atoms in the ensemble, using very weak laser light — down to pulses containing a single photon. The weaker the light, the better, Vuletic says, as it is less likely to disrupt the cloud. “The system remains in a relatively clean quantum state,” he says.

The researchers first cooled a cloud of atoms, then trapped them in a laser trap, and sent a weak laser pulse through the cloud. They then set up a detector to look for a particular photon within the beam. Vuletic reasoned that if a photon has passed through the atom cloud without event, its polarization, or direction of oscillation, would remain the same. If, however, a photon has interacted with the atoms, its polarization rotates just slightly — a sign that it was affected by quantum “noise” in the ensemble of spinning atoms, with the noise being the difference in the number of atoms spinning clockwise and counterclockwise.

“Every now and then, we observe an outgoing photon whose electric field oscillates in a direction perpendicular to that of the incoming photons,” Vuletic says. “When we detect such a photon, we know that must have been caused by the atomic ensemble, and surprisingly enough, that detection generates a very strongly entangled state of the atoms.”

Vuletic and his colleagues are currently using the single-photon detection technique to build a state-of-the-art atomic clock that they hope will overcome what’s known as the “standard quantum limit” — a limit to how accurate measurements can be in quantum systems. Vuletic says the group’s current setup may be a step toward developing even more complex entangled states.

“This particular state can improve atomic clocks by a factor of two,” Vuletic says. “We’re striving toward making even more complicated states that can go further.”

This research was supported in part by the National Science Foundation, the Defense Advanced Research Projects Agency, and the Air Force Office of Scientific Research.

Here’s a link to and a citation for the paper,

Entanglement with negative Wigner function of almost 3,000 atoms heralded by one photon by Robert McConnell, Hao Zhang, Jiazhong Hu, Senka Ćuk & Vladan Vuletić. Nature 519, 439–442 (26 March 2015) doi:10.1038/nature14293 Published online 25 March 2015

This article is behind a paywall but there is a free preview via ReadCube Access.

This image illustrates the entanglement of a large number of atoms. The atoms, shown in purple, are shown mutually entangled with one another. Image: Christine Daniloff/MIT and Jose-Luis Olivares/MIT