Tag Archives: MIT

People for the Ethical Treatment of Animals (PETA) and a grant for in vitro nanotoxicity testing

This grant seems to have gotten its start at a workshop held at the US Environmental Protection Agency (EPA) in Washington, D.C., Feb. 24-25, 2015, as per this webpage on the People for the Ethical Treatment of Animals (PETA) International Science Consortium Limited website,

The invitation-only workshop included experts from different sectors (government, industry, academia and NGO) and disciplines (in vitro and in vivo inhalation studies of NMs, fibrosis, dosimetry, fluidic models, aerosol engineering, and regulatory assessment). It focused on the technical details for the development and preliminary assessment of the relevance and reliability of an in vitro test to predict the development of pulmonary fibrosis in cells co-cultured at the air-liquid interface following exposure to aerosolized multi-walled carbon nanotubes (MWCNTs). During the workshop, experts made recommendations on cell types, exposure systems, endpoints and dosimetry considerations required to develop the in vitro model for hazard identification of MWCNTs.

The method is intended to be included in a non-animal test battery to reduce and eventually replace the use of animals in studies to assess the inhalation toxicity of engineered NMs. The long-term vision is to develop a battery of in silico and in vitro assays that can be used in an integrated testing strategy, providing comprehensive information on biological endpoints relevant to inhalation exposure to NMs which could be used in the hazard ranking of substances in the risk assessment process.

A September 1, 2015 news item on Azonano provides an update,

The PETA International Science Consortium Ltd. announced today the winners of a $200,000 award for the design of an in vitro test to predict the development of lung fibrosis in humans following exposure to nanomaterials, such as multi-walled carbon nanotubes.

Professor Dr. Barbara Rothen-Rutishauser of the Adolphe Merkle Institute at the University of Fribourg, Switzerland and Professor Dr. Vicki Stone of the School of Life Sciences at Heriot-Watt University, Edinburgh, U.K. will jointly develop the test method. Professor Rothen-Rutishauser co-chairs the BioNanomaterials research group at the Adolphe Merkle Institute, where her research is focused on the study of nanomaterial-cell interactions in the lung using three-dimensional cell models. Professor Vicki Stone is the Director of the Nano Safety Research Group at Heriot-Watt University and the Director of Toxicology for SAFENANO.

The Science Consortium is also funding MatTek Corporation for the development of a three-dimensional reconstructed primary human lung tissue model to be used in Professors Rothen-Rutishauser and Stone’s work. MatTek Corporation has extensive expertise in manufacturing human cell-based, organotypic in vitro models for use in regulatory and basic research applications. The work at MatTek will be led by Dr. Patrick Hayden, Vice President of Scientific Affairs, and Dr. Anna Maione, head of MatTek’s airway models research group.

I was curious about MatTek Corporation and found this on the company’s About Us webpage,

MatTek Corporation was founded in 1985 by two chemical engineering professors from MIT. In 1991 the company leveraged its core polymer surface modification technology into the emerging tissue engineering market.

MatTek Corporation is at the forefront of tissue engineering and is a world leader in the production of innovative 3D reconstructed human tissue models. Our skin, ocular, and respiratory tissue models are used in regulatory toxicology (OECD, EU guidelines) and address toxicology and efficacy concerns throughout the cosmetics, chemical, pharmaceutical and household product industries.

EpiDerm™, MatTek’s first 3D human cell based in vitro model, was introduced in 1993 and became an immediate technical and commercial success.

I wish them good luck in their research on developing better ways to test toxicity.

Carbon nanotubes as sensors in the body

Rachel Ehrenberg has written an Aug. 21, 2015 news item for the journal Nature about the latest and greatest carbon nanotube-based biomedical sensors,

The future of medical sensors may be going down the tubes. Chemists are developing tiny devices made from carbon nanotubes wrapped with polymers to detect biologically important compounds such as insulin, nitric oxide and the blood-clotting protein fibrinogen. The hope is that these sensors could simplify and automate diagnostic tests.

Preliminary experiments in mice, reported by scientists at a meeting of the American Chemical Society in Boston, Massachusetts, this week [Aug. 16 – 20, 2015], suggest that the devices are safe to introduce into the bloodstream or implant under the skin. Researchers also presented data showing that the nanotube–polymer complexes could measure levels of large molecules, a feat that has been difficult for existing technologies.

Ehrenberg focuses on one laboratory in particular (Note: Links have been removed),

“Anything the body makes, it is meant to degrade,” says chemical engineer Michael Strano, whose lab at the Massachusetts Institute of Technology (MIT) in Cambridge is behind much of the latest work. “Our vision is to make a sensing platform that can monitor a whole range of molecules, and do it in the long term.”

To design one sensor, MIT researchers coated nanotubes with a mix of polymers and nucleotides and screened for configurations that would bind to the protein fibrinogen. This large molecule is important for building blood clots; its concentration can indicate bleeding disorders, liver disease or impending cardiovascular trouble. The team recently hit on a material that worked — a first for such a large molecule, according to MIT nanotechnology specialist Gili Bisker. Bisker said at the chemistry meeting that the fibrinogen-detecting nanotubes could be used to measure levels of the protein in blood samples, or implanted in body tissue to detect changing fibrinogen levels that might indicate a clot.

The MIT team has also developed a sensor that can be inserted beneath the skin to monitor glucose or insulin levels in real time, Bisker reported. The team imagines putting a small patch that contains a wireless device on the skin just above the embedded sensor. The patch would shine light on the sensor and measure its fluorescence, then transmit that data to a mobile phone for real-time monitoring.

Another version of the sensor, developed at MIT by biomedical engineer Nicole Iverson and colleagues, detects nitric oxide. This signalling molecule typically indicates inflammation and is associated with many cancer cells. When embedded in a hydrogel matrix, the sensor kept working in mice for more than 400 days and caused no local inflammation, MIT chemical engineer Michael Lee reported. The nitric oxide sensors also performed well when injected into the bloodstreams of mice, successfully passing through small capillaries in the lungs, which are an area of concern for nanotube toxicity. …

There’s at least one corporate laboratory (Google X) working on biosensors, although their focus is a little different. From a Jan. 9, 2015 article by Brian Womack and Anna Edney for BloombergBusiness,

Google Inc. sent employees with ties to its secretive X research group to meet with U.S. regulators who oversee medical devices, raising the possibility of a new product that may involve biosensors from the unit that developed computerized glasses.

The meeting included at least four Google workers, some of whom have connections with Google X — and have done research on sensors, including contact lenses that help wearers monitor their biological data. Google staff met with those at the Food and Drug Administration who regulate eye devices and diagnostics for heart conditions, according to the agency’s public calendar. [emphasis mine]

This approach from Google is considered noninvasive,

“There is actually one interface on the surface of the body that can literally provide us with a window of what happens inside, and that’s the surface of the eye,” Parviz [Babak Parviz, … was involved in the Google Glass project and has talked about putting displays on contact lenses, including lenses that monitor wearer’s health]  said in a video posted on YouTube. “It’s a very interesting chemical interface.”

Of course, the assumption is that all this monitoring is going to result in healthier people, but I can’t help thinking about an old saying, ‘a little knowledge can be a dangerous thing’. For example, we lived in a world where bacteria roamed free and then we learned how to make them visible, determined they were disease-causing, and began campaigns for killing them off. Now, it turns out that at least some bacteria are good for us and, moreover, we’ve created other, more dangerous bacteria that are drug-resistant. Based on the bacteria example, is it possible that with these biosensors we will observe new phenomena and make similar mistakes?

Scaling graphene production up to industrial strength

If graphene is going to be a ubiquitous material in the future, production methods need to change. An Aug. 7, 2015 news item on Nanowerk announces a new technique to achieve that goal,

Producing graphene in bulk is critical when it comes to the industrial exploitation of this exceptional two-dimensional material. To that end, [European Commission] Graphene Flagship researchers have developed a novel variant on the chemical vapour deposition process which yields high quality material in a scalable manner. This advance should significantly narrow the performance gap between synthetic and natural graphene.

An Aug. 7, 2015 European Commission Graphene Flagship press release by Francis Sedgemore, which originated the news item, describes the problem,

Media-friendly Nobel laureates peeling layers of graphene from bulk graphite with sticky tape may capture the public imagination, but as a manufacturing process the technique is somewhat lacking. Mechanical exfoliation may give us pristine graphene, but industry requires scalable and cost-effective production processes with much higher yields.

On to the new method (from the press release),

Flagship-affiliated physicists from RWTH Aachen University and Forschungszentrum Jülich have together with colleagues in Japan devised a method for peeling graphene flakes from a CVD substrate with the help of intermolecular forces. …

Key to the process is the strong van der Waals interaction that exists between graphene and hexagonal boron nitride, another 2d material within which it is encapsulated. The van der Waals force is the attractive sum of short-range electric dipole interactions between uncharged molecules.
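
A quick aside from me: neither the news item nor the press release puts numbers to that force, but the textbook scaling is worth recalling (my illustration, not the researchers’). For two neutral molecules a distance r apart, the van der Waals interaction energy falls off as

U(r) \approx -\frac{C}{r^{6}}

while summing those pairwise terms for two parallel surfaces separated by a gap d gives an energy per unit area of roughly

E(d) \approx -\frac{A}{12\pi d^{2}}

where A is the material-dependent Hamaker constant. That much gentler 1/d² fall-off between extended, atomically flat sheets is part of why graphene and hexagonal boron nitride adhere strongly enough for the boron nitride to pick the graphene up off the copper.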

Thanks to strong van der Waals interactions between graphene and boron nitride, CVD graphene can be separated from the copper and transferred to an arbitrary substrate. The process allows for re-use of the catalyst copper foil in further growth cycles, and minimises contamination of the graphene due to processing.

Raman spectroscopy and transport measurements on the graphene/boron nitride heterostructures reveal high electron mobilities comparable with those observed in similar assemblies based on exfoliated graphene. Furthermore – and this comes as something of a surprise to the researchers – no noticeable performance changes are detected between devices developed in the first and subsequent growth cycles. This confirms the copper as a recyclable resource in the graphene fabrication process.

“Chemical vapour deposition is a highly scalable and cost-efficient technology,” says Christoph Stampfer, head of the 2nd Institute of Physics A in Aachen, and co-author of the technical article. “Until now, graphene synthesised this way has been significantly lower in quality than that obtained with the scotch-tape method, especially when it comes to the material’s electronic properties. But no longer. We demonstrate a novel fabrication process based on CVD that yields ultra-high quality synthetic graphene samples. The process is in principle suitable for industrial-scale production, and narrows the gap between graphene research and its technological applications.”

With their dry-transfer process, Banszerus and his colleagues have shown that the electronic properties of CVD-grown graphene can in principle match those of ultrahigh-mobility exfoliated graphene. The key is to transfer CVD graphene from its growth substrate in such a way that chemical contamination is avoided. The high mobility of pristine graphene is thus preserved, and the approach allows for the substrate material to be recycled without degradation.

Here’s a link to and citation for the paper,

Ultrahigh-mobility graphene devices from chemical vapor deposition on reusable copper by Luca Banszerus, Michael Schmitz, Stephan Engels, Jan Dauber, Martin Oellers, Federica Haupt, Kenji Watanabe, Takashi Taniguchi, Bernd Beschoten, and Christoph Stampfer. Science Advances 31 Jul 2015: Vol. 1, no. 6, e1500222 DOI: 10.1126/sciadv.1500222

This article appears to be open access.

For those interested in finding out more about chemical vapour deposition (CVD), David Chandler has written a June 19, 2015 article for the Massachusetts Institute of Technology (MIT) titled: Explained: chemical vapor deposition (Technique enables production of pure, uniform coatings of metals or polymers, even on contoured surfaces.)

Nanoscale imaging of a mouse brain

Researchers have developed a new brain imaging tool they would like to use as a founding element for a national brain observatory. From a July 30, 2015 news item on Azonano,

A new imaging tool developed by Boston scientists could do for the brain what the telescope did for space exploration.

In the first demonstration of how the technology works, published July 30 in the journal Cell, the researchers look inside the brain of an adult mouse at a scale previously unachievable, generating images at a nanoscale resolution. The inventors’ long-term goal is to make the resource available to the scientific community in the form of a national brain observatory.

A July 30, 2015 Cell Press news release on EurekAlert, which originated the news item, expands on the theme,

“I’m a strong believer in bottom-up science, which is a way of saying that I would prefer to generate a hypothesis from the data and test it,” says senior study author Jeff Lichtman, of Harvard University. “For people who are imagers, being able to see all of these details is wonderful and we’re getting an opportunity to peer into something that has remained somewhat intractable for so long. It’s about time we did this, and it is what people should be doing about things we don’t understand.”

The researchers have begun the process of mining their imaging data by looking first at an area of the brain that receives sensory information from mouse whiskers, which help the animals orient themselves and are even more sensitive than human fingertips. The scientists used a program called VAST, developed by co-author Daniel Berger of Harvard and the Massachusetts Institute of Technology, to assign different colors and piece apart each individual “object” (e.g., neuron, glial cell, blood vessel cell, etc.).

“The complexity of the brain is much more than what we had ever imagined,” says study first author Narayanan “Bobby” Kasthuri, of the Boston University School of Medicine. “We had this clean idea of how there’s a really nice order to how neurons connect with each other, but if you actually look at the material it’s not like that. The connections are so messy that it’s hard to imagine a plan to it, but we checked and there’s clearly a pattern that cannot be explained by randomness.”

The researchers see great potential in the tool’s ability to answer questions about what a neurological disorder actually looks like in the brain, as well as what makes the human brain different from other animals and different between individuals. Who we become is very much a product of the connections our neurons make in response to various life experiences. To be able to compare the physical neuron-to-neuron connections in an infant, a mathematical genius, and someone with schizophrenia would be a leap in our understanding of how our brains shape who we are (or vice versa).

The cost and data storage demands for this type of research are still high, but the researchers expect expenses to drop over time (as has been the case with genome sequencing). To facilitate data sharing, the scientists are now partnering with Argonne National Laboratory with the hopes of creating a national brain laboratory that neuroscientists around the world can access within the next few years.

“It’s bittersweet that there are many scientists who think this is a total waste of time as well as a big investment in money and effort that could be better spent answering questions that are more proximal,” Lichtman says. “As long as data is showing you things that are unexpected, then you’re definitely doing the right thing. And we are certainly far from being out of the surprise element. There’s never a time when we look at this data that we don’t see something that we’ve never seen before.”

Here’s a link to and a citation for the paper,

Saturated Reconstruction of a Volume of Neocortex by Narayanan Kasthuri, Kenneth Jeffrey Hayworth, Daniel Raimund Berger, Richard Lee Schalek, José Angel Conchello, Seymour Knowles-Barley, Dongil Lee, Amelio Vázquez-Reina, Verena Kaynig, Thouis Raymond Jones, Mike Roberts, Josh Lyskowski Morgan, Juan Carlos Tapia, H. Sebastian Seung, William Gray Roncal, Joshua Tzvi Vogelstein, Randal Burns, Daniel Lewis Sussman, Carey Eldin Priebe, Hanspeter Pfister, Jeff William Lichtman. Cell, Volume 162, Issue 3, pp. 648–661, 30 July 2015 DOI: http://dx.doi.org/10.1016/j.cell.2015.06.054

This appears to be an open access paper.

Nanomaterials and UV (ultraviolet) light for environmental cleanups

I think this is the first time I’ve seen anything about a technology that removes toxic materials from both water and soil; it’s usually one or the other. A July 22, 2015 news item on Nanowerk makes the announcement (Note: A link has been removed),

Many human-made pollutants in the environment resist degradation through natural processes, and disrupt hormonal and other systems in mammals and other animals. Removing these toxic materials — which include pesticides and endocrine disruptors such as bisphenol A (BPA) — with existing methods is often expensive and time-consuming.

In a new paper published this week in Nature Communications (“Nanoparticles with photoinduced precipitation for the extraction of pollutants from water and soil”), researchers from MIT [Massachusetts Institute of Technology] and the Federal University of Goiás in Brazil demonstrate a novel method for using nanoparticles and ultraviolet (UV) light to quickly isolate and extract a variety of contaminants from soil and water.

A July 21, 2015 MIT news release by Jonathan Mingle, which originated the news item, describes the inspiration and the research in more detail,

Ferdinand Brandl and Nicolas Bertrand, the two lead authors, are former postdocs in the laboratory of Robert Langer, the David H. Koch Institute Professor at MIT’s Koch Institute for Integrative Cancer Research. (Eliana Martins Lima, of the Federal University of Goiás, is the other co-author.) Both Brandl and Bertrand are trained as pharmacists, and describe their discovery as a happy accident: They initially sought to develop nanoparticles that could be used to deliver drugs to cancer cells.

Brandl had previously synthesized polymers that could be cleaved apart by exposure to UV light. But he and Bertrand came to question their suitability for drug delivery, since UV light can be damaging to tissue and cells, and doesn’t penetrate through the skin. When they learned that UV light was used to disinfect water in certain treatment plants, they began to ask a different question.

“We thought if they are already using UV light, maybe they could use our particles as well,” Brandl says. “Then we came up with the idea to use our particles to remove toxic chemicals, pollutants, or hormones from water, because we saw that the particles aggregate once you irradiate them with UV light.”

A trap for ‘water-fearing’ pollution

The researchers synthesized polymers from polyethylene glycol, a widely used compound found in laxatives, toothpaste, and eye drops and approved by the Food and Drug Administration as a food additive, and polylactic acid, a biodegradable plastic used in compostable cups and glassware.

Nanoparticles made from these polymers have a hydrophobic core and a hydrophilic shell. Due to molecular-scale forces, in a solution hydrophobic pollutant molecules move toward the hydrophobic nanoparticles, and adsorb onto their surface, where they effectively become “trapped.” This same phenomenon is at work when spaghetti sauce stains the surface of plastic containers, turning them red: In that case, both the plastic and the oil-based sauce are hydrophobic and interact together.

If left alone, these nanomaterials would remain suspended and dispersed evenly in water. But when exposed to UV light, the stabilizing outer shell of the particles is shed, and — now “enriched” by the pollutants — they form larger aggregates that can then be removed through filtration, sedimentation, or other methods.

The researchers used the method to extract phthalates, hormone-disrupting chemicals used to soften plastics, from wastewater; BPA, another endocrine-disrupting synthetic compound widely used in plastic bottles and other resinous consumer goods, from thermal printing paper samples; and polycyclic aromatic hydrocarbons, carcinogenic compounds formed from incomplete combustion of fuels, from contaminated soil.

The process is irreversible and the polymers are biodegradable, minimizing the risks of leaving toxic secondary products to persist in, say, a body of water. “Once they switch to this macro situation where they’re big clumps,” Bertrand says, “you won’t be able to bring them back to the nano state again.”

The fundamental breakthrough, according to the researchers, was confirming that small molecules do indeed adsorb passively onto the surface of nanoparticles.

“To the best of our knowledge, it is the first time that the interactions of small molecules with pre-formed nanoparticles can be directly measured,” they write in Nature Communications.

Nano cleansing

Even more exciting, they say, is the wide range of potential uses, from environmental remediation to medical analysis.

The polymers are synthesized at room temperature, and don’t need to be specially prepared to target specific compounds; they are broadly applicable to all kinds of hydrophobic chemicals and molecules.

“The interactions we exploit to remove the pollutants are non-specific,” Brandl says. “We can remove hormones, BPA, and pesticides that are all present in the same sample, and we can do this in one step.”

And the nanoparticles’ high surface-area-to-volume ratio means that only a small amount is needed to remove a relatively large quantity of pollutants. The technique could thus offer potential for the cost-effective cleanup of contaminated water and soil on a wider scale.
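
To make that surface-area-to-volume point concrete (my arithmetic, not the paper’s): for a sphere of radius r, the ratio of surface area to volume is

\frac{4\pi r^{2}}{\tfrac{4}{3}\pi r^{3}} = \frac{3}{r}

so shrinking the particle radius from, say, 1 micrometre to 50 nanometres multiplies the adsorbing surface available per unit volume of polymer by a factor of 20. That geometry is the whole reason a small mass of nanoparticles can mop up a comparatively large quantity of pollutant.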

“From the applied perspective, we showed in a system that the adsorption of small molecules on the surface of the nanoparticles can be used for extraction of any kind,” Bertrand says. “It opens the door for many other applications down the line.”

This approach could possibly be further developed, he speculates, to replace the widespread use of organic solvents for everything from decaffeinating coffee to making paint thinners. Bertrand cites DDT, banned for use as a pesticide in the U.S. since 1972 but still widely used in other parts of the world, as another example of a persistent pollutant that could potentially be remediated using these nanomaterials. “And for analytical applications where you don’t need as much volume to purify or concentrate, this might be interesting,” Bertrand says, offering the example of a cheap testing kit for urine analysis of medical patients.

The study also suggests the broader potential for adapting nanoscale drug-delivery techniques to environmental remediation.

“That we can apply some of the highly sophisticated, high-precision tools developed for the pharmaceutical industry, and now look at the use of these technologies in broader terms, is phenomenal,” says Frank Gu, an assistant professor of chemical engineering at the University of Waterloo in Canada, and an expert in nanoengineering for health care and medical applications.

“When you think about field deployment, that’s far down the road, but this paper offers a really exciting opportunity to crack a problem that is persistently present,” says Gu, who was not involved in the research. “If you take the normal conventional civil engineering or chemical engineering approach to treating it, it just won’t touch it. That’s where the most exciting part is.”

The researchers have made this illustration of their work available,

Nanoparticles that lose their stability upon irradiation with light have been designed to extract endocrine disruptors, pesticides, and other contaminants from water and soils. The system exploits the large surface-to-volume ratio of nanoparticles, while the photoinduced precipitation ensures nanomaterials are not released in the environment.
Image: Nicolas Bertrand Courtesy: MIT

Here’s a link to and a citation for the paper,

Nanoparticles with photoinduced precipitation for the extraction of pollutants from water and soil by Ferdinand Brandl, Nicolas Bertrand, Eliana Martins Lima & Robert Langer. Nature Communications 6, Article number: 7765 doi:10.1038/ncomms8765 Published 21 July 2015

This paper is open access.

IBM and its working 7nm test chip

I wrote about IBM and its plans for a 7nm computer chip last year in a July 11, 2014 posting, which featured IBM and mentioned HP Labs’ and other companies’ plans for shrinking their computer chips. Almost one year later, IBM has announced the accomplishment of a working 7nm test chip in a July 9, 2015 IBM news release on PRnewswire.com,

An alliance led by IBM Research (NYSE: IBM) today announced that it has produced the semiconductor industry’s first 7nm (nanometer) node test chips with functioning transistors.  The breakthrough, accomplished in partnership with GLOBALFOUNDRIES and Samsung at SUNY Polytechnic Institute’s Colleges of Nanoscale Science and Engineering (SUNY Poly CNSE), could result in the ability to place more than 20 billion tiny switches — transistors — on the fingernail-sized chips that power everything from smartphones to spacecraft.

To achieve the higher performance, lower power and scaling benefits promised by 7nm technology, researchers had to bypass conventional semiconductor manufacturing approaches. Among the novel processes and techniques pioneered by the IBM Research alliance were a number of industry-first innovations, most notably Silicon Germanium (SiGe) channel transistors and Extreme Ultraviolet (EUV) lithography integration at multiple levels.

Industry experts consider 7nm technology crucial to meeting the anticipated demands of future cloud computing and Big Data systems, cognitive computing, mobile products and other emerging technologies. Part of IBM’s $3 billion, five-year investment in chip R&D (announced in 2014), this accomplishment was made possible through a unique public-private partnership with New York State and joint development alliance with GLOBALFOUNDRIES, Samsung and equipment suppliers. The team is based at SUNY Poly’s NanoTech Complex in Albany [New York state].

“For business and society to get the most out of tomorrow’s computers and devices, scaling to 7nm and beyond is essential,” said Arvind Krishna, senior vice president and director of IBM Research. “That’s why IBM has remained committed to an aggressive basic research agenda that continually pushes the limits of semiconductor technology. Working with our partners, this milestone builds on decades of research that has set the pace for the microelectronics industry, and positions us to advance our leadership for years to come.”

Microprocessors utilizing 22nm and 14nm technology power today’s servers, cloud data centers and mobile devices, and 10nm technology is well on the way to becoming a mature technology. The IBM Research-led alliance achieved close to 50 percent area scaling improvements over today’s most advanced technology, introduced SiGe channel material for transistor performance enhancement at 7nm node geometries, process innovations to stack them below 30nm pitch and full integration of EUV lithography at multiple levels. These techniques and scaling could result in at least a 50 percent power/performance improvement for next generation mainframe and POWER systems that will power the Big Data, cloud and mobile era.
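
A back-of-envelope check on that ‘close to 50 percent’ figure (mine, not IBM’s): if a full node shrink scales every linear dimension by a factor of about 0.7, the area occupied by a given circuit scales by the square of that,

(0.7)^{2} = 0.49

i.e. the same logic fits in roughly half the area, which is exactly the kind of area-scaling improvement the release is describing.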

“Governor Andrew Cuomo’s trailblazing public-private partnership model is catalyzing historic innovation and advancement. Today’s [July 8, 2015] announcement is just one example of our collaboration with IBM, which furthers New York State’s global leadership in developing next generation technologies,” said Dr. Michael Liehr, SUNY Poly Executive Vice President of Innovation and Technology and Vice President of Research.  “Enabling the first 7nm node transistors is a significant milestone for the entire semiconductor industry as we continue to push beyond the limitations of our current capabilities.”

“Today’s announcement marks the latest achievement in our long history of collaboration to accelerate development of next-generation technology,” said Gary Patton, CTO and Head of Worldwide R&D at GLOBALFOUNDRIES. “Through this joint collaborative program based at the Albany NanoTech Complex, we are able to maintain our focus on technology leadership for our clients and partners by helping to address the development challenges central to producing a smaller, faster, more cost efficient generation of semiconductors.”

The 7nm node milestone continues IBM’s legacy of historic contributions to silicon and semiconductor innovation. They include the invention or first implementation of the single cell DRAM, the Dennard Scaling Laws, chemically amplified photoresists, copper interconnect wiring, Silicon on Insulator, strained engineering, multi core microprocessors, immersion lithography, high speed SiGe, High-k gate dielectrics, embedded DRAM, 3D chip stacking and Air gap insulators.

In 2014, they were talking about carbon nanotubes with regard to the 7nm chip, so this shift to silicon germanium is interesting.

Sebastian Anthony in a July 9, 2015 article for Ars Technica offers some intriguing insight into the accomplishment and the technology (Note: A link has been removed),

… While it should be stressed that commercial 7nm chips remain at least two years away, this test chip from IBM and its partners is extremely significant for three reasons: it’s a working sub-10nm chip (this is pretty significant in itself); it’s the first commercially viable sub-10nm FinFET logic chip that uses silicon-germanium as the channel material; and it appears to be the first commercially viable design produced with extreme ultraviolet (EUV) lithography.

Technologically, SiGe and EUV are both very significant. SiGe has higher electron mobility than pure silicon, which makes it better suited for smaller transistors. The gap between two silicon nuclei is about 0.5nm; as the gate width gets ever smaller (about 7nm in this case), the channel becomes so small that the handful of silicon atoms can’t carry enough current. By mixing some germanium into the channel, electron mobility increases, and adequate current can flow. Silicon generally runs into problems at sub-10nm nodes, and we can expect Intel and TSMC to follow a similar path to IBM, GlobalFoundries, and Samsung (aka the Common Platform alliance).
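
Anthony’s atom-counting argument is easy to make explicit (my arithmetic, using his 0.5nm figure): a 7nm channel spans only about

\frac{7\,\text{nm}}{0.5\,\text{nm}} \approx 14

silicon atoms end to end, so losing or misplacing even a handful of atoms materially changes how much current the channel can carry; hence the appeal of mixing in germanium to raise electron mobility.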

EUV lithography is an even more interesting innovation. Basically, as chip features get smaller, you need a narrower beam of light to etch those features accurately, or you need to use multiple patterning (which we won’t go into here). The current state of the art for lithography is a 193nm ArF (argon fluoride) laser; that is, the wavelength is 193nm wide. Complex optics and multiple painstaking steps are required to etch 14nm features using a 193nm light source. EUV has a wavelength of just 13.5nm, which will handily take us down into the sub-10nm realm, but so far it has proven very difficult and expensive to deploy commercially (it has been just around the corner for quite a few years now).
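
For readers who want the rule of thumb behind those numbers, optical lithography resolution is usually estimated with the Rayleigh criterion (standard lithography textbook material, not something from Anthony’s article):

CD = k_{1}\,\frac{\lambda}{NA}

where CD is the smallest printable feature, λ the wavelength, NA the numerical aperture of the optics, and k₁ a process factor with a practical floor near 0.25. Plugging in typical figures: a 193nm immersion scanner (NA ≈ 1.35) bottoms out around 0.25 × 193/1.35 ≈ 36nm per exposure, which is why 14nm-class features need multiple patterning, while an early EUV tool (λ = 13.5nm, NA ≈ 0.33) reaches roughly 0.25 × 13.5/0.33 ≈ 10nm in a single pass.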

If you’re interested in the nuances, I recommend reading Anthony’s article in its entirety.

One final comment: there was no discussion of electrodes or other metallic components associated with computer chips. The metallic components are a topic of some interest to me (anyway), given some research published by scientists at the Massachusetts Institute of Technology (MIT) last year. From my Oct. 14, 2014 posting,

Research from the Massachusetts Institute of Technology (MIT) has revealed a new property of metal nanoparticles, in this case, silver. From an Oct. 12, 2014 news item on ScienceDaily,

A surprising phenomenon has been found in metal nanoparticles: They appear, from the outside, to be liquid droplets, wobbling and readily changing shape, while their interiors retain a perfectly stable crystal configuration.

The research team behind the finding, led by MIT professor Ju Li, says the work could have important implications for the design of components in nanotechnology, such as metal contacts for molecular electronic circuits. [my emphasis added]

This discovery and others regarding materials and phase changes at ever diminishing sizes hint that a computer with a functioning 7nm chip might be a bit further off than IBM is suggesting.

Yarns of niobium nanowire for small electronic device boost at the University of British Columbia (Canada) and Massachusetts Institute of Technology (US)

It turns out that this research concerning supercapacitors is a collaboration between the University of British Columbia (Canada) and the Massachusetts Institute of Technology (MIT). From a July 7, 2015 news item by Stuart Milne for Azonano,

A team of researchers from MIT and the University of British Columbia has discovered an innovative method to deliver the short bursts of high power required by wearable electronic devices.

Such devices are used for monitoring health and fitness and, as such, are rapidly growing in the consumer electronics industry. However, a major drawback of these devices is that they are integrated with small batteries, which fail to deliver the amount of power required for data transmission.

According to the research team, one way to resolve this issue is to develop supercapacitors, which are capable of storing and releasing short bursts of electrical power required to transmit data from smartphones, computers, heart-rate monitors, and other wearable devices. Supercapacitors can also prove useful for other applications where short bursts of high power are required, for instance autonomous microrobots.

A July 7, 2015 MIT news release provides more detail about the research,

The new approach uses yarns, made from nanowires of the element niobium, as the electrodes in tiny supercapacitors (which are essentially pairs of electrically conducting fibers with an insulator between). The concept is described in a paper in the journal ACS Applied Materials and Interfaces by MIT professor of mechanical engineering Ian W. Hunter, doctoral student Seyed M. Mirvakili, and three others at the University of British Columbia.

Nanotechnology researchers have been working to increase the performance of supercapacitors for the past decade. Among nanomaterials, carbon-based nanoparticles — such as carbon nanotubes and graphene — have shown promising results, but they suffer from relatively low electrical conductivity, Mirvakili says.

In this new work, he and his colleagues have shown that desirable characteristics for such devices, such as high power density, are not unique to carbon-based nanoparticles, and that niobium nanowire yarn is a promising alternative.

“Imagine you’ve got some kind of wearable health-monitoring system,” Hunter says, “and it needs to broadcast data, for example using Wi-Fi, over a long distance.” At the moment, the coin-sized batteries used in many small electronic devices have very limited ability to deliver a lot of power at once, which is what such data transmissions need.

“Long-distance Wi-Fi requires a fair amount of power,” says Hunter, the George N. Hatsopoulos Professor in Thermodynamics in MIT’s Department of Mechanical Engineering, “but it may not be needed for very long.” Small batteries are generally poorly suited for such power needs, he adds.

“We know it’s a problem experienced by a number of companies in the health-monitoring or exercise-monitoring space. So an alternative is to go to a combination of a battery and a capacitor,” Hunter says: the battery for long-term, low-power functions, and the capacitor for short bursts of high power. Such a combination should be able to either increase the range of the device, or — perhaps more important in the marketplace — to significantly reduce size requirements.

The new nanowire-based supercapacitor exceeds the performance of existing batteries, while occupying a very small volume. “If you’ve got an Apple Watch and I shave 30 percent off the mass, you may not even notice,” Hunter says. “But if you reduce the volume by 30 percent, that would be a big deal,” he says: Consumers are very sensitive to the size of wearable devices.

The innovation is especially significant for small devices, Hunter says, because other energy-storage technologies — such as fuel cells, batteries, and flywheels — tend to be less efficient, or simply too complex to be practical when reduced to very small sizes. “We are in a sweet spot,” he says, with a technology that can deliver big bursts of power from a very small device.

Ideally, Hunter says, it would be desirable to have a high volumetric power density (the amount of power stored in a given volume) and high volumetric energy density (the amount of energy in a given volume). “Nobody’s figured out how to do that,” he says. However, with the new device, “We have fairly high volumetric power density, medium energy density, and a low cost,” a combination that could be well suited for many applications.

Niobium is a fairly abundant and widely used material, Mirvakili says, so the whole system should be inexpensive and easy to produce. “The fabrication cost is cheap,” he says. Other groups have made similar supercapacitors using carbon nanotubes or other materials, but the niobium yarns are stronger and 100 times more conductive. Overall, niobium-based supercapacitors can store up to five times as much power in a given volume as carbon nanotube versions.

Niobium also has a very high melting point — nearly 2,500 degrees Celsius — so devices made from these nanowires could potentially be suitable for use in high-temperature applications.

In addition, the material is highly flexible and could be woven into fabrics, enabling wearable forms; individual niobium nanowires are just 140 nanometers in diameter — 140 billionths of a meter across, or about one-thousandth the width of a human hair.

So far, the material has been produced only in lab-scale devices. The next step, already under way, is to figure out how to design a practical, easily manufactured version, the researchers say.

“The work is very significant in the development of smart fabrics and future wearable technologies,” says Geoff Spinks, a professor of engineering at the University of Wollongong, in Australia, who was not associated with this research. This paper, he adds, “convincingly demonstrates the impressive performance of niobium-based fiber supercapacitors.”

Here’s a link to and a citation for the paper,

High-Performance Supercapacitors from Niobium Nanowire Yarns by Seyed M. Mirvakili, Mehr Negar Mirvakili, Peter Englezos, John D. W. Madden, and Ian W. Hunter. ACS Appl. Mater. Interfaces, 2015, 7 (25), pp 13882–13888 DOI: 10.1021/acsami.5b02327 Publication Date (Web): June 12, 2015

Copyright © 2015 American Chemical Society

This paper is behind a paywall.

LiquiGlide, a nanotechnology-enabled coating for food packaging and oil and gas pipelines

Getting condiments out of their bottles should be a lot easier in several European countries in the near future. A June 30, 2015 news item on Nanowerk describes the technology and the business deal (Note: A link has been removed),

The days of wasting condiments — and other products — that stick stubbornly to the sides of their bottles may be gone, thanks to MIT [Massachusetts Institute of Technology] spinout LiquiGlide, which has licensed its nonstick coating to a major consumer-goods company.

Developed in 2009 by MIT’s Kripa Varanasi and David Smith, LiquiGlide is a liquid-impregnated coating that acts as a slippery barrier between a surface and a viscous liquid. Applied inside a condiment bottle, for instance, the coating clings permanently to its sides, while allowing the condiment to glide off completely, with no residue.

In 2012, amidst a flurry of media attention following LiquiGlide’s entry in MIT’s $100K Entrepreneurship Competition, Smith and Varanasi founded the startup — with help from the Institute — to commercialize the coating.

Today [June 30, 2015], Norwegian consumer-goods producer Orkla has signed a licensing agreement to use LiquiGlide’s coating for mayonnaise products sold in Germany, Scandinavia, and several other European nations. This comes on the heels of another licensing deal, with Elmer’s [Elmer’s Glue & Adhesives], announced in March [2015].

A June 30, 2015 MIT news release, which originated the news item, provides more details about the researcher/entrepreneurs’ plans,

But this is only the beginning, says Varanasi, an associate professor of mechanical engineering who is now on LiquiGlide’s board of directors and chief science advisor. The startup, which just entered the consumer-goods market, is courting deals with numerous producers of foods, beauty supplies, and household products. “Our coatings can work with a whole range of products, because we can tailor each coating to meet the specific requirements of each application,” Varanasi says.

Apart from providing savings and convenience, LiquiGlide aims to reduce the surprising amount of wasted products — especially food — that stick to container sides and get tossed. For instance, in 2009 Consumer Reports found that up to 15 percent of bottled condiments are ultimately thrown away. Keeping bottles clean, Varanasi adds, could also drastically cut the use of water and energy, as well as the costs associated with rinsing bottles before recycling. “It has huge potential in terms of critical sustainability,” he says.

Varanasi says LiquiGlide aims next to tackle buildup in oil and gas pipelines, which can cause corrosion and clogs that reduce flow. [emphasis mine] Future uses, he adds, could include coatings for medical devices such as catheters, deicing roofs and airplane wings, and improving manufacturing and process efficiency. “Interfaces are ubiquitous,” he says. “We want to be everywhere.”

The news release goes on to describe the research process in more detail and offers a plug for MIT’s innovation efforts,

LiquiGlide was originally developed while Smith worked on his graduate research in Varanasi’s research group. Smith and Varanasi were interested in preventing ice buildup on airplane surfaces and methane hydrate buildup in oil and gas pipelines.

Some initial work was on superhydrophobic surfaces, which trap pockets of air and naturally repel water. But both researchers found that these surfaces don’t, in fact, shed every bit of liquid. During phase transitions — when vapor turns to liquid, for instance — water droplets condense within microscopic gaps on surfaces, and steadily accumulate. This leads to loss of anti-icing properties of the surface. “Something that is nonwetting to macroscopic drops does not remain nonwetting for microscopic drops,” Varanasi says.

Inspired by the work of researcher David Quéré, of ESPCI in Paris, on slippery “hemisolid-hemiliquid” surfaces, Varanasi and Smith invented permanently wet “liquid-impregnated surfaces” — coatings that don’t have such microscopic gaps. The coatings consist of textured solid material that traps a liquid lubricant through capillary and intermolecular forces. The coating wicks through the textured solid surface, clinging permanently under the product, allowing the product to slide off the surface easily; other materials can’t enter the gaps or displace the coating. “One can say that it’s a self-lubricating surface,” Varanasi says.

Mixing and matching the materials, however, is a complicated process, Varanasi says. Liquid components of the coating, for instance, must be compatible with the chemical and physical properties of the sticky product, and generally immiscible. The solid material must form a textured structure while adhering to the container. And the coating can’t spoil the contents: Foodstuffs, for instance, require safe, edible materials, such as plants and insoluble fibers.

To help choose ingredients, Smith and Varanasi developed the basic scientific principles and algorithms that calculate how the liquid and solid coating materials, the product, and the geometry of the surface structures will all interact, to find the optimal “recipe.”
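
The news release keeps the actual recipe mathematics proprietary, but the published physics of liquid-impregnated surfaces hints at the kind of quantity such an algorithm has to weigh. As a loudly hedged sketch (my gloss on the general literature, not LiquiGlide’s method): whether a lubricant o stays spread beneath a product liquid w is governed by a spreading coefficient built from the pairwise interfacial tensions,

S_{o(w)} = \gamma_{wa} - \gamma_{oa} - \gamma_{ow}

with a positive S_{o(w)} favouring a stable lubricant film under the product, alongside criteria involving the texture’s roughness and solid fraction that determine whether capillary forces keep the lubricant locked in the solid matrix.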

Today, LiquiGlide develops coatings for clients and licenses the recipes to them. Included are instructions that detail the materials, equipment, and process required to create and apply the coating for their specific needs. “The state of the coating we end up with depends entirely on the properties of the product you want to slide over the surface,” says Smith, now LiquiGlide’s CEO.

Having researched materials for hundreds of different viscous liquids over the years — from peanut butter to crude oil to blood — LiquiGlide also has a database of optimal ingredients for its algorithms to pull from when customizing recipes. “Given any new product you want LiquiGlide for, we can zero in on a solution that meets all requirements necessary,” Varanasi says.

MIT: A lab for entrepreneurs

For years, Smith and Varanasi toyed around with commercial applications for LiquiGlide. But in 2012, with help from MIT’s entrepreneurial ecosystem, LiquiGlide went from lab to market in a matter of months.

Initially the idea was to bring coatings to the oil and gas industry. But one day, in early 2012, Varanasi saw his wife struggling to pour honey from its container. “And I thought, ‘We have a solution for that,’” Varanasi says.

The focus then became consumer packaging. Smith and Varanasi took the idea through several entrepreneurship classes — such as 6.933 (Entrepreneurship in Engineering: The Founder’s Journey) — and MIT’s Venture Mentoring Service and Innovation Teams, where student teams research the commercial potential of MIT technologies.

“I did pretty much every last thing you could do,” Smith says. “Because we have such a brilliant network here at MIT, I thought I should take advantage of it.”

That May [2012], Smith, Varanasi, and several MIT students entered LiquiGlide in the MIT $100K Entrepreneurship Competition, earning the Audience Choice Award — and the national spotlight. A video of ketchup sliding out of a LiquiGlide-coated bottle went viral. Numerous media outlets picked up the story, while hundreds of companies reached out to Varanasi to buy the coating. “My phone didn’t stop ringing, my website crashed for a month,” Varanasi says. “It just went crazy.”

That summer [2012], Smith and Varanasi took their startup idea to MIT’s Global Founders’ Skills Accelerator program, which introduced them to a robust network of local investors and helped them build a solid business plan. Soon after, they raised money from family and friends, and won $100,000 at the MassChallenge Entrepreneurship Competition.

When LiquiGlide Inc. launched in August 2012, clients were already knocking down the door. The startup chose a select number to pay for the development and testing of the coating for its products. Within a year, LiquiGlide was cash-flow positive, and had grown from three to 18 employees in its current Cambridge headquarters.

Looking back, Varanasi attributes much of LiquiGlide’s success to MIT’s innovation-based ecosystem, which promotes rapid prototyping for the marketplace through experimentation and collaboration. This ecosystem includes the Deshpande Center for Technological Innovation, the Martin Trust Center for MIT Entrepreneurship, the Venture Mentoring Service, and the Technology Licensing Office, among other initiatives. “Having a lab where we could think about … translating the technology to real-world applications, and having this ability to meet people, and bounce ideas … that whole MIT ecosystem was key,” Varanasi says.

Here’s the latest LiquiGlide video,


Credits:

Video: Melanie Gonick/MIT
Additional footage courtesy of LiquiGlide™
Music sampled from “Candlepower” by Chris Zabriskie
https://freemusicarchive.org/music/Ch…
http://creativecommons.org/licenses/b…

I had thought the EU (European Union) offered more roadblocks to marketing nanotechnology-enabled products used in food packaging than the US. If anyone knows why a US company would market its products in Europe first I would love to find out.

Solar-powered sensors to power the Internet of Things?

As a June 23, 2015 news item on Nanowerk notes, the ‘Internet of things’ will need lots and lots of power,

The latest buzz in the information technology industry regards “the Internet of things” — the idea that vehicles, appliances, civil-engineering structures, manufacturing equipment, and even livestock would have their own embedded sensors that report information directly to networked servers, aiding with maintenance and the coordination of tasks.

Realizing that vision, however, will require extremely low-power sensors that can run for months without battery changes — or, even better, that can extract energy from the environment to recharge.

Last week, at the Symposia on VLSI Technology and Circuits, MIT [Massachusetts Institute of Technology] researchers presented a new power converter chip that can harvest more than 80 percent of the energy trickling into it, even at the extremely low power levels characteristic of tiny solar cells. [emphasis mine] Previous experimental ultralow-power converters had efficiencies of only 40 or 50 percent.

A June 22, 2015 MIT news release (also on EurekAlert), which originated the news item, describes some additional capabilities,

Moreover, the researchers’ chip achieves those efficiency improvements while assuming additional responsibilities. Where its predecessors could use a solar cell to either charge a battery or directly power a device, this new chip can do both, and it can power the device directly from the battery.

All of those operations also share a single inductor — the chip’s main electrical component — which saves on circuit board space but increases the circuit complexity even further. Nonetheless, the chip’s power consumption remains low.

“We still want to have battery-charging capability, and we still want to provide a regulated output voltage,” says Dina Reda El-Damak, an MIT graduate student in electrical engineering and computer science and first author on the new paper. “We need to regulate the input to extract the maximum power, and we really want to do all these tasks with inductor sharing and see which operational mode is the best. And we want to do it without compromising the performance, at very limited input power levels — 10 nanowatts to 1 microwatt — for the Internet of things.”

The prototype chip was manufactured through the Taiwan Semiconductor Manufacturing Company’s University Shuttle Program.

The MIT news release goes on to describe chip specifics,

The circuit’s chief function is to regulate the voltages between the solar cell, the battery, and the device the cell is powering. If the battery operates for too long at a voltage that’s either too high or too low, for instance, its chemical reactants break down, and it loses the ability to hold a charge.

To control the current flow across their chip, El-Damak and her advisor, Anantha Chandrakasan, the Joseph F. and Nancy P. Keithley Professor in Electrical Engineering, use an inductor, which is a wire wound into a coil. When a current passes through an inductor, it generates a magnetic field, which in turn resists any change in the current.

Throwing switches in the inductor’s path causes it to alternately charge and discharge, so that the current flowing through it continuously ramps up and then drops back down to zero. Keeping a lid on the current improves the circuit’s efficiency, since the rate at which it dissipates energy as heat is proportional to the square of the current.
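
Two textbook relations make that reasoning explicit (standard circuit theory, not something specific to this paper): the inductor’s current ramps at a rate set by the voltage across it, and resistive loss grows with the square of the current,

v = L\,\frac{di}{dt} \qquad\text{and}\qquad P_{\text{loss}} = i_{\text{rms}}^{2}\,R

so chopping the current into small triangular ramps that return to zero keeps the r.m.s. current, and therefore the heat dissipated in the chip’s resistances, low.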

Once the current drops to zero, however, the switches in the inductor’s path need to be thrown immediately; otherwise, current could begin to flow through the circuit in the wrong direction, which would drastically diminish its efficiency. The complication is that the rate at which the current rises and falls depends on the voltage generated by the solar cell, which is highly variable. So the timing of the switch throws has to vary, too.

Electric hourglass

To control the switches’ timing, El-Damak and Chandrakasan use an electrical component called a capacitor, which can store electrical charge. The higher the current, the more rapidly the capacitor fills. When it’s full, the circuit stops charging the inductor.

The rate at which the current drops off, however, depends on the output voltage, whose regulation is the very purpose of the chip. Since that voltage is fixed, the variation in timing has to come from variation in capacitance. El-Damak and Chandrakasan thus equip their chip with a bank of capacitors of different sizes. As the current drops, it charges a subset of those capacitors, whose selection is determined by the solar cell’s voltage. Once again, when the capacitor fills, the switches in the inductor’s path are flipped.
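
Here is a minimal way to see why a bank of capacitor sizes does the job (my reading of the description; the symbols are illustrative, not from the paper). A capacitor C charged by a current i reaches a threshold voltage V_th after a time

t = \frac{C\,V_{\text{th}}}{i}

so for a fixed threshold, the fill time is inversely proportional to the current. Selecting a larger or smaller C from the bank rescales that fill time, letting the timing circuit track the inductor’s variable ramp rate, which is exactly the behaviour the paragraph above describes.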

“In this technology space, there’s usually a trend to lower efficiency as the power gets lower, because there’s a fixed amount of energy that’s consumed by doing the work,” says Brett Miwa, who leads a power conversion development project as a fellow at the chip manufacturer Maxim Integrated. “If you’re only coming in with a small amount, it’s hard to get most of it out, because you lose more as a percentage. [El-Damak’s] design is unusually efficient for how low a power level she’s at.”

“One of the things that’s most notable about it is that it’s really a fairly complete system,” he adds. “It’s really kind of a full system-on-a chip for power management. And that makes it a little more complicated, a little bit larger, and a little bit more comprehensive than some of the other designs that might be reported in the literature. So for her to still achieve these high-performance specs in a much more sophisticated system is also noteworthy.”

I wonder how close they are to commercializing this chip (see below),

The MIT researchers’ prototype for a chip measuring 3 millimeters by 3 millimeters. The magnified detail shows the chip’s main control circuitry, including the startup electronics; the controller that determines whether to charge the battery, power a device, or both; and the array of switches that control current flow to an external inductor coil. This active area measures just 2.2 millimeters by 1.1 millimeters. (click on image to enlarge)
Courtesy: MIT

I sing the body cyber: two projects funded by the US National Science Foundation

Points to anyone who recognized the reference to Walt Whitman’s poem, “I sing the body electric,” from his classic collection, Leaves of Grass (1867 edition; h/t Wikipedia entry). I wonder if the cyber-physical systems (CPS) work being funded by the US National Science Foundation (NSF) will occasion poetry too.

More practically, a May 15, 2015 news item on Nanowerk describes two cyber-physical systems (CPS) research projects newly funded by the NSF,

Today [May 12, 2015] the National Science Foundation (NSF) announced two, five-year, center-scale awards totaling $8.75 million to advance the state-of-the-art in medical and cyber-physical systems (CPS).

One project will develop “Cyberheart”–a platform for virtual, patient-specific human heart models and associated device therapies that can be used to improve and accelerate medical-device development and testing. The other project will combine teams of microrobots with synthetic cells to perform functions that may one day lead to tissue and organ regeneration.

CPS are engineered systems that are built from, and depend upon, the seamless integration of computation and physical components. Often called the “Internet of Things,” CPS enable capabilities that go beyond the embedded systems of today.

“NSF has been a leader in supporting research in cyber-physical systems, which has provided a foundation for putting the ‘smart’ in health, transportation, energy and infrastructure systems,” said Jim Kurose, head of Computer & Information Science & Engineering at NSF. “We look forward to the results of these two new awards, which paint a new and compelling vision for what’s possible for smart health.”

Cyber-physical systems have the potential to benefit many sectors of our society, including healthcare. While advances in sensors and wearable devices have the capacity to improve aspects of medical care, from disease prevention to emergency response, and synthetic biology and robotics hold the promise of regenerating and maintaining the body in radical new ways, little is known about how advances in CPS can integrate these technologies to improve health outcomes.

These new NSF-funded projects will investigate two very different ways that CPS can be used in the biological and medical realms.

A May 12, 2015 NSF news release (also on EurekAlert), which originated the news item, describes the two CPS projects,

Bio-CPS for engineering living cells

A team of leading computer scientists, roboticists and biologists from Boston University, the University of Pennsylvania and MIT has come together to develop a system that combines the capabilities of nano-scale robots with specially designed synthetic organisms. Together, they believe this hybrid “bio-CPS” will be capable of performing heretofore impossible functions, from microscopic assembly to cell sensing within the body.

“We bring together synthetic biology and micron-scale robotics to engineer the emergence of desired behaviors in populations of bacterial and mammalian cells,” said Calin Belta, a professor of mechanical engineering, systems engineering and bioinformatics at Boston University and principal investigator on the project. “This project will impact several application areas ranging from tissue engineering to drug development.”

The project builds on previous research by each team member in diverse disciplines and early proof-of-concept designs of bio-CPS. According to the team, the research is also driven by recent advances in the emerging field of synthetic biology, in particular the ability to rapidly incorporate new capabilities into simple cells. Researchers so far have not been able to control and coordinate the behavior of synthetic cells in isolation, but the introduction of microrobots that can be externally controlled may be transformative.

In this new project, the team will focus on bio-CPS with the ability to sense, transport and work together. As a demonstration of their idea, they will develop teams of synthetic cell/microrobot hybrids capable of constructing a complex, fabric-like surface.

Vijay Kumar (University of Pennsylvania), Ron Weiss (MIT), and Douglas Densmore (BU) are co-investigators of the project.

Medical-CPS and the ‘Cyberheart’

CPS such as wearable sensors and implantable devices are already being used to assess health, improve quality of life, provide cost-effective care and potentially speed up disease diagnosis and prevention. [emphasis mine]

Extending these efforts, researchers from seven leading universities and centers are working together to develop far more realistic cardiac and device models than currently exist. This so-called “Cyberheart” platform can be used to test and validate medical devices faster and at a far lower cost than existing methods. Cyberheart can also be used to design safe, patient-specific device therapies, thereby lowering the risk to the patient.

“Innovative ‘virtual’ design methodologies for implantable cardiac medical devices will speed device development and yield safer, more effective devices and device-based therapies than is currently possible,” said Scott Smolka, a professor of computer science at Stony Brook University and one of the principal investigators on the award.

The group’s approach combines patient-specific computational models of heart dynamics with advanced mathematical techniques for analyzing how these models interact with medical devices. The analytical techniques can be used to detect potential flaws in device behavior early on during the device-design phase, before animal and human trials begin. They also can be used in a clinical setting to optimize device settings on a patient-by-patient basis before devices are implanted.

“We believe that our coordinated, multi-disciplinary approach, which balances theoretical, experimental and practical concerns, will yield transformational results in medical-device design and foundations of cyber-physical system verification,” Smolka said.

The team will develop virtual device models that can be coupled with virtual heart models to realize a full virtual development platform, one that can be subjected to computational analysis and simulation techniques. Moreover, they are working with experimentalists who will study the behavior of virtual and actual devices on animals’ hearts.
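As a toy illustration of what coupling a virtual device to a virtual heart might look like (this is my own invented sketch, in no way the Cyberheart platform; every name and value in it is assumed):

# Toy model: a virtual pacemaker paces whenever the virtual heart's
# intrinsic beat is late, and a safety property is checked over many
# simulated beats. All parameters are invented for illustration.
import random

LRI = 1.0      # pacemaker's lower-rate interval in seconds (assumed)
MAX_GAP = 1.2  # safety property: no beat-to-beat gap may exceed this

def virtual_heart():
    """Crude stand-in for a patient-specific model: random intervals."""
    while True:
        yield random.uniform(0.4, 2.0)

def next_beat(intrinsic_interval):
    """Virtual device: pace at LRI if no intrinsic beat arrives first."""
    return min(intrinsic_interval, LRI)

heart = virtual_heart()
for _ in range(100_000):
    gap = next_beat(next(heart))
    assert gap <= MAX_GAP, "safety property violated: heart left unpaced"
print("property held over 100,000 simulated beats")

The real platform would replace the random-interval “heart” with patient-specific cardiac dynamics and the one-line pacing rule with an actual device model, and would use formal analysis rather than brute-force simulation, but the closed loop of model, device, and checked property is the same shape.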

Co-investigators on the project include Edmund Clarke (Carnegie Mellon University), Elizabeth Cherry (Rochester Institute of Technology), W. Rance Cleaveland (University of Maryland), Flavio Fenton (Georgia Tech), Rahul Mangharam (University of Pennsylvania), Arnab Ray (Fraunhofer Center for Experimental Software Engineering [Germany]) and James Glimm and Radu Grosu (Stony Brook University). Richard A. Gray of the U.S. Food and Drug Administration is another key contributor.

It is fascinating to observe how the terminology is shifting from implants such as pacemakers and deep brain stimulators to “CPS such as wearable sensors and implantable devices … .” A new category, CPS, has been created, one that conjoins medical devices with consumer sensing devices such as wearable fitness monitors. I imagine it’s an attempt to quell fears about injecting strange things into, or adding strange things to, your body: microrobots and nanorobots partially derived from synthetic biology research, which are “… capable of performing heretofore impossible functions, from microscopic assembly to cell sensing within the body.” The news release also sneaks in references to synthetic biology, an area of research where concerns have been expressed; here’s an excerpt from my March 19, 2013 post about a poll on synthetic biology concerns,

In our latest survey, conducted in January 2013, three-fourths of respondents say they have heard little or nothing about synthetic biology, a level consistent with that measured in 2010. While initial impressions about the science are largely undefined, these feelings do not necessarily become more positive as respondents learn more. The public has mixed reactions to specific synthetic biology applications, and almost one-third of respondents favor a ban “on synthetic biology research until we better understand its implications and risks,” while 61 percent think the science should move forward.

I imagine that for scientists, 61% in favour of more research is not particularly comforting given how easily and quickly public opinion can shift.