Tag Archives: US Defense Advanced Research Projects Agency

Congratulate China on the world’s first quantum communication network

China has some exciting news about the world’s first quantum network; it’s due to open in late August 2017, so you may want to have your congratulations in order for later this month.

An Aug. 4, 2017 news item on phys.org makes the announcement,

As malicious hackers find ever more sophisticated ways to launch attacks, China is about to launch the Jinan Project, the world’s first unhackable computer network, and a major milestone in the development of quantum technology.

Named after the eastern Chinese city where the technology was developed, the network is planned to be fully operational by the end of August 2017. Jinan is the hub of the Beijing-Shanghai quantum network due to its strategic location between the two principal Chinese metropolises.

“We plan to use the network for national defence, finance and other fields, and hope to spread it out as a pilot that if successful can be used across China and the whole world,” commented Zhou Fei, assistant director of the Jinan Institute of Quantum Technology, who was speaking to Britain’s Financial Times.

An Aug. 3, 2017 CORDIS (Community Research and Development Information Service [for the European Commission]) press release, which originated the news item, provides more detail about the technology,

By launching the network, China will become the first country worldwide to implement quantum technology for a real life, commercial end. It also highlights that China is a key global player in the rush to develop technologies based on quantum principles, with the EU and the United States also vying for world leadership in the field.

The network, known as a Quantum Key Distribution (QKD) network, is more secure than widely used electronic communication equivalents. Unlike a conventional telephone or internet cable, which can be tapped without the sender or recipient being aware, a QKD network alerts both users to any tampering with the system as soon as it occurs. This is because tampering immediately alters the information being relayed, with the disturbance being instantly recognisable. Once fully implemented, it will make it almost impossible for other governments to listen in on Chinese communications.
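A quick illustration of that tamper-evidence (my own toy model in Python, not anything the Jinan network actually runs): in a BB84-style exchange, an eavesdropper who intercepts and re-sends photons unavoidably measures some of them in the wrong basis, which shows up as a roughly 25 per cent error rate when sender and receiver compare a sample of their sifted key.

import random

def bb84_qber(n_bits=20000, eavesdrop=False, seed=1):
    """Toy BB84 run: returns the observed error rate in the sifted key."""
    rng = random.Random(seed)
    errors = sifted = 0
    for _ in range(n_bits):
        bit = rng.randint(0, 1)
        a_basis = rng.randint(0, 1)            # 0 = rectilinear, 1 = diagonal
        basis, value = a_basis, bit            # the photon "in flight"
        if eavesdrop:                          # intercept-and-resend attack
            e_basis = rng.randint(0, 1)
            if e_basis != basis:               # wrong basis -> random outcome
                value = rng.randint(0, 1)
            basis = e_basis                    # Eve re-sends in her own basis
        b_basis = rng.randint(0, 1)
        measured = value if b_basis == basis else rng.randint(0, 1)
        if b_basis == a_basis:                 # keep only matching-basis rounds
            sifted += 1
            errors += (measured != bit)
    return errors / sifted

print("error rate, quiet channel:", round(bb84_qber(), 3))
print("error rate, eavesdropper :", round(bb84_qber(eavesdrop=True), 3))

Running it gives an error rate near zero on an undisturbed channel and roughly 0.25 with the eavesdropper present, which is exactly the sort of disturbance the press release is describing.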

In the Jinan network, some 200 users from China’s military, government, finance and electricity sectors will be able to send messages safe in the knowledge that only they are reading them. It will be the world’s longest land-based quantum communications network, stretching over 2 000 km.

Also speaking to the ‘Financial Times’, quantum physicist Tim Byrnes, based at New York University’s (NYU) Shanghai campus commented: ‘China has achieved staggering things with quantum research… It’s amazing how quickly China has gotten on with quantum research projects that would be too expensive to do elsewhere… quantum communication has been taken up by the commercial sector much more in China compared to other countries, which means it is likely to pull ahead of Europe and US in the field of quantum communication.’

However, Europe is also determined to be at the forefront of the ‘quantum revolution’, which promises to be one of the major defining technological phenomena of the twenty-first century. The EU has invested EUR 550 million into quantum technologies and has provided policy support to researchers through the 2016 Quantum Manifesto.

Moreover, with China’s latest achievement (and a previous one already notched up from July 2017 when its quantum satellite – the world’s first – sent a message to Earth on a quantum communication channel), it looks like the race to be crowned the world’s foremost quantum power is well and truly underway…

Prior to this latest announcement, Chinese scientists had published work about quantum satellite communications, a development that makes their imminent terrestrial quantum network possible. Gabriel Popkin wrote about the quantum satellite in a June 15, 2017 article for Science magazine,

Quantum entanglement—physics at its strangest—has moved out of this world and into space. In a study that shows China’s growing mastery of both the quantum world and space science, a team of physicists reports that it sent eerily intertwined quantum particles from a satellite to ground stations separated by 1200 kilometers, smashing the previous world record. The result is a stepping stone to ultrasecure communication networks and, eventually, a space-based quantum internet.

“It’s a huge, major achievement,” says Thomas Jennewein, a physicist at the University of Waterloo in Canada. “They started with this bold idea and managed to do it.”

Entanglement involves putting objects in the peculiar limbo of quantum superposition, in which an object’s quantum properties occupy multiple states at once: like Schrödinger’s cat, dead and alive at the same time. Then those quantum states are shared among multiple objects. Physicists have entangled particles such as electrons and photons, as well as larger objects such as superconducting electric circuits.

Theoretically, even if entangled objects are separated, their precarious quantum states should remain linked until one of them is measured or disturbed. That measurement instantly determines the state of the other object, no matter how far away. The idea is so counterintuitive that Albert Einstein mocked it as “spooky action at a distance.”
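Here’s a tiny numerical sketch of what that looks like in practice (my own illustration, not from Popkin’s article): sampling measurement outcomes from a two-qubit Bell state shows that each side’s result is individually random, yet the two sides always agree.

import numpy as np

rng = np.random.default_rng(0)

# Two-qubit Bell state (|00> + |11>) / sqrt(2), written in the basis 00, 01, 10, 11
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
probs = np.abs(bell) ** 2          # joint outcome probabilities
probs /= probs.sum()               # guard against floating-point rounding

outcomes = rng.choice(4, size=10_000, p=probs)
a = outcomes // 2                  # first qubit's result (0 or 1)
b = outcomes % 2                   # second qubit's result

print("fraction of 1s on the first qubit:", a.mean())         # ~0.5, locally random
print("fraction of matching results     :", (a == b).mean())  # 1.0, perfectly correlated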

Starting in the 1970s, however, physicists began testing the effect over increasing distances. In 2015, the most sophisticated of these tests, which involved measuring entangled electrons 1.3 kilometers apart, showed once again that spooky action is real.

Beyond the fundamental result, such experiments also point to the possibility of hack-proof communications. Long strings of entangled photons, shared between distant locations, can be “quantum keys” that secure communications. Anyone trying to eavesdrop on a quantum-encrypted message would disrupt the shared key, alerting everyone to a compromised channel.

But entangled photons degrade rapidly as they pass through the air or optical fibers. So far, the farthest anyone has sent a quantum key is a few hundred kilometers. “Quantum repeaters” that rebroadcast quantum information could extend a network’s reach, but they aren’t yet mature. Many physicists have dreamed instead of using satellites to send quantum information through the near-vacuum of space. “Once you have satellites distributing your quantum signals throughout the globe, you’ve done it,” says Verónica Fernández Mármol, a physicist at the Spanish National Research Council in Madrid. …
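The distance limit Popkin mentions comes from exponential photon loss in optical fibre. A back-of-the-envelope calculation (assuming a typical telecom-fibre loss of 0.2 dB/km, my figure rather than one from the article) shows why a satellite link is so attractive:

# Rough illustration of why fibre-based quantum links stall at a few hundred km.
loss_db_per_km = 0.2               # assumed typical telecom-fibre loss
for km in (100, 300, 500, 1200):
    surviving = 10 ** (-loss_db_per_km * km / 10)
    print(f"{km:>5} km: about 1 photon in {1 / surviving:,.0f} arrives")

At a few hundred kilometres the odds are merely terrible; at the satellite experiment’s 1,200 km they are hopeless for fibre, which is the whole point of going to space.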

Popkin goes on to detail the process behind the discovery in (mostly) easily accessible writing, accompanied by a video and a graphic.

Russell Brandom, writing for The Verge in a June 15, 2017 article about the Chinese quantum satellite, adds detail about previous work and teams in other countries also working on the challenge (Note: Links have been removed),

Quantum networking has already shown promise in terrestrial fiber networks, where specialized routing equipment can perform the same trick over conventional fiber-optic cable. The first such network was a DARPA-funded connection established in 2003 between Harvard, Boston University, and a private lab. In the years since, a number of companies have tried to build more ambitious connections. The Swiss company ID Quantique has mapped out a quantum network that would connect many of North America’s largest data centers; in China, a separate team is working on a 2,000-kilometer quantum link between Beijing and Shanghai, which would rely on fiber to span an even greater distance than the satellite link. Still, the nature of fiber places strict limits on how far a single photon can travel.

According to ID Quantique, a reliable satellite link could connect the existing fiber networks into a single globe-spanning quantum network. “This proves the feasibility of quantum communications from space,” ID Quantique CEO Gregoire Ribordy tells The Verge. “The vision is that you have regional quantum key distribution networks over fiber, which can connect to each other through the satellite link.”

China isn’t the only country working on bringing quantum networks to space. A collaboration between the UK’s University of Strathclyde and the National University of Singapore is hoping to produce the same entanglement in cheap, readymade satellites called Cubesats. A Canadian team is also developing a method of producing entangled photons on the ground before sending them into space.

I wonder if there’s going to be an invitational event for scientists around the world to celebrate the launch.

3-D integration of nanotechnologies on a single computer chip

Researchers have developed a new technique that integrates nanomaterials into a 3D computer chip capable of handling today’s huge volumes of data. Weirdly, the first two paragraphs of a July 5, 2017 news item on Nanowerk do not convey the main point (Note: A link has been removed),

As embedded intelligence is finding its way into ever more areas of our lives, fields ranging from autonomous driving to personalized medicine are generating huge amounts of data. But just as the flood of data is reaching massive proportions, the ability of computer chips to process it into useful information is stalling.

Now, researchers at Stanford University and MIT have built a new chip to overcome this hurdle. The results are published today in the journal Nature (“Three-dimensional integration of nanotechnologies for computing and data storage on a single chip”), by lead author Max Shulaker, an assistant professor of electrical engineering and computer science at MIT. Shulaker began the work as a PhD student alongside H.-S. Philip Wong and his advisor Subhasish Mitra, professors of electrical engineering and computer science at Stanford. The team also included professors Roger Howe and Krishna Saraswat, also from Stanford.

This image helps to convey the main points,

Instead of relying on silicon-based devices, a new chip uses carbon nanotubes and resistive random-access memory (RRAM) cells. The two are built vertically over one another, making a new, dense 3-D computer architecture with interleaving layers of logic and memory. Courtesy MIT

As I have been quite impressed with their science writing, it was a bit surprising to find that the Massachusetts Institute of Technology (MIT) had issued this news release (news item) without following the ‘rules’, i.e., covering as many of the journalistic questions (Who, What, Where, When, Why, and, sometimes, How) as possible in the first sentence/paragraph. It is written more in the style of a magazine article, so the details take a while to emerge. From the July 5, 2017 MIT news release, which originated the news item,

Computers today comprise different chips cobbled together. There is a chip for computing and a separate chip for data storage, and the connections between the two are limited. As applications analyze increasingly massive volumes of data, the limited rate at which data can be moved between different chips is creating a critical communication “bottleneck.” And with limited real estate on the chip, there is not enough room to place them side-by-side, even as they have been miniaturized (a phenomenon known as Moore’s Law).

To make matters worse, the underlying devices, transistors made from silicon, are no longer improving at the historic rate that they have for decades.

The new prototype chip is a radical change from today’s chips. It uses multiple nanotechnologies, together with a new computer architecture, to reverse both of these trends.

Instead of relying on silicon-based devices, the chip uses carbon nanotubes, which are sheets of 2-D graphene formed into nanocylinders, and resistive random-access memory (RRAM) cells, a type of nonvolatile memory that operates by changing the resistance of a solid dielectric material. The researchers integrated over 1 million RRAM cells and 2 million carbon nanotube field-effect transistors, making the most complex nanoelectronic system ever made with emerging nanotechnologies.

The RRAM and carbon nanotubes are built vertically over one another, making a new, dense 3-D computer architecture with interleaving layers of logic and memory. By inserting ultradense wires between these layers, this 3-D architecture promises to address the communication bottleneck.
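To get a feel for why those ultradense vertical wires matter, here’s a back-of-the-envelope comparison of my own (the numbers are illustrative assumptions, not figures from the Nature paper): a conventional 2-D chip has to funnel its memory traffic through connections along the die’s edge, while the 3-D stack can drop a vertical wire almost anywhere over its area.

# Illustrative only: edge-limited connections vs. area-distributed vertical wires.
die_mm = 10                  # assumed 10 mm x 10 mm die
edge_pad_pitch_um = 50       # assumed spacing of connections along the edge
via_pitch_um = 1             # assumed spacing of the ultradense vertical wires

edge_links = 4 * die_mm * 1000 // edge_pad_pitch_um
area_links = (die_mm * 1000 // via_pitch_um) ** 2
print(f"edge-limited links : {edge_links:,}")
print(f"area-limited links : {area_links:,}")
print(f"ratio              : {area_links / edge_links:,.0f}x")

With these made-up but plausible pitches, the vertical approach offers on the order of a hundred thousand times more independent connections between logic and memory.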

However, such an architecture is not possible with existing silicon-based technology, according to the paper’s lead author, Max Shulaker, who is a core member of MIT’s Microsystems Technology Laboratories. “Circuits today are 2-D, since building conventional silicon transistors involves extremely high temperatures of over 1,000 degrees Celsius,” says Shulaker. “If you then build a second layer of silicon circuits on top, that high temperature will damage the bottom layer of circuits.”

The key in this work is that carbon nanotube circuits and RRAM memory can be fabricated at much lower temperatures, below 200 C. “This means they can be built up in layers without harming the circuits beneath,” Shulaker says.

This provides several simultaneous benefits for future computing systems. “The devices are better: Logic made from carbon nanotubes can be an order of magnitude more energy-efficient compared to today’s logic made from silicon, and similarly, RRAM can be denser, faster, and more energy-efficient compared to DRAM,” Wong says, referring to a conventional memory known as dynamic random-access memory.

“In addition to improved devices, 3-D integration can address another key consideration in systems: the interconnects within and between chips,” Saraswat adds.

“The new 3-D computer architecture provides dense and fine-grained integration of computing and data storage, drastically overcoming the bottleneck from moving data between chips,” Mitra says. “As a result, the chip is able to store massive amounts of data and perform on-chip processing to transform a data deluge into useful information.”

To demonstrate the potential of the technology, the researchers took advantage of the ability of carbon nanotubes to also act as sensors. On the top layer of the chip they placed over 1 million carbon nanotube-based sensors, which they used to detect and classify ambient gases.

Due to the layering of sensing, data storage, and computing, the chip was able to measure each of the sensors in parallel, and then write directly into its memory, generating huge bandwidth, Shulaker says.
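As a purely illustrative toy (my own sketch with invented numbers, not anything from the paper), reading a million-channel sensor array as a single vector and matching it against known gas signatures captures the flavour of that measure-in-parallel, classify, then store loop:

import numpy as np

rng = np.random.default_rng(42)
N_SENSORS, N_GASES = 1_000_000, 4      # "over 1 million" sensors per the release; 4 gases is invented

# Pretend each gas produces a characteristic response pattern across the array
signatures = rng.normal(size=(N_GASES, N_SENSORS)).astype(np.float32)

def read_and_classify(true_gas):
    """Read every sensor in one vectorised sweep and pick the closest signature."""
    reading = signatures[true_gas] + rng.normal(scale=2.0, size=N_SENSORS).astype(np.float32)
    scores = signatures @ reading       # correlate the reading against each gas
    return int(np.argmax(scores))       # the result would then be written to memory

print("classified as gas", read_and_classify(true_gas=2))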

Three-dimensional integration is the most promising approach to continue the technology scaling path set forth by Moore’s Law, allowing an increasing number of devices to be integrated per unit volume, according to Jan Rabaey, a professor of electrical engineering and computer science at the University of California at Berkeley, who was not involved in the research.

“It leads to a fundamentally different perspective on computing architectures, enabling an intimate interweaving of memory and logic,” Rabaey says. “These structures may be particularly suited for alternative learning-based computational paradigms such as brain-inspired systems and deep neural nets, and the approach presented by the authors is definitely a great first step in that direction.”

“One big advantage of our demonstration is that it is compatible with today’s silicon infrastructure, both in terms of fabrication and design,” says Howe.

“The fact that this strategy is both CMOS [complementary metal-oxide-semiconductor] compatible and viable for a variety of applications suggests that it is a significant step in the continued advancement of Moore’s Law,” says Ken Hansen, president and CEO of the Semiconductor Research Corporation, which supported the research. “To sustain the promise of Moore’s Law economics, innovative heterogeneous approaches are required as dimensional scaling is no longer sufficient. This pioneering work embodies that philosophy.”

The team is working to improve the underlying nanotechnologies, while exploring the new 3-D computer architecture. For Shulaker, the next step is working with Massachusetts-based semiconductor company Analog Devices to develop new versions of the system that take advantage of its ability to carry out sensing and data processing on the same chip.

So, for example, the devices could be used to detect signs of disease by sensing particular compounds in a patient’s breath, says Shulaker.

“The technology could not only improve traditional computing, but it also opens up a whole new range of applications that we can target,” he says. “My students are now investigating how we can produce chips that do more than just computing.”

“This demonstration of the 3-D integration of sensors, memory, and logic is an exceptionally innovative development that leverages current CMOS technology with the new capabilities of carbon nanotube field–effect transistors,” says Sam Fuller, CTO emeritus of Analog Devices, who was not involved in the research. “This has the potential to be the platform for many revolutionary applications in the future.”

This work was funded by the Defense Advanced Research Projects Agency [DARPA], the National Science Foundation, Semiconductor Research Corporation, STARnet SONIC, and member companies of the Stanford SystemX Alliance.

Here’s a link to and a citation for the paper,

Three-dimensional integration of nanotechnologies for computing and data storage on a single chip by Max M. Shulaker, Gage Hills, Rebecca S. Park, Roger T. Howe, Krishna Saraswat, H.-S. Philip Wong, & Subhasish Mitra. Nature 547, 74–78 (06 July 2017) doi:10.1038/nature22994 Published online 05 July 2017

This paper is behind a paywall.

IBM to build brain-inspired AI supercomputing system equal to 64 million neurons for US Air Force

This is the second IBM computer announcement I’ve stumbled onto within the last four weeks or so, which seems like a veritable deluge given that the last time I wrote about IBM’s computing efforts was in an Oct. 8, 2015 posting about carbon nanotubes. I believe that, until now, that was my most recent posting about IBM and computers.

Moving on to the news, here’s more from a June 23, 2017 news item on Nanotechnology Now,

IBM (NYSE: IBM) and the U.S. Air Force Research Laboratory (AFRL) today [June 23, 2017] announced they are collaborating on a first-of-a-kind brain-inspired supercomputing system powered by a 64-chip array of the IBM TrueNorth Neurosynaptic System. The scalable platform IBM is building for AFRL will feature an end-to-end software ecosystem designed to enable deep neural-network learning and information discovery. The system’s advanced pattern recognition and sensory processing power will be the equivalent of 64 million neurons and 16 billion synapses, while the processor component will consume the energy equivalent of a dim light bulb – a mere 10 watts to power.

A June 23, 2017 IBM news release, which originated the news item, describes the proposed collaboration, which is based on IBM’s TrueNorth brain-inspired chip architecture (see my Aug. 8, 2014 posting for more about TrueNorth),

IBM researchers believe the brain-inspired, neural network design of TrueNorth will be far more efficient for pattern recognition and integrated sensory processing than systems powered by conventional chips. AFRL is investigating applications of the system in embedded, mobile, autonomous settings where, today, size, weight and power (SWaP) are key limiting factors.

The IBM TrueNorth Neurosynaptic System can efficiently convert data (such as images, video, audio and text) from multiple, distributed sensors into symbols in real time. AFRL will combine this “right-brain” perception capability of the system with the “left-brain” symbol processing capabilities of conventional computer systems. The large scale of the system will enable both “data parallelism” where multiple data sources can be run in parallel against the same neural network and “model parallelism” where independent neural networks form an ensemble that can be run in parallel on the same data.

“AFRL was the earliest adopter of TrueNorth for converting data into decisions,” said Daniel S. Goddard, director, information directorate, U.S. Air Force Research Lab. “The new neurosynaptic system will be used to enable new computing capabilities important to AFRL’s mission to explore, prototype and demonstrate high-impact, game-changing technologies that enable the Air Force and the nation to maintain its superior technical advantage.”

“The evolution of the IBM TrueNorth Neurosynaptic System is a solid proof point in our quest to lead the industry in AI hardware innovation,” said Dharmendra S. Modha, IBM Fellow, chief scientist, brain-inspired computing, IBM Research – Almaden. “Over the last six years, IBM has expanded the number of neurons per system from 256 to more than 64 million – an 800 percent annual increase over six years.”

The system fits in a 4U-high (7”) space in a standard server rack, and eight such systems will enable the unprecedented scale of 512 million neurons per rack. A single processor in the system consists of 5.4 billion transistors organized into 4,096 neural cores creating an array of 1 million digital neurons that communicate with one another via 256 million electrical synapses. For the CIFAR-100 dataset, TrueNorth achieves near state-of-the-art accuracy, while running at >1,500 frames/s and using 200 mW (effectively >7,000 frames/s per Watt) – orders of magnitude lower speed and energy than a conventional computer running inference on the same neural network.
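A little arithmetic on the figures quoted above (nothing new, just unit checks):

frames_per_s = 1500                  # CIFAR-100 throughput quoted in the release
power_w = 0.200                      # 200 mW
print("frames per second per watt:", frames_per_s / power_w)   # ~7,500, matching ">7,000"

neurons_per_chip = 1_000_000
chips_per_system = 64
systems_per_rack = 8
print("neurons per system:", neurons_per_chip * chips_per_system)                     # 64 million
print("neurons per rack  :", neurons_per_chip * chips_per_system * systems_per_rack)  # 512 million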

The IBM TrueNorth Neurosynaptic System was originally developed under the auspices of Defense Advanced Research Projects Agency’s (DARPA) Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) program in collaboration with Cornell University. In 2016, the TrueNorth Team received the inaugural Misha Mahowald Prize for Neuromorphic Engineering and TrueNorth was accepted into the Computer History Museum.  Research with TrueNorth is currently being performed by more than 40 universities, government labs, and industrial partners on five continents.

There is an IBM video accompanying this news release, which seems more promotional than informational,

The IBM scientist featured in the video has a Dec. 19, 2016 posting on an IBM research blog which provides context for this collaboration with AFRL,

2016 was a big year for brain-inspired computing. My team and I proved in our paper “Convolutional networks for fast, energy-efficient neuromorphic computing” that the value of this breakthrough is that it can perform neural network inference at unprecedented ultra-low energy consumption. Simply stated, our TrueNorth chip’s non-von Neumann architecture mimics the brain’s neural architecture — giving it unprecedented efficiency and scalability over today’s computers.

The brain-inspired TrueNorth processor [is] a 70 mW reconfigurable silicon chip with 1 million neurons, 256 million synapses, and 4096 parallel and distributed neural cores. For systems, we present a scale-out system loosely coupling 16 single-chip boards and a scale-up system tightly integrating 16 chips in a 4×4 configuration by exploiting TrueNorth’s native tiling.

For the scale-up systems we summarize our approach to physical placement of neural network, to reduce intra- and inter-chip network traffic. The ecosystem is in use at over 30 universities and government / corporate labs. Our platform is a substrate for a spectrum of applications from mobile and embedded computing to cloud and supercomputers.
TrueNorth Ecosystem for Brain-Inspired Computing: Scalable Systems, Software, and Applications

TrueNorth, once loaded with a neural network model, can be used in real-time as a sensory streaming inference engine, performing rapid and accurate classifications while using minimal energy. TrueNorth’s 1 million neurons consume only 70 mW, which is like having a neurosynaptic supercomputer the size of a postage stamp that can run on a smartphone battery for a week.
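That ‘smartphone battery for a week’ line checks out roughly, if you assume a nominal 10 Wh phone battery (my assumption, not IBM’s):

chip_power_w = 0.070                 # 70 mW for 1 million neurons, per the post
battery_wh = 10.0                    # assumed nominal smartphone battery capacity
hours = battery_wh / chip_power_w
print(f"{hours:.0f} hours, or about {hours / 24:.1f} days")   # ~143 hours, roughly 6 days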

Recently, in collaboration with Lawrence Livermore National Laboratory, U.S. Air Force Research Laboratory, and U.S. Army Research Laboratory, we published our fifth paper at IEEE’s prestigious Supercomputing 2016 conference that summarizes the results of the team’s 12.5-year journey (see the associated graphic) to unlock this value proposition. [keep scrolling for the graphic]

Applying the mind of a chip

Three of our partners, U.S. Army Research Lab, U.S. Air Force Research Lab and Lawrence Livermore National Lab, contributed sections to the Supercomputing paper each showcasing a different TrueNorth system, as summarized by my colleagues Jun Sawada, Brian Taba, Pallab Datta, and Ben Shaw:

U.S. Army Research Lab (ARL) prototyped a computational offloading scheme to illustrate how TrueNorth’s low power profile enables computation at the point of data collection. Using the single-chip NS1e board and an Android tablet, ARL researchers created a demonstration system that allows visitors to their lab to hand write arithmetic expressions on the tablet, with handwriting streamed to the NS1e for character recognition, and recognized characters sent back to the tablet for arithmetic calculation.

Of course, the point here is not to make a handwriting calculator, it is to show how TrueNorth’s low power and real time pattern recognition might be deployed at the point of data collection to reduce latency, complexity and transmission bandwidth, as well as back-end data storage requirements in distributed systems.

U.S. Air Force Research Lab (AFRL) contributed another prototype application utilizing a TrueNorth scale-out system to perform a data-parallel text extraction and recognition task. In this application, an image of a document is segmented into individual characters that are streamed to AFRL’s NS1e16 TrueNorth system for parallel character recognition. Classification results are then sent to an inference-based natural language model to reconstruct words and sentences. This system can process 16,000 characters per second! AFRL plans to implement the word and sentence inference algorithms on TrueNorth, as well.

Lawrence Livermore National Lab (LLNL) has a 16-chip NS16e scale-up system to explore the potential of post-von Neumann computation through larger neural models and more complex algorithms, enabled by the native tiling characteristics of the TrueNorth chip. For the Supercomputing paper, they contributed a single-chip application performing in-situ process monitoring in an additive manufacturing process. LLNL trained a TrueNorth network to recognize seven classes related to track weld quality in welds produced by a selective laser melting machine. Real-time weld quality determination allows for closed-loop process improvement and immediate rejection of defective parts. This is one of several applications LLNL is developing to showcase TrueNorth as a scalable platform for low-power, real-time inference.

[downloaded from https://www.ibm.com/blogs/research/2016/12/the-brains-architecture-efficiency-on-a-chip/] Courtesy: IBM

I gather this 2017 announcement is the latest milestone on the TrueNorth journey.

The Imagineers of War: The Untold Story of DARPA, the Pentagon Agency That Changed the World on March 21, 2017 at the Woodrow Wilson International Center for Scholars

I received a March 17, 2017 Woodrow Wilson International Center for Scholars notice (via email) about this upcoming event,

The Imagineers of War: The Untold Story of DARPA [Defense Advanced Research Projects Agency], the Pentagon Agency That Changed the World

There will be a webcast of this event

In The Imagineers of War, Weinberger gives us a definitive history of the agency that has quietly shaped war and technology for nearly 60 years. Founded in 1958 in response to the launch of Sputnik, DARPA’s original mission was to create “the unimagined weapons of the future.” Over the decades, DARPA has been responsible for countless inventions and technologies that extend well beyond military technology.

Weinberger has interviewed more than one hundred former Pentagon officials and scientists involved in DARPA’s projects—many of whom have never spoken publicly about their work with the agency—and pored over countless declassified records from archives around the country, documents obtained under the Freedom of Information Act, and exclusive materials provided by sources. The Imagineers of War is a compelling and groundbreaking history in which science, technology, and politics collide.

Speakers


  • Sharon Weinberger

    Global Fellow
    Author, Imagineers of War, National Security Editor at The Intercept and former Wilson Center Fellow

  • Richard Whittle

    Global Fellow
    Author, Predator: The Secret Origins of the Drone Revolution and Wilson Center Global Fellow

The logistics:

6th Floor, Woodrow Wilson Center

I first heard about DARPA in reference to the internet. A developer I was working with noted that ARPA (DARPA’s predecessor agency) was instrumental in the development of the internet.

You can register for the event here. Should you be interested in the webcast, you can check this page.

As a point of interest, the Wilson Center (also known as the Woodrow Wilson International Center for Scholars) is one of the independent agencies slated to be defunded in the 2017 US budget as proposed by President Donald Trump according to a March 16, 2017 article by Elaine Godfrey for The Atlantic.

Metamaterial could supply air conditioning with zero energy consumption

This is exciting, provided they can scale up the metamaterial for industrial use. A Feb. 9, 2017 news item on Nanowerk announces a new metamaterial from the University of Colorado at Boulder that could change air conditioning (Note: A link has been removed),

A team of University of Colorado Boulder engineers has developed a scalable manufactured metamaterial — an engineered material with extraordinary properties not found in nature — to act as a kind of air conditioning system for structures. It has the ability to cool objects even under direct sunlight with zero energy and water consumption.

When applied to a surface, the metamaterial film cools the object underneath by efficiently reflecting incoming solar energy back into space while simultaneously allowing the surface to shed its own heat in the form of infrared thermal radiation.

The new material, which is described today in the journal Science (“Scalable-manufactured randomized glass-polymer hybrid metamaterial for daytime radiative cooling”), could provide an eco-friendly means of supplementary cooling for thermoelectric power plants, which currently require large amounts of water and electricity to maintain the operating temperatures of their machinery.

A Feb. 9, 2017 University of Colorado at Boulder news release (also on EurekAlert), which originated the news item, expands on the theme (Note: Links have been removed),

The researchers’ glass-polymer hybrid material measures just 50 micrometers thick — slightly thicker than the aluminum foil found in a kitchen — and can be manufactured economically on rolls, making it a potentially viable large-scale technology for both residential and commercial applications.

“We feel that this low-cost manufacturing process will be transformative for real-world applications of this radiative cooling technology,” said Xiaobo Yin, co-director of the research and an assistant professor who holds dual appointments in CU Boulder’s Department of Mechanical Engineering and the Materials Science and Engineering Program. Yin received DARPA’s [US Defense Advanced Research Projects Agency] Young Faculty Award in 2015.

The material takes advantage of passive radiative cooling, the process by which objects naturally shed heat in the form of infrared radiation, without consuming energy. Thermal radiation provides some natural nighttime cooling and is used for residential cooling in some areas, but daytime cooling has historically been more of a challenge. For a structure exposed to sunlight, even a small amount of directly-absorbed solar energy is enough to negate passive radiation.

The challenge for the CU Boulder researchers, then, was to create a material that could provide a one-two punch: reflect any incoming solar rays back into the atmosphere while still providing a means of escape for infrared radiation. To solve this, the researchers embedded visibly-scattering but infrared-radiant glass microspheres into a polymer film. They then added a thin silver coating underneath in order to achieve maximum spectral reflectance.

“Both the glass-polymer metamaterial formation and the silver coating are manufactured at scale on roll-to-roll processes,” added Ronggui Yang, also a professor of mechanical engineering and a Fellow of the American Society of Mechanical Engineers.

“Just 10 to 20 square meters of this material on the rooftop could nicely cool down a single-family house in summer,” said Gang Tan, an associate professor in the University of Wyoming’s Department of Civil and Architectural Engineering and a co-author of the paper.
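A rough energy balance shows where an estimate like that can come from. The emissivity, sky temperature and solar reflectance below are generic assumptions of mine, not the measured values reported in the Science paper:

# Net cooling = infrared power radiated to the sky minus absorbed sunlight.
SIGMA = 5.67e-8                    # Stefan-Boltzmann constant, W/(m^2 K^4)
T_surface, T_sky = 300.0, 270.0    # kelvin; effective clear-sky temperature assumed
emissivity = 0.93                  # assumed infrared emissivity of the film
solar_in = 1000.0                  # W/m^2 of incoming sunlight (typical clear noon)
reflectance = 0.96                 # assumed solar reflectance of the film

emitted = emissivity * SIGMA * (T_surface**4 - T_sky**4)
absorbed = (1 - reflectance) * solar_in
net_cooling = emitted - absorbed
print(f"net cooling ~ {net_cooling:.0f} W per square metre")
print(f"10-20 m^2 of film ~ {10 * net_cooling / 1000:.1f}-{20 * net_cooling / 1000:.1f} kW")

With these numbers the film removes on the order of 100 watts per square metre, so 10 to 20 square metres corresponds to one or two kilowatts of cooling, roughly the output of a small air-conditioning unit.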

In addition to being useful for cooling of buildings and power plants, the material could also help improve the efficiency and lifetime of solar panels. In direct sunlight, panels can overheat to temperatures that hamper their ability to convert solar rays into electricity.

“Just by applying this material to the surface of a solar panel, we can cool the panel and recover an additional one to two percent of solar efficiency,” said Yin. “That makes a big difference at scale.”

The engineers have applied for a patent for the technology and are working with CU Boulder’s Technology Transfer Office to explore potential commercial applications. They plan to create a 200-square-meter “cooling farm” prototype in Boulder in 2017.

The invention is the result of a $3 million grant awarded in 2015 to Yang, Yin and Tan by the Energy Department’s Advanced Research Projects Agency-Energy (ARPA-E).

“The key advantage of this technology is that it works 24/7 with no electricity or water usage,” said Yang. “We’re excited about the opportunity to explore potential uses in the power industry, aerospace, agriculture and more.”

Here’s a link to and a citation for the paper,

Scalable-manufactured randomized glass-polymer hybrid metamaterial for daytime radiative cooling by Yao Zhai, Yaoguang Ma, Sabrina N. David, Dongliang Zhao, Runnan Lou, Gang Tan, Ronggui Yang, Xiaobo Yin. Science  09 Feb 2017: DOI: 10.1126/science.aai7899

This paper is behind a paywall.

Members of the research team show off the metamaterial (?) Courtesy: University of Colorado at Boulder

I added the caption to this image, which was on the University of Colorado at Boulder’s home page where it accompanied the news release headline on the rotating banner.

Powering up your graphene implants so you don’t get fried in the process

A Sept. 23, 2016 news item on phys.org describes a way of making graphene-based medical implants safer,

In the future, our health may be monitored and maintained by tiny sensors and drug dispensers, deployed within the body and made from graphene—one of the strongest, lightest materials in the world. Graphene is composed of a single sheet of carbon atoms, linked together like razor-thin chicken wire, and its properties may be tuned in countless ways, making it a versatile material for tiny, next-generation implants.

But graphene is incredibly stiff, whereas biological tissue is soft. Because of this, any power applied to operate a graphene implant could precipitously heat up and fry surrounding cells.

Now, engineers from MIT [Massachusetts Institute of Technology] and Tsinghua University in Beijing have precisely simulated how electrical power may generate heat between a single layer of graphene and a simple cell membrane. While direct contact between the two layers inevitably overheats and kills the cell, the researchers found they could prevent this effect with a very thin, in-between layer of water.

A Sept. 23, 2016 MIT news release by Emily Chu, which originated the news item, provides more technical details,

By tuning the thickness of this intermediate water layer, the researchers could carefully control the amount of heat transferred between graphene and biological tissue. They also identified the critical power to apply to the graphene layer, without frying the cell membrane. …

Co-author Zhao Qin, a research scientist in MIT’s Department of Civil and Environmental Engineering (CEE), says the team’s simulations may help guide the development of graphene implants and their optimal power requirements.

“We’ve provided a lot of insight, like what’s the critical power we can accept that will not fry the cell,” Qin says. “But sometimes we might want to intentionally increase the temperature, because for some biomedical applications, we want to kill cells like cancer cells. This work can also be used as guidance [for those efforts.]”

Sandwich model

Typically, heat travels between two materials via vibrations in each material’s atoms. These atoms are always vibrating, at frequencies that depend on the properties of their materials. As a surface heats up, its atoms vibrate even more, causing collisions with other atoms and transferring heat in the process.

The researchers sought to accurately characterize the way heat travels, at the level of individual atoms, between graphene and biological tissue. To do this, they considered the simplest interface, comprising a small, 500-nanometer-square sheet of graphene and a simple cell membrane, separated by a thin layer of water.

“In the body, water is everywhere, and the outer surface of membranes will always like to interact with water, so you cannot totally remove it,” Qin says. “So we came up with a sandwich model for graphene, water, and membrane, that is a crystal clear system for seeing the thermal conductance between these two materials.”

Qin’s colleagues at Tsinghua University had previously developed a model to precisely simulate the interactions between atoms in graphene and water, using density functional theory — a computational modeling technique that considers the structure of an atom’s electrons in determining how that atom will interact with other atoms.

However, to apply this modeling technique to the group’s sandwich model, which comprised about half a million atoms, would have required an incredible amount of computational power. Instead, Qin and his colleagues used classical molecular dynamics — a mathematical technique based on a “force field” potential function, or a simplified version of the interactions between atoms — that enabled them to efficiently calculate interactions within larger atomic systems.
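For readers curious what a ‘force field’ potential function actually looks like, here is a minimal sketch using the textbook Lennard-Jones pair potential with generic argon-like parameters. The simulations in the paper use far more elaborate, purpose-built force fields, so this is purely illustrative:

# A force field replaces electronic-structure calculations with a simple analytic
# energy for each pair of atoms. Lennard-Jones is the classic example.
EPSILON = 0.0103   # eV, well depth (roughly argon-like; not the paper's parameters)
SIGMA = 3.4        # angstroms, separation where the pair energy crosses zero

def lj_energy_and_force(r):
    """Pair energy (eV) and force (eV/angstrom, positive = repulsive) at separation r."""
    sr6 = (SIGMA / r) ** 6
    energy = 4 * EPSILON * (sr6 ** 2 - sr6)
    force = 24 * EPSILON * (2 * sr6 ** 2 - sr6) / r    # -dE/dr
    return energy, force

for r in (3.0, 3.8, 5.0):
    e, f = lj_energy_and_force(r)
    print(f"r = {r:.1f} A: energy = {e:+.4f} eV, force = {f:+.4f} eV/A")

A molecular dynamics code evaluates millions of such pair terms each timestep and integrates Newton’s equations, which is why it scales to the half-million-atom sandwich where density functional theory cannot.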

The researchers then built an atom-level sandwich model of graphene, water, and a cell membrane, based on the group’s simplified force field. They carried out molecular dynamics simulations in which they changed the amount of power applied to the graphene, as well as the thickness of the intermediate water layer, and observed the amount of heat that carried over from the graphene to the cell membrane.

Watery crystals

Because the stiffness of graphene and biological tissue is so different, Qin and his colleagues expected that heat would conduct rather poorly between the two materials, building up steeply in the graphene before flooding and overheating the cell membrane. However, the intermediate water layer helped dissipate this heat, easing its conduction and preventing a temperature spike in the cell membrane.

Looking more closely at the interactions within this interface, the researchers made a surprising discovery: Within the sandwich model, the water, pressed against graphene’s chicken-wire pattern, morphed into a similar crystal-like structure.

“Graphene’s lattice acts like a template to guide the water to form network structures,” Qin explains. “The water acts more like a solid material and makes the stiffness transition from graphene and membrane less abrupt. We think this helps heat to conduct from graphene to the membrane side.”

The group varied the thickness of the intermediate water layer in simulations, and found that a 1-nanometer-wide layer of water helped to dissipate heat very effectively. In terms of the power applied to the system, they calculated that about a megawatt of power per meter squared, applied in tiny, microsecond bursts, was the most power that could be applied to the interface without overheating the cell membrane.
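Putting that threshold into everyday units (the 500-nanometre sheet and the one-megawatt-per-square-metre limit are from the article; the one-microsecond burst duration is simply my reading of ‘tiny, microsecond bursts’):

power_density = 1e6        # W per square metre, the simulated safe limit
sheet_side = 500e-9        # 500 nm square graphene sheet from the model
burst = 1e-6               # assumed one-microsecond burst

area = sheet_side ** 2
power = power_density * area
print(f"power on the sheet : {power * 1e6:.2f} microwatts")
print(f"energy per burst   : {power * burst * 1e12:.2f} picojoules")

In other words, the safe operating window for a device this size is about a quarter of a microwatt, delivered in sub-picojoule pulses.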

Qin says going forward, implant designers can use the group’s model and simulations to determine the critical power requirements for graphene devices of different dimensions. As for how they might practically control the thickness of the intermediate water layer, he says graphene’s surface may be modified to attract a particular number of water molecules.

“I think graphene provides a very promising candidate for implantable devices,” Qin says. “Our calculations can provide knowledge for designing these devices in the future, for specific applications, like sensors, monitors, and other biomedical applications.”

This research was supported in part by the MIT International Science and Technology Initiative (MISTI): MIT-China Seed Fund, the National Natural Science Foundation of China, DARPA [US Defense Advanced Research Projects Agency], the Department of Defense (DoD) Office of Naval Research, the DoD Multidisciplinary Research Initiatives program, the MIT Energy Initiative, and the National Science Foundation.

Here’s a link to and a citation for the paper,

Intercalated water layers promote thermal dissipation at bio–nano interfaces by Yanlei Wang, Zhao Qin, Markus J. Buehler, & Zhiping Xu. Nature Communications 7, Article number: 12854 doi:10.1038/ncomms12854 Published 23 September 2016

This paper is open access.

An examination of nanomanufacturing and nanofabrication

Michael Berger has written an Aug. 11, 2016 Nanowerk Spotlight review of a paper about nanomanufacturing (Note: A link has been removed),

… the path to greater benefits – whether economic, social, or environmental – from nanomanufactured goods and services is not yet clear. A recent review article in ACS Nano (“Nanomanufacturing: A Perspective”) by J. Alexander Liddle and Gregg M. Gallatin takes silicon integrated circuit manufacturing as a baseline in order to consider the factors involved in matching processes with products, examining the characteristics and potential of top-down and bottom-up processes, and their combination.

The authors also discuss how a careful assessment of the way in which function can be made to follow form can enable high-volume manufacturing of nanoscale structures with the desired useful, and exciting, properties.

Although often used interchangeably, it makes sense to distinguish between nanofabrication and nanomanufacturing using the criterion of economic viability, suggested by the connotations of industrial scale and profitability associated with the word ‘manufacturing’.

Here’s a link to and a citation for the paper Berger is reviewing,

Nanomanufacturing: A Perspective by J. Alexander Liddle and Gregg M. Gallatin. ACS Nano, 2016, 10 (3), pp 2995–3014 DOI: 10.1021/acsnano.5b03299 Publication Date (Web): February 10, 2016

Copyright This article not subject to U.S. Copyright. Published 2016 by the American Chemical Society

This paper is behind a paywall.

Luckily for those who’d like a little more information before purchase, Berger’s review provides additional insight into the study beyond what you’ll find in the abstract,

Nanomanufacturing, as the authors define it in their article, therefore, has the salient characteristic of being a source of money, while nanofabrication is often a sink.

To supply some background and indicate the scale of the nanomanufacturing challenge, the figure below shows the selling price ($·m⁻²) versus the annual production (m²) for a variety of nanoenabled or potentially nanoenabled products. The overall global market sizes are also indicated. It is interesting to note that the selling price spans 5 orders of magnitude, the production six, and the market size three. Although there is no strong correlation between the variables, …
Log-log plot of the approximate product selling price ($·m⁻²) versus global annual production (m²) for a variety of nanoenabled, or potentially nanoenabled, products. Approximate market sizes (2014) are shown next to each point. (Reprinted with permission by American Chemical Society)

I encourage anyone interested in nanomanufacturing to read Berger’s article in its entirety as there is more detail and there are more figures to illustrate the points being made. He ends his review with this,

“Perhaps the most exciting prospect is that of creating dynamical nanoscale systems that are capable of exhibiting much richer structures and functionality. Whether this is achieved by learning how to control and engineer biological systems directly, or by building systems based on the same principles, remains to be seen, but will undoubtedly be disruptive and quite probably revolutionary.”

I find the reference to biological systems quite interesting especially in light of the recent launch of DARPA’s (US Defense Advanced Research Projects Agency) Engineered Living Materials (ELM) program (see my Aug. 9, 2016 posting).

Do you have a proposal for living building materials?

DARPA (US Defense Advanced Research Projects Agency) has launched a program called Engineered Living Materials (ELM) and issued an invitation. From an Aug. 9, 2016 news item on Nanowerk,

The structural materials that are currently used to construct homes, buildings, and infrastructure are expensive to produce and transport, wear out due to age and damage, and have limited ability to respond to changes in their immediate surroundings. Living biological materials—bone, skin, bark, and coral, for example—have attributes that provide advantages over the non-living materials people build with, in that they can be grown where needed, self-repair when damaged, and respond to changes in their surroundings. The inclusion of living materials in human-built environments could offer significant benefits; however, today scientists and engineers are unable to easily control the size and shape of living materials in ways that would make them useful for construction.

DARPA is launching the Engineered Living Materials (ELM) program with a goal of creating a new class of materials that combines the structural properties of traditional building materials with attributes of living systems. Living materials represent a new opportunity to leverage engineered biology to solve existing problems associated with the construction and maintenance of built environments, and to create new capabilities to craft smart infrastructure that dynamically responds to its surroundings.

An Aug. 5, 2016 DARPA news release, which originated the news item, explains further (Note: A link has been removed),

“The vision of the ELM program is to grow materials on demand where they are needed,” said ELM program manager Justin Gallivan. “Imagine that instead of shipping finished materials, we can ship precursors and rapidly grow them on site using local resources. And, since the materials will be alive, they will be able to respond to changes in their environment and heal themselves in response to damage.”

Grown materials are not entirely new, but their current manifestations differ substantially from the materials Gallivan envisions. For instance, biologically sourced structural materials can already be grown into specified sizes and shapes from inexpensive feedstocks; packing materials derived from fungal mycelium and building blocks made from bacteria and sand are two modern examples. And, of course, wood has been used for ages. However, these products are rendered inert during the manufacturing process, so they exhibit few of their components’ original biological advantages. Scientists are making progress with three-dimensional printing of living tissues and organs, using scaffolding materials that sustain the long-term viability of the living cells. These cells are derived from existing natural tissues, however, and are not engineered to perform synthetic functions. And current cell-printing methods are too expensive to produce building materials at necessary scales.

ELM looks to merge the best features of these existing technologies and build on them to create hybrid materials composed of non-living scaffolds that give structure to and support the long-term viability of engineered living cells. DARPA intends to develop platform technologies that are scalable and generalizable to facilitate a quick transition from laboratory to commercial applications.

The long-term objective of the ELM program is to develop an ability to engineer structural properties directly into the genomes of biological systems so that neither scaffolds nor external development cues are needed for an organism to realize the desired shape and properties. Achieving this goal will require significant breakthroughs in scientists’ understanding of developmental pathways and how those pathways direct the three-dimensional development of multicellular systems.

Work on ELM will be fundamental research carried out in controlled laboratory settings. DARPA does not anticipate environmental release during the program.

For anyone who’s interested in participating in the program, there’s an announcement (download the PDF for more details) featuring a Proposers Day event on Aug. 26, 2016 being held in Arlington, Virginia,

The Proposers Day objectives are:

1) To introduce the science and technology community (industry, academia, and government) to the ELM program vision and goals;

2) To facilitate interaction between investigators that may have capabilities to develop elements of interest and relevance to ELM goals; and

3) To encourage and promote teaming arrangements among organizations that have the relevant expertise, research facilities and capabilities for executing research and development responsive to the ELM program goals.

The Proposers Day will include overview presentations and optional sidebar meetings where potential proposers can discuss ideas for proposal submissions with the Government team.

The goal of the DARPA ELM program is to explore and develop living materials that combine the structural properties of traditional building materials with attributes of living systems, including the ability to rapidly grow, to self-repair, and to adapt to the environment. Living materials represent a new opportunity to leverage engineered biology to solve existing problems associated with the construction and maintenance of our built environments, as well as to create new capabilities to craft smart infrastructure that dynamically responds to our surroundings. The specific program objectives are to develop design tools and methods that enable the engineering of structural features into cellular systems that function as living materials, thereby opening up a new design space for building technology. These new methods will be validated by the production of living materials that can reproduce, self-organize and self-heal.

You can register for the event here. Register by 12 pm (noon) ET on Aug. 23, 2016.

Nanotechnology and cybersecurity risks

Gregory Carpenter has written a gripping (albeit somewhat exaggerated) piece for Signal, a publication of the Armed Forces Communications and Electronics Association (AFCEA), about cybersecurity issues and nanomedicine endeavours. From Carpenter’s Jan. 1, 2016 article titled, When Lifesaving Technology Can Kill; The Cyber Edge,

The exciting advent of nanotechnology that has inspired disruptive and lifesaving medical advances is plagued by cybersecurity issues that could result in the deaths of people that these very same breakthroughs seek to heal. Unfortunately, nanorobotic technology has suffered from the same security oversights that afflict most other research and development programs.

Nanorobots, or small machines [or nanobots], are vulnerable to exploitation just like other devices.

At the moment, the issue of cybersecurity exploitation is secondary to making nanobots, or nanorobots, dependably functional. As far as I’m aware, there is no such nanobot. Even nanoparticles meant to function as packages for drug delivery have not been perfected (see one of the controversies with nanomedicine drug delivery described in my Nov. 26, 2015 posting).

That said, Carpenter’s point about cybersecurity is well taken since security features are often overlooked in new technology. For example, automated banking machines (ABMs) had woefully poor (inadequate, almost nonexistent) security when they were first introduced.

Carpenter outlines some of the problems that could occur, assuming some of the latest research could be reliably brought to market,

The U.S. military has joined the fray of nanorobotic experimentation, embarking on revolutionary research that could lead to a range of discoveries, from unraveling the secrets of how brains function to figuring out how to permanently purge bad memories. Academia is making amazing advances as well. Harnessing progress by Harvard scientists to move nanorobots within humans, researchers at the University of Montreal, Polytechnique Montreal and Centre Hospitalier Universitaire Sainte-Justine are using mobile nanoparticles inside the human brain to open the blood-brain barrier, which protects the brain from toxins found in the circulatory system.

A different type of technology presents a risk similar to the nanoparticles scenario. A DARPA-funded program known as Restoring Active Memory (RAM) addresses post-traumatic stress disorder, attempting to overcome memory deficits by developing neuroprosthetics that bridge gaps in an injured brain. In short, scientists can wipe out a traumatic memory, and they hope to insert a new one—one the person has never actually experienced. Someone could relish the memory of a stroll along the French Riviera rather than a terrible firefight, even if he or she has never visited Europe.

As an individual receives a disruptive memory, a cyber criminal could manage to hack the controls. Breaches of the brain could become a reality, putting humans at risk of becoming zombie hosts [emphasis mine] for future virus deployments. …

At this point, the ‘zombie’ scenario Carpenter suggests seems a bit over-the-top but it does hearken to the roots of the zombie myth where the undead aren’t mindlessly searching for brains but are humans whose wills have been overcome. Mike Mariani in an Oct. 28, 2015 article for The Atlantic has presented a thought-provoking history of zombies,

… the zombie myth is far older and more rooted in history than the blinkered arc of American pop culture suggests. It first appeared in Haiti in the 17th and 18th centuries, when the country was known as Saint-Domingue and ruled by France, which hauled in African slaves to work on sugar plantations. Slavery in Saint-Domingue under the French was extremely brutal: Half of the slaves brought in from Africa were worked to death within a few years, which only led to the capture and import of more. In the hundreds of years since, the zombie myth has been widely appropriated by American pop culture in a way that whitewashes its origins—and turns the undead into a platform for escapist fantasy.

The original brains-eating fiend was a slave not to the flesh of others but to his own. The zombie archetype, as it appeared in Haiti and mirrored the inhumanity that existed there from 1625 to around 1800, was a projection of the African slaves’ relentless misery and subjugation. Haitian slaves believed that dying would release them back to lan guinée, literally Guinea, or Africa in general, a kind of afterlife where they could be free. Though suicide was common among slaves, those who took their own lives wouldn’t be allowed to return to lan guinée. Instead, they’d be condemned to skulk the Hispaniola plantations for eternity, an undead slave at once denied their own bodies and yet trapped inside them—a soulless zombie.

I recommend reading Mariani’s article, although I do have one nit to pick: I can’t find a reference to brain-eating zombies before George Romero introduced the concept in his movies. This Zombie Wikipedia entry seems to agree with my understanding (if I’m wrong, please do let me know and, if possible, provide a link to the corrective text).

Getting back to Carpenter and cybersecurity with regard to nanomedicine: while his scenarios may seem a trifle extreme, it’s precisely the kind of thinking you need when attempting to anticipate problems. I do wish he’d made it clear that the technology still has a ways to go.

DARPA (US Defense Advanced Research Projects Agency) ‘Atoms to Product’ program launched

It took DARPA (US Defense Advanced Research Projects Agency) over a year after announcing the ‘Atoms to Product’ program in 2014 to select 10 proponents spread across three focus areas. Before moving on to the latest announcement, here’s a description of the ‘Atoms to Product’ program from its Aug. 27, 2014 announcement on Nanowerk,

Many common materials exhibit different and potentially useful characteristics when fabricated at extremely small scales—that is, at dimensions near the size of atoms, or a few ten-billionths of a meter. These “atomic scale” or “nanoscale” properties include quantized electrical characteristics, glueless adhesion, rapid temperature changes, and tunable light absorption and scattering that, if available in human-scale products and systems, could offer potentially revolutionary defense and commercial capabilities. Two as-yet insurmountable technical challenges, however, stand in the way: Lack of knowledge of how to retain nanoscale properties in materials at larger scales, and lack of assembly capabilities for items between nanoscale and 100 microns—slightly wider than a human hair.

DARPA has created the Atoms to Product (A2P) program to help overcome these challenges. The program seeks to develop enhanced technologies for assembling atomic-scale pieces. It also seeks to integrate these components into materials and systems from nanoscale up to product scale in ways that preserve and exploit distinctive nanoscale properties.

A Dec. 29, 2015 news item on Nanowerk features the latest about the project,

DARPA recently selected 10 performers to tackle this challenge: Zyvex Labs, Richardson, Texas; SRI, Menlo Park, California; Boston University, Boston, Massachusetts; University of Notre Dame, South Bend, Indiana; HRL Laboratories, Malibu, California; PARC, Palo Alto, California; Embody, Norfolk, Virginia; Voxtel, Beaverton, Oregon; Harvard University, Cambridge, Massachusetts; and Draper Laboratory, Cambridge, Massachusetts.

A Dec. 29, 2015 DARPA news release, which originated the news item, offers more information and an image illustrating the type of advances already made by one of the successful proponents,

DARPA recently launched its Atoms to Product (A2P) program, with the goal of developing technologies and processes to assemble nanometer-scale pieces—whose dimensions are near the size of atoms—into systems, components, or materials that are at least millimeter-scale in size. At the heart of that goal was a frustrating reality: Many common materials, when fabricated at nanometer-scale, exhibit unique and attractive “atomic-scale” behaviors including quantized current-voltage behavior, dramatically lower melting points and significantly higher specific heats—but they tend to lose these potentially beneficial traits when they are manufactured at larger “product-scale” dimensions, typically on the order of a few centimeters, for integration into devices and systems.

“The ability to assemble atomic-scale pieces into practical components and products is the key to unlocking the full potential of micromachines,” said John Main, DARPA program manager. “The DARPA Atoms to Product Program aims to bring the benefits of microelectronic-style miniaturization to systems and products that combine mechanical, electrical, and chemical processes.”

The program calls for closing the assembly gap in two steps: From atoms to microns and from microns to millimeters. Performers are tasked with addressing one or both of these steps and have been assigned to one of three working groups, each with a distinct focus area.

Image caption: Microscopic tools such as this nanoscale “atom writer” can be used to fabricate minuscule light-manipulating structures on surfaces. DARPA has selected 10 performers for its Atoms to Product (A2P) program whose goal is to develop technologies and processes to assemble nanometer-scale pieces—whose dimensions are near the size of atoms—into systems, components, or materials that are at least millimeter-scale in size. (Image credit: Boston University)
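
To give a sense of what “dramatically lower melting points” means in practice, here is a minimal sketch (my own illustration, not DARPA’s) of the Gibbs-Thomson size effect, using rough, textbook-style figures for gold nanoparticles; the specific numbers are assumptions chosen only to show the trend:

```python
# Minimal sketch (my own illustration, not DARPA data): Gibbs-Thomson
# melting-point depression for a spherical particle of diameter d,
#   T_m(d) ~ T_bulk * (1 - 4*sigma_sl / (H_f * rho_s * d))
# Parameter values below are rough literature figures for gold,
# used only to show the trend.

BULK_TM   = 1337.0   # bulk melting point of gold, kelvin
SIGMA_SL  = 0.27     # solid-liquid interfacial energy, J/m^2 (approximate)
H_FUSION  = 6.37e4   # latent heat of fusion, J/kg (approximate)
RHO_SOLID = 1.93e4   # density of solid gold, kg/m^3

def melting_point(diameter_m: float) -> float:
    """Approximate melting point (K) of a gold particle of the given diameter."""
    return BULK_TM * (1 - 4 * SIGMA_SL / (H_FUSION * RHO_SOLID * diameter_m))

for d_nm in (100, 20, 5, 2):
    print(f"{d_nm:>4} nm particle -> ~{melting_point(d_nm * 1e-9):.0f} K (bulk {BULK_TM:.0f} K)")
```

Roughly speaking, a 100 nm particle melts close to the bulk value while a 2 nm particle melts hundreds of kelvin lower, which is exactly the kind of size-dependent property that disappears once the material is assembled into centimetre-scale parts.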

Here’s more about the projects and the performers (proponents) from the A2P performers page on the DARPA website,

Nanometer to Millimeter in a Single System – Embody, Draper and Voxtel

Current methods to treat ligament injuries in warfighters [also known as, soldiers]—which account for a significant portion of reported injuries—often fail to restore pre-injury performance, due to surgical complexities and an inadequate supply of donor tissue. Embody is developing reinforced collagen nanofibers that mimic natural ligaments and replicate the biological and biomechanical properties of native tissue. Embody aims to create a new standard of care and restore pre-injury performance for warfighters and sports injury patients at a 50% reduction compared to current costs.

Radio Frequency (RF) systems (e.g., cell phones, GPS) have performance limits due to alternating current loss. In lower frequency power systems this is addressed by braiding the wires, but this is not currently possible in cell phones due to an inability to manufacture sufficiently small braided wires [a rough skin-depth calculation after this excerpt shows why the wires have to be so small]. Draper is developing submicron wires that can be braided using DNA self-assembly methods. If successful, portable RF systems will be more power efficient and able to send 10 times more information in a given channel.

For seamless control of structures, physics and surface chemistry—from the atomic-level to the meter-level—Voxtel Inc. and partner Oregon State University are developing an efficient, high-rate, fluid-based manufacturing process designed to imitate nature’s ability to manufacture complex multimaterial products across scales. Historically, challenges relating to the cost of atomic-level control, production speed, and printing capability have been effectively insurmountable. This team’s new process will combine synthesis and delivery of materials into a massively parallel inkjet operation that draws from nature to achieve a DNA-like mediated assembly. The goal is to assemble complex, 3-D multimaterial mixed organic and inorganic products quickly and cost-effectively—directly from atoms.

Optical Metamaterial Assembly – Boston University, University of Notre Dame, HRL and PARC.

Nanoscale devices have demonstrated nearly unlimited power and functionality, but there hasn’t been a general-purpose, high-volume, low-cost method for building them. Boston University is developing an atomic calligraphy technique that can spray paint atoms with nanometer precision to build tunable optical metamaterials for the photonic battlefield. If successful, this capability could enhance the survivability of a wide range of military platforms, providing advanced camouflage and other optical illusions in the visual range much as stealth technology has enabled in the radar range.

The University of Notre Dame is developing massively parallel nanomanufacturing strategies to overcome the requirement today that most optical metamaterials must be fabricated in “one-off” operations. The Notre Dame project aims to design and build optical metamaterials that can be reconfigured to rapidly provide on-demand, customized optical capabilities. The aim is to use holographic traps to produce optical “tiles” that can be assembled into a myriad of functional forms and further customized by single-atom electrochemistry. Integrating these materials on surfaces and within devices could provide both warfighters and platforms with transformational survivability.

HRL Laboratories is working on a fast, scalable and material-agnostic process for improving infrared (IR) reflectivity of materials. Current IR-reflective materials have limited use, because reflectivity is highly dependent on the specific angle at which light hits the material. HRL is developing a technique for allowing tailorable infrared reflectivity across a variety of materials. If successful, the process will enable manufacturable materials with up to 98% IR reflectivity at all incident angles.

PARC is working on building the first digital MicroAssembly Printer, where the “inks” are micrometer-size particles and the “image” outputs are centimeter-scale and larger assemblies. The goal is to print smart materials with the throughput and cost of laser printers, but with the precision and functionality of nanotechnology. If successful, the printer would enable the short-run production of large, engineered, customized microstructures, such as metamaterials with unique responses for secure communications, surveillance and electronic warfare.

Flexible, General Purpose Assembly – Zyvex, SRI, and Harvard.

Zyvex aims to create nano-functional micron-scale devices using customizable and scalable manufacturing that is top-down and atomically precise. These high-performance electronic, optical, and nano-mechanical components would be assembled by SRI micro-robots into fully-functional devices and sub-systems such as ultra-sensitive sensors for threat detection, quantum communication devices, and atomic clocks the size of a grain of sand.

SRI’s Levitated Microfactories will seek to combine the precision of MEMS [micro-electromechanical systems] flexures with the versatility and range of pick-and-place robots and the scalability of swarms [an idea Michael Crichton used in his 2002 novel Prey to induce horror] to assemble and electrically connect micron and millimeter components to build stronger materials, faster electronics, and better sensors.

Many high-impact, minimally invasive surgical techniques are currently performed only by elite surgeons due to the lack of tactile feedback at such small scales relative to what is experienced during conventional surgical procedures. Harvard is developing a new manufacturing paradigm for millimeter-scale surgical tools using low-cost 2D layer-by-layer processes and assembly by folding, resulting in arbitrarily complex meso-scale 3D devices. The goal is for these novel tools to restore the necessary tactile feedback and thereby nurture a new degree of dexterity to perform otherwise demanding micro- and minimally invasive surgeries, and thus expand the availability of life-saving procedures.
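
A quick aside on the Draper item above: the reason wire braiding helps with alternating current loss is the skin effect; current crowds into a thin layer at a conductor’s surface, and that layer gets thinner as frequency rises, so each strand of a braid has to be thinner than the layer to do any good. Here’s a minimal sketch of the standard skin-depth estimate for copper (my own illustration with textbook constants, not anything from DARPA or Draper):

```python
# Minimal sketch (my own illustration): classical skin depth in a good conductor,
#   delta = sqrt(resistivity / (pi * frequency * permeability))
# Strands of a braided (Litz-style) wire need to be thinner than delta
# for braiding to reduce alternating-current loss.
import math

RHO_CU = 1.68e-8              # resistivity of copper, ohm*m (room temperature)
MU_0   = 4 * math.pi * 1e-7   # vacuum permeability, H/m (copper is essentially non-magnetic)

def skin_depth(freq_hz: float, resistivity: float = RHO_CU, mu: float = MU_0) -> float:
    """Skin depth in metres at the given frequency."""
    return math.sqrt(resistivity / (math.pi * freq_hz * mu))

for f in (60.0, 1e6, 1e9, 2.4e9):   # mains power, AM radio, GPS-ish, Wi-Fi bands
    print(f"{f:>13,.0f} Hz -> skin depth of roughly {skin_depth(f) * 1e6:,.1f} micrometres")
```

At 60 Hz the skin depth works out to roughly 8 mm, so conventional braided (Litz) wire with ordinary strands is fine; at the gigahertz frequencies cell phones and GPS use, it shrinks to a micron or two, which is why Draper is after strands well under a micron across.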

Sidebar

‘Sidebar’ is my way of indicating these comments have little to do with the matter at hand but could be interesting factoids for you.

First, Zyvex Labs was last mentioned here in a Sept. 10, 2014 posting titled: OCSiAL will not be acquiring Zyvex. Notice that this announcement came shortly after DARPA’s A2P program was announced and that OCSiAL is one of RUSNANO’s (a Russian funding agency focused on nanotechnology) portfolio companies (see my Oct. 23, 2015 posting for more).

HRL Laboratories, mentioned here in an April 19, 2012 posting mostly concerned with memristors (nanoscale devices that mimic neural or synaptic plasticity), has its roots in Howard Hughes’s research laboratories as noted in the posting. In 2012, HRL was involved in another DARPA project, SyNAPSE.

Finally and minimally, PARC, also known as Xerox PARC, was made famous by Steven Jobs and Steve Wozniak when they set up their own company (Apple), basing their products on innovations that PARC had rejected. There are other versions of the story, including one by Malcolm Gladwell in the New Yorker’s May 16, 2011 issue, which presents a more complicated and, at times, contradictory version of that particular ‘origins’ story.