Tag Archives: University of Notre Dame

New chip for neuromorphic computing runs at a fraction of the energy of today’s systems

An August 17, 2022 news item on Nanowerk announces big (so to speak) claims from a team researching neuromorphic (brainlike) computer chips,

An international team of researchers has designed and built a chip that runs computations directly in memory and can run a wide variety of artificial intelligence (AI) applications, all at a fraction of the energy consumed by computing platforms for general-purpose AI computing.

The NeuRRAM neuromorphic chip brings AI a step closer to running on a broad range of edge devices, disconnected from the cloud, where they can perform sophisticated cognitive tasks anywhere and anytime without relying on a network connection to a centralized server. Applications abound in every corner of the world and every facet of our lives, ranging from smart watches and VR headsets to smart earbuds, smart sensors in factories, and rovers for space exploration.

The NeuRRAM chip is not only twice as energy efficient as state-of-the-art “compute-in-memory” chips, an innovative class of hybrid chips that run computations in memory, but it also delivers results that are just as accurate as conventional digital chips. Conventional AI platforms are a lot bulkier and typically are constrained to using large data servers operating in the cloud.

In addition, the NeuRRAM chip is highly versatile and supports many different neural network models and architectures. As a result, the chip can be used for many different applications, including image recognition and reconstruction as well as voice recognition.

…

An August 17, 2022 University of California at San Diego (UCSD) news release (also on EurekAlert), which originated the news item, provides more detail than usually found in a news release,

“The conventional wisdom is that the higher efficiency of compute-in-memory is at the cost of versatility, but our NeuRRAM chip obtains efficiency while not sacrificing versatility,” said Weier Wan, the paper’s first corresponding author and a recent Ph.D. graduate of Stanford University who worked on the chip while at UC San Diego, where he was co-advised by Gert Cauwenberghs in the Department of Bioengineering. 

The research team, co-led by bioengineers at the University of California San Diego, presents their results in the Aug. 17 [2022] issue of Nature.

Currently, AI computing is both power hungry and computationally expensive. Most AI applications on edge devices involve moving data from the devices to the cloud, where the AI processes and analyzes it. Then the results are moved back to the device. That’s because most edge devices are battery-powered and as a result only have a limited amount of power that can be dedicated to computing. 

By reducing power consumption needed for AI inference at the edge, this NeuRRAM chip could lead to more robust, smarter and accessible edge devices and smarter manufacturing. It could also lead to better data privacy as the transfer of data from devices to the cloud comes with increased security risks. 

On AI chips, moving data from memory to computing units is one major bottleneck. 

“It’s the equivalent of doing an eight-hour commute for a two-hour work day,” Wan said. 

To solve this data transfer issue, researchers used what is known as resistive random-access memory, a type of non-volatile memory that allows for computation directly within memory rather than in separate computing units. RRAM and other emerging memory technologies used as synapse arrays for neuromorphic computing were pioneered in the lab of Philip Wong, Wan’s advisor at Stanford and a main contributor to this work. Computation with RRAM chips is not necessarily new, but generally it leads to a decrease in the accuracy of the computations performed on the chip and a lack of flexibility in the chip’s architecture. 

“Compute-in-memory has been common practice in neuromorphic engineering since it was introduced more than 30 years ago,” Cauwenberghs said.  “What is new with NeuRRAM is that the extreme efficiency now goes together with great flexibility for diverse AI applications with almost no loss in accuracy over standard digital general-purpose compute platforms.”

A carefully crafted methodology was key to the work with multiple levels of “co-optimization” across the abstraction layers of hardware and software, from the design of the chip to its configuration to run various AI tasks. In addition, the team made sure to account for various constraints that span from memory device physics to circuits and network architecture. 

“This chip now provides us with a platform to address these problems across the stack from devices and circuits to algorithms,” said Siddharth Joshi, an assistant professor of computer science and engineering at the University of Notre Dame, who started working on the project as a Ph.D. student and postdoctoral researcher in Cauwenberghs’ lab at UC San Diego. 

Chip performance

Researchers measured the chip’s energy efficiency using a metric known as the energy-delay product, or EDP. EDP combines both the amount of energy consumed for every operation and the amount of time it takes to complete the operation. By this measure, the NeuRRAM chip achieves 1.6 to 2.3 times lower EDP (lower is better) and 7 to 13 times higher computational density than state-of-the-art chips. 
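
To make the metric concrete, here is a minimal sketch in Python of how an EDP comparison works; the energy and delay figures below are invented placeholders for illustration only, not measurements from the NeuRRAM paper.

```python
# Energy-delay product (EDP): energy per operation multiplied by the time the
# operation takes; lower is better because it rewards chips that are both
# frugal and fast. The numbers here are hypothetical placeholders.

def energy_delay_product(energy_per_op_joules: float, delay_per_op_seconds: float) -> float:
    return energy_per_op_joules * delay_per_op_seconds

baseline_edp = energy_delay_product(2.0e-12, 5.0e-9)         # e.g. 2 pJ/op, 5 ns/op
compute_in_memory_edp = energy_delay_product(1.2e-12, 3.8e-9)

print(f"EDP improvement: {baseline_edp / compute_in_memory_edp:.1f}x lower")
```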

Researchers ran various AI tasks on the chip. It achieved 99% accuracy on a handwritten digit recognition task; 85.7% on an image classification task; and 84.7% on a Google speech command recognition task. The chip also achieved a 70% reduction in image-reconstruction error on an image-recovery task. These results are comparable to existing digital chips that perform computation under the same bit-precision, but with drastic savings in energy. 

Researchers point out that one key contribution of the paper is that all the results featured were obtained directly on the hardware. In much previous work on compute-in-memory chips, AI benchmark results were often obtained partly through software simulation. 

Next steps include improving architectures and circuits and scaling the design to more advanced technology nodes. Researchers also plan to tackle other applications, such as spiking neural networks.

“We can do better at the device level, improve circuit design to implement additional features and address diverse applications with our dynamic NeuRRAM platform,” said Rajkumar Kubendran, an assistant professor at the University of Pittsburgh, who started work on the project while a Ph.D. student in Cauwenberghs’ research group at UC San Diego.

In addition, Wan is a founding member of a startup that works on productizing the compute-in-memory technology. “As a researcher and  an engineer, my ambition is to bring research innovations from labs into practical use,” Wan said. 

New architecture 

The key to NeuRRAM’s energy efficiency is an innovative method to sense output in memory. Conventional approaches use voltage as input and measure current as the result. But this leads to the need for more complex and more power hungry circuits. In NeuRRAM, the team engineered a neuron circuit that senses voltage and performs analog-to-digital conversion in an energy efficient manner. This voltage-mode sensing can activate all the rows and all the columns of an RRAM array in a single computing cycle, allowing higher parallelism. 
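
To make the compute-in-memory idea concrete, here is a toy numerical model in Python of a single in-memory matrix-vector multiplication cycle followed by coarse digitization. It is a conceptual sketch only: the array size, weight values and 4-bit quantization are assumptions for illustration, not a model of NeuRRAM’s actual voltage-mode circuit.

```python
import numpy as np

# Toy model of one compute-in-memory cycle: weights stored in an RRAM array
# (represented here as a conductance matrix), every row driven at once, and
# all column outputs produced in parallel before being digitized by the
# neuron circuits. Sizes and values are illustrative assumptions only.

rng = np.random.default_rng(0)
conductances = rng.uniform(0.0, 1.0, size=(256, 256))   # stand-in RRAM weight array
inputs = rng.choice([0.0, 1.0], size=256)                # one binary input vector

analog_outputs = conductances.T @ inputs                 # all columns computed in one step

# Mimic the neuron circuits' analog-to-digital conversion with 4-bit quantization.
bits = 4
levels = 2 ** bits - 1
digital_outputs = np.round(analog_outputs / analog_outputs.max() * levels).astype(int)

print(digital_outputs[:8])
```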

In the NeuRRAM architecture, CMOS neuron circuits are physically interleaved with the RRAM weights. This differs from conventional designs, where CMOS circuits are typically on the periphery of the RRAM weights. The neuron’s connections with the RRAM array can be configured to serve as either input or output of the neuron. This allows neural network inference in various data flow directions without incurring overheads in area or power consumption. This in turn makes the architecture easier to reconfigure. 

To make sure that accuracy of the AI computations can be preserved across various neural network architectures, researchers developed a set of hardware algorithm co-optimization techniques. The techniques were verified on various neural networks including convolutional neural networks, long short-term memory, and restricted Boltzmann machines. 

As a neuromorphic AI chip, NeuRRAM performs parallel distributed processing across 48 neurosynaptic cores. To simultaneously achieve high versatility and high efficiency, NeuRRAM supports data-parallelism by mapping a layer in the neural network model onto multiple cores for parallel inference on multiple data samples. NeuRRAM also offers model-parallelism by mapping different layers of a model onto different cores and performing inference in a pipelined fashion.
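
As a toy illustration of those two mapping strategies, here is a short Python sketch; the core count matches the chip’s 48 cores, but the layer names and the mappings themselves are made up for illustration and do not reflect how NeuRRAM actually assigns work.

```python
# Two ways to map a neural network onto many cores, in miniature.
# Data-parallelism: copy one layer onto several cores so different inputs
# are processed at the same time. Model-parallelism: place successive layers
# on successive cores so inputs flow through a pipeline.

NUM_CORES = 48  # NeuRRAM has 48 neurosynaptic cores

def data_parallel_mapping(layer: str, replicas: int) -> dict:
    """Copy one layer onto several cores so different inputs run simultaneously."""
    assert replicas <= NUM_CORES
    return {f"core_{i}": layer for i in range(replicas)}

def model_parallel_mapping(layers: list[str]) -> dict:
    """Place successive layers on successive cores so inputs flow through a pipeline."""
    assert len(layers) <= NUM_CORES
    return {f"core_{i}": layer for i, layer in enumerate(layers)}

print(data_parallel_mapping("conv1", replicas=4))
print(model_parallel_mapping(["conv1", "conv2", "fc1", "fc2"]))
```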

An international research team

The work is the result of an international team of researchers. 

The UC San Diego team designed the CMOS circuits that implement the neural functions interfacing with the RRAM arrays to support the synaptic functions in the chip’s architecture, for high efficiency and versatility. Wan, working closely with the entire team, implemented the design; characterized the chip; trained the AI models; and executed the experiments. Wan also developed a software toolchain that maps AI applications onto the chip. 

The RRAM synapse array and its operating conditions were extensively characterized and optimized at Stanford University. 

The RRAM array was fabricated and integrated onto CMOS at Tsinghua University. 

The team at Notre Dame contributed to both the design and architecture of the chip and the subsequent machine learning model design and training.

The research started as part of the National Science Foundation funded Expeditions in Computing project on Visual Cortex on Silicon at Penn State University, with continued funding support from the Office of Naval Research Science of AI program, the Semiconductor Research Corporation and DARPA [US Defense Advanced Research Projects Agency] JUMP program, and Western Digital Corporation. 

Here’s a link to and a citation for the paper,

A compute-in-memory chip based on resistive random-access memory by Weier Wan, Rajkumar Kubendran, Clemens Schaefer, Sukru Burc Eryilmaz, Wenqiang Zhang, Dabin Wu, Stephen Deiss, Priyanka Raina, He Qian, Bin Gao, Siddharth Joshi, Huaqiang Wu, H.-S. Philip Wong & Gert Cauwenberghs. Nature volume 608, pages 504–512 (2022) DOI: https://doi.org/10.1038/s41586-022-04992-8 Published: 17 August 2022 Issue Date: 18 August 2022

This paper is open access.

Detecting peanut allergies with nanoparticles

Researchers at the University of Notre Dame are designing a platform that will make allergy detection easier and more precise according to a June 26, 2017 news item on phys.org,

Researchers have developed a novel platform to more accurately detect and identify the presence and severity of peanut allergies, without directly exposing patients to the allergen, according to a new study published in the journal Scientific Reports.

A team of chemical and biomolecular engineers at the University of Notre Dame designed nanoparticles that mimic natural allergens by displaying each allergic component one at a time on their surfaces. The researchers named the nanoparticles “nanoallergens” and used them to dissect the critical components of major peanut allergy proteins and evaluate the potency of the allergic response using the antibodies present in a blood sample from a patient.

“The goal of this study was to show how nanoallergen technology could be used to provide a clearer and more accurate assessment of the severity of an allergic condition,” said Basar Bilgicer, associate professor of chemical and biomolecular engineering and a member of the Advanced Diagnostics and Therapeutics initiative at Notre Dame. “We are currently working with allergy specialist clinicians for further testing and verification of the diagnostic tool using a larger patient population. Ultimately, our vision is to take this technology and make it available to all people who suffer from food allergies.”

A June 26, 2017 University of Notre Dame news release, which originated the news item, explains the need for better allergy detection,

Food allergies are a growing problem in developing countries and are of particular concern to parents. According to the study, 8 percent of children under the age of 4 have a food allergy. Bilgicer said a need exists for more accurate testing, improved diagnostics and better treatment options.

Current food allergy testing methods carry risks or fail to provide detailed information on the severity of the allergic response. For instance, a test known as the oral food challenge requires exposing a patient to increasing amounts of a suspected allergen. Patients must remain under close observation in clinics with highly trained specialists. The test is stopped only when the patient exhibits an extreme allergic response, such as anaphylactic shock. Doctors then treat the reaction with epinephrine injections, antihistamines and steroids.

The skin prick test, another common diagnostic tool, can indicate whether a patient is allergic to a particular food. However, it provides no detail on the severity of those allergies.

During skin prick testing, doctors place a drop of liquid containing the allergen on the patient’s skin, typically on their back, and then scratch the skin to expose the patient. Skin irritations, such as redness, itching and white bumps, are indications that the patient has an allergy.

“Most of the time, parents of children with food allergies are not inclined to have their child go through such excruciating experiences of a food challenge,” Bilgicer said. “Rather than investigate the severity of the allergy, they respond to it with most extreme caution and complete avoidance of the allergen. Meanwhile, there are cases where the skin prick test might have yielded a positive result for a child, and yet the child can consume a handful of the allergen and demonstrate no signs of any allergic response.”

While the study focused on peanut allergens, Bilgicer said he and his team are working on testing the platform on additional allergens and allergic conditions.

Here’s a link to and a citation for the paper,

Determination of Crucial Immunogenic Epitopes in Major Peanut Allergy Protein, Ara h2, via Novel Nanoallergen Platform by Peter E. Deak, Maura R. Vrabel, Tanyel Kiziltepe & Basar Bilgicer. Scientific Reports 7, Article number: 3981 (2017) doi:10.1038/s41598-017-04268-6 Published online: 21 June 2017

This paper is open access.

DARPA (US Defense Advanced Research Projects Agency) ‘Atoms to Product’ program launched

It took over a year after announcing the ‘Atoms to Product’ program in 2014 for DARPA (US Defense Advanced Research Projects Agency) to select 10 proponents for three projects. Before moving on to the latest announcement, here’s a description of the ‘Atoms to Product’ program from its Aug. 27, 2014 announcement on Nanowerk,

Many common materials exhibit different and potentially useful characteristics when fabricated at extremely small scales—that is, at dimensions near the size of atoms, or a few ten-billionths of a meter. These “atomic scale” or “nanoscale” properties include quantized electrical characteristics, glueless adhesion, rapid temperature changes, and tunable light absorption and scattering that, if available in human-scale products and systems, could offer potentially revolutionary defense and commercial capabilities. Two as-yet insurmountable technical challenges, however, stand in the way: Lack of knowledge of how to retain nanoscale properties in materials at larger scales, and lack of assembly capabilities for items between nanoscale and 100 microns—slightly wider than a human hair.

DARPA has created the Atoms to Product (A2P) program to help overcome these challenges. The program seeks to develop enhanced technologies for assembling atomic-scale pieces. It also seeks to integrate these components into materials and systems from nanoscale up to product scale in ways that preserve and exploit distinctive nanoscale properties.

A Dec. 29, 2015 news item on Nanowerk features the latest about the project,

DARPA recently selected 10 performers to tackle this challenge: Zyvex Labs, Richardson, Texas; SRI, Menlo Park, California; Boston University, Boston, Massachusetts; University of Notre Dame, South Bend, Indiana; HRL Laboratories, Malibu, California; PARC, Palo Alto, California; Embody, Norfolk, Virginia; Voxtel, Beaverton, Oregon; Harvard University, Cambridge, Massachusetts; and Draper Laboratory, Cambridge, Massachusetts.

A Dec. 29, 2015 DARPA news release, which originated the news item, offers more information and an image illustrating the type of advances already made by one of the successful proponents,

DARPA recently launched its Atoms to Product (A2P) program, with the goal of developing technologies and processes to assemble nanometer-scale pieces—whose dimensions are near the size of atoms—into systems, components, or materials that are at least millimeter-scale in size. At the heart of that goal was a frustrating reality: Many common materials, when fabricated at nanometer-scale, exhibit unique and attractive “atomic-scale” behaviors including quantized current-voltage behavior, dramatically lower melting points and significantly higher specific heats—but they tend to lose these potentially beneficial traits when they are manufactured at larger “product-scale” dimensions, typically on the order of a few centimeters, for integration into devices and systems.

“The ability to assemble atomic-scale pieces into practical components and products is the key to unlocking the full potential of micromachines,” said John Main, DARPA program manager. “The DARPA Atoms to Product Program aims to bring the benefits of microelectronic-style miniaturization to systems and products that combine mechanical, electrical, and chemical processes.”

The program calls for closing the assembly gap in two steps: From atoms to microns and from microns to millimeters. Performers are tasked with addressing one or both of these steps and have been assigned to one of three working groups, each with a distinct focus area.

Image caption: Microscopic tools such as this nanoscale “atom writer” can be used to fabricate minuscule light-manipulating structures on surfaces. DARPA has selected 10 performers for its Atoms to Product (A2P) program whose goal is to develop technologies and processes to assemble nanometer-scale pieces—whose dimensions are near the size of atoms—into systems, components, or materials that are at least millimeter-scale in size. (Image credit: Boston University)

Here’s more about the projects and the performers (proponents) from the A2P performers page on the DARPA website,

Nanometer to Millimeter in a Single System – Embody, Draper and Voxtel

Current methods to treat ligament injuries in warfighters [also known as soldiers]—which account for a significant portion of reported injuries—often fail to restore pre-injury performance, due to surgical complexities and an inadequate supply of donor tissue. Embody is developing reinforced collagen nanofibers that mimic natural ligaments and replicate the biological and biomechanical properties of native tissue. Embody aims to create a new standard of care and restore pre-injury performance for warfighters and sports injury patients at a 50% reduction in cost compared to current treatments.

Radio Frequency (RF) systems (e.g., cell phones, GPS) have performance limits due to alternating current loss. In lower frequency power systems this is addressed by braiding the wires, but this is not currently possible in cell phones due to an inability to manufacture sufficiently small braided wires. Draper is developing submicron wires that can be braided using DNA self-assembly methods. If successful, portable RF systems will be more power efficient and able to send 10 times more information in a given channel.

For seamless control of structures, physics and surface chemistry—from the atomic-level to the meter-level—Voxtel Inc. and partner Oregon State University are developing an efficient, high-rate, fluid-based manufacturing process designed to imitate nature’s ability to manufacture complex multimaterial products across scales. Historically, challenges relating to the cost of atomic-level control, production speed, and printing capability have been effectively insurmountable. This team’s new process will combine synthesis and delivery of materials into a massively parallel inkjet operation that draws from nature to achieve a DNA-like mediated assembly. The goal is to assemble complex, 3-D multimaterial mixed organic and inorganic products quickly and cost-effectively—directly from atoms.

Optical Metamaterial Assembly – Boston University, University of Notre Dame, HRL and PARC.

Nanoscale devices have demonstrated nearly unlimited power and functionality, but there hasn’t been a general-purpose, high-volume, low-cost method for building them. Boston University is developing an atomic calligraphy technique that can spray paint atoms with nanometer precision to build tunable optical metamaterials for the photonic battlefield. If successful, this capability could enhance the survivability of a wide range of military platforms, providing advanced camouflage and other optical illusions in the visual range much as stealth technology has enabled in the radar range.

The University of Notre Dame is developing massively parallel nanomanufacturing strategies to overcome the requirement today that most optical metamaterials must be fabricated in “one-off” operations. The Notre Dame project aims to design and build optical metamaterials that can be reconfigured to rapidly provide on-demand, customized optical capabilities. The aim is to use holographic traps to produce optical “tiles” that can be assembled into a myriad of functional forms and further customized by single-atom electrochemistry. Integrating these materials on surfaces and within devices could provide both warfighters and platforms with transformational survivability.

HRL Laboratories is working on a fast, scalable and material-agnostic process for improving infrared (IR) reflectivity of materials. Current IR-reflective materials have limited use, because reflectivity is highly dependent on the specific angle at which light hits the material. HRL is developing a technique for allowing tailorable infrared reflectivity across a variety of materials. If successful, the process will enable manufacturable materials with up to 98% IR reflectivity at all incident angles.

PARC is working on building the first digital MicroAssembly Printer, where the “inks” are micrometer-size particles and the “image” outputs are centimeter-scale and larger assemblies. The goal is to print smart materials with the throughput and cost of laser printers, but with the precision and functionality of nanotechnology. If successful, the printer would enable the short-run production of large, engineered, customized microstructures, such as metamaterials with unique responses for secure communications, surveillance and electronic warfare.

Flexible, General Purpose Assembly – Zyvex, SRI, and Harvard.

Zyvex aims to create nano-functional micron-scale devices using customizable and scalable manufacturing that is top-down and atomically precise. These high-performance electronic, optical, and nano-mechanical components would be assembled by SRI micro-robots into fully-functional devices and sub-systems such as ultra-sensitive sensors for threat detection, quantum communication devices, and atomic clocks the size of a grain of sand.

SRI’s Levitated Microfactories will seek to combine the precision of MEMS [micro-electromechanical systems] flexures with the versatility and range of pick-and-place robots and the scalability of swarms [an idea Michael Crichton used in his 2002 novel Prey to induce horror] to assemble and electrically connect micron and millimeter components to build stronger materials, faster electronics, and better sensors.

Many high-impact, minimally invasive surgical techniques are currently performed only by elite surgeons due to the lack of tactile feedback at such small scales relative to what is experienced during conventional surgical procedures. Harvard is developing a new manufacturing paradigm for millimeter-scale surgical tools using low-cost 2D layer-by-layer processes and assembly by folding, resulting in arbitrarily complex meso-scale 3D devices. The goal is for these novel tools to restore the necessary tactile feedback and thereby nurture a new degree of dexterity to perform otherwise demanding micro- and minimally invasive surgeries, and thus expand the availability of life-saving procedures.

Sidebar

‘Sidebar’ is my way of indicating these comments have little to do with the matter at hand but could be interesting factoids for you.

First, Zyvex Labs was last mentioned here in a Sept. 10, 2014 posting titled: OCSiAL will not be acquiring Zyvex. Notice that this  announcement was made shortly after DARPA’s A2P program was announced and that OCSiAL is one of RUSNANO’s (a Russian funding agency focused on nanotechnology) portfolio companies (see my Oct. 23, 2015 posting for more).

HRL Laboratories, mentioned here in an April 19, 2012 posting mostly concerned with memristors (nanoscale devices that mimic neural or synaptic plasticity), has its roots in Howard Hughes’s research laboratories as noted in the posting. In 2012, HRL was involved in another DARPA project, SyNAPSE.

Finally and minimally, PARC, also known as Xerox PARC, was made famous by Steven Jobs and Steve Wozniak when they set up their own company (Apple), basing their products on innovations that PARC had rejected. There are other versions of the story, including one by Malcolm Gladwell in the New Yorker’s May 16, 2011 issue, which presents a more complicated and, at times, contradictory version of that particular ‘origins’ story.

65+ and another poll about nanotechnology awareness

As soon as you reach the age of 65, you cease to develop as a human being and nobody really cares about your opinions. The same is true of you prior to the age of 18. You are of interest from 18 to 29, of more interest from 30 to 39 and 40 to 49, but by the age of 50 you hold diminishing interest (50-64), and after that it almost disappears. At least, that’s what I’m deducing from these standard age categories.

We don’t think a 25-year-old and a 45-year-old belong in the same category but have no problem putting a 65-year-old and an 85-year-old in the same category. Interesting, non?

While the latest nanotechnology poll from Harris Interactive doesn’t break any new ground regarding age categories or ways to ask about nanotechnology awareness (How much have you heard about nanotechnology?) or results (low awareness), Harris offers a very interesting proviso about the poll results,

Methodology

This Harris Poll was conducted online within the United States between June 18 and 25, 2012 among 2,467 adults (aged 18 and over). Figures for age, sex, race/ethnicity, education, region and household income were weighted where necessary to bring them into line with their actual proportions in the population. Propensity score weighting was also used to adjust for respondents’ propensity to be online.

All sample surveys and polls, whether or not they use probability sampling, are subject to multiple sources of error which are most often not possible to quantify or estimate, including sampling error, coverage error, error associated with nonresponse, error associated with question wording and response options, and post-survey weighting and adjustments. Therefore, Harris Interactive avoids the words “margin of error” as they are misleading. [emphases mine] All that can be calculated are different possible sampling errors with different probabilities for pure, unweighted, random samples with 100% response rates. These are only theoretical because no published polls come close to this ideal.

Respondents for this survey were selected from among those who have agreed to participate in Harris Interactive surveys. The data have been weighted to reflect the composition of the adult population. Because the sample is based on those who agreed to participate in the Harris Interactive panel, no estimates of theoretical sampling error can be calculated.
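
For readers wondering what ‘propensity score weighting’ means in practice, here’s a minimal sketch of the general idea in Python; the data, the propensity values and the 0.6 factor are all invented for illustration and have nothing to do with Harris Interactive’s actual model or results.

```python
import numpy as np

# A minimal sketch of inverse-propensity weighting, the general idea behind
# "propensity score weighting". All values are invented for illustration.

rng = np.random.default_rng(1)
n = 1000

# Estimated probability that each respondent's demographic group is online.
online_propensity = rng.uniform(0.2, 0.9, size=n)

# A made-up survey answer (say, "aware of nanotechnology") that happens to be
# more common among groups that are more likely to be online.
responses = (rng.uniform(size=n) < 0.6 * online_propensity).astype(int)

# Weight each respondent by the inverse of their propensity so that groups
# underrepresented online count for more in the weighted estimate.
weights = 1.0 / online_propensity

print(f"Unweighted awareness estimate: {responses.mean():.3f}")
print(f"Propensity-weighted estimate:  {np.average(responses, weights=weights):.3f}")
```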

I don’t know if this is a standard wording or if it’s unique to Harris but it’s certainly the first time I’ve seen a statement that the term ‘margin of error’ is misleading. Coupling it with a frank description of the possible errors and suggesting there may be even more sources of error is refreshing. I also very much appreciate the fact that they’ve shown the questions, although I would like to confirm the order in which they were asked (which I imagine is the order shown).

A Sept. 6, 2012 news item on Nanowerk summarizes the poll results,

Awareness of nanotechnology is still low, but there are some surprising differences in opinion. Perhaps not surprisingly, reports of having heard at least a little about nanotechnology were significantly higher among all sub-65 age groups (ranging from 37% to 46%) than among those in the 65+ age group (26%). However, those older Americans aware of nanotechnology were more optimistic about its potential, with a stronger likelihood than any other age group to indicate a belief that the potential benefits of nanotechnology outweigh the risks (58%, vs. 32%-36% among other age groups).

The Sept. 6, 2012 press release from Harris Interactive (which originated the news item) provides more details including the wording of the questions and tables summarizing the data. Here are a few tidbits from the press release,

Older Americans aware of nanotechnology were significantly more interested than other age groups in seeing it applied to healthcare (80%-83% among those ages 50+, vs. 42%-66% among younger groups) and energy production (63%-74% among those 40+, vs. 43%-53% among those under 40), whereas younger adults familiar with nanotechnology were more interested than the older groups in seeing nanotechnology applied to clothes (16%-19% among those 18-39, vs. 4%-9% among those 40+) and skincare (20% and 10%-12%, respectively). The youngest age group was also significantly more likely than other groups to select “None of these” (15% among those 18-29, vs. 2%-6% among those 30+).

“Though it may initially seem counterintuitive, it actually makes sense that those aware of nanotechnology within the 65+ age group tend to believe that the benefits of nanotechnology will outweigh the risks, as the prevalence of worry in general tends to decline with age,” said Dr. Kathleen Eggleson, leader of the Nano Impacts Intellectual Community at the University of Notre Dame. “Older Americans also have firsthand experience with the emergence of many different technologies that have brought new benefits to their lives.”

“These data may help stakeholders nationwide make informed decisions, plan investments, and tailor education, advocacy, and marketing efforts in the nanotechnology field,” said Peter Tomanovich, Research Director, Health Care at Harris Interactive.

The poll also has information (taking all provisos into account) about US regional differences in awareness and about the sources of information used by those who are aware.

There is no indication in the press release that this poll was requested or paid for by any Harris Interactive client. Based on Tomanovich’s comments, the poll seems to  have been conducted at the company’s own expense as a means of gaining some attention within their government and business client base.

In any event, the poll provides an interesting contrast to the recent article in Nature about nanotechnology and terrorism (mentioned in my Aug. 31, 2012 posting) which suggested there may be a rising tide of violence against nanoscience and nanotechnology based on the bombings in Mexico and other incidents on the international stage.