Tag Archives: University of California at Santa Barbara

The challenges of wet adhesion and how nature solves the problem

I usually post about dry adhesion, that is, sticking to dry surfaces in the way a gecko might. This particular piece concerns wet adhesion, a matter of particular interest in medicine, where you want bandages and the like to stick to wet surfaces, and in marine circles, where the goal is to stop barnacles and other organisms from adhering to boat and ship hulls.

An Aug. 6, 2015 news item on ScienceDaily touts ‘wet adhesion’ research focused on medicinal matters,

Wet adhesion is a true engineering challenge. Marine animals such as mussels, oysters and barnacles are naturally equipped with the means to adhere to rock, buoys and other underwater structures and remain in place no matter how strong the waves and currents.

Synthetic wet adhesive materials, on the other hand, are a different story.

Taking their cue from Mother Nature and the chemical composition of mussel foot proteins, the Alison Butler Lab at UC [University of California] Santa Barbara [UCSB] decided to improve a small molecule called the siderophore cyclic trichrysobactin (CTC) that they had previously discovered. They modified the molecule and then tested its adhesive strength in aqueous environments. The result: a compound that rivals the staying power of mussel glue.

An Aug. 6, 2015 UCSB news release by Julie Cohen, which originated the news item, describes some of the reasons for the research, the interdisciplinary aspect of the collaboration, and technical details of the work (Note: Links have been removed),

“There’s real need in a lot of environments, including medicine, to be able to have glues that would work in an aqueous environment,” said co-author Butler, a professor in UCSB’s Department of Chemistry and Biochemistry. “So now we have the basis of what we might try to develop from here.”

Also part of the interdisciplinary effort were Jacob Israelachvili’s Interfacial Sciences Lab in UCSB’s Department of Chemical Engineering and J. Herbert Waite, a professor in the Department of Molecular, Cellular and Developmental Biology, whose own work focuses on wet adhesion.

“We just happened to see a visual similarity between compounds in the siderophore CTC and in mussel foot proteins,” Butler explained. Siderophores are molecules that bind and transport iron in microorganisms such as bacteria. “We specifically looked at the synergy between the role of the amino acid lysine and catechol,” she added. “Both are present in mussel foot proteins and in CTC.”
Mussel foot proteins contain similar amounts of lysine and the catechol dopa. Catechols are chemical compounds used in such biological functions as neurotransmission. However, certain proteins have adopted dopa for adhesive purposes.

From discussions with Waite, Butler realized that CTC contained not only lysine but also a compound similar to dopa. Further, CTC paired its catechol with lysine, just like mussel foot proteins do.

“We developed a better, more stable molecule than the actual CTC,” Butler explained. “Then we modified it to tease out the importance of the contributions from either lysine or the catechol.”

Co-lead author Greg Maier, a graduate student in the Butler Lab, created six different compounds with varying amounts of lysine and catechol. The Israelachvili lab tested each compound for its surface and adhesion characteristics. Co-lead author Michael Rapp used a surface force apparatus developed in the lab to measure the interactions between mica surfaces in a saline solution.

Only the two compounds containing a cationic amine, such as lysine, and catechol exhibited adhesive strength and a reduced intervening film thickness, which measures the amount two surfaces can be squeezed together. Compounds without catechol had greatly diminished adhesion levels but a similarly reduced film thickness. Without lysine, the compounds displayed neither characteristic. “Our tests showed that lysine was key, helping to remove salt ions from the surface to allow the glue to get to the underlying surface,” Maier said.

“By looking at a different biosystem that has similar characteristics to some of the best-performing mussel glues, we were able to deduce that these two small components work together synergistically to create a favorable environment at surfaces to promote adherence,” explained Rapp, a chemical engineering graduate student. “Our results demonstrate that these two molecular groups not only prime the surface but also work collectively to build better adhesives that stick to surfaces.”

“In a nutshell, our discovery is that you need lysine and you need the catechol,” Butler concluded. “There’s a one-two punch: the lysine clears and primes the surface and the catechol comes down and hydrogen bonds to the mica surface. This is an unprecedented insight about what needs to happen during wet adhesion.”

Here’s a link to and a citation for the paper,

Adaptive synergy between catechol and lysine promotes wet adhesion by surface salt displacement by Greg P. Maier, Michael V. Rapp, J. Herbert Waite, Jacob N. Israelachvili, and Alison Butler. Science 7 August 2015: Vol. 349 no. 6248 pp. 628-632 DOI: 10.1126/science.aab055

This paper is behind a paywall.

I have previously written about mussels and wet adhesion in a Dec. 13, 2012 posting regarding some research at the University of British Columbia (Canada). As for dry adhesion, there’s my June 11, 2014 posting titled: Climb like a gecko (in DARPA’s [US Defense Advanced Research Projects Agency] Z-Man program) amongst others.

Memristor, memristor, you are popular

Regular readers know I have a long-standing interest in memristors and artificial brains. I have three memristor-related pieces of research, all published in the last month or so, for this post.

First, there’s some research into nano memory at RMIT University, Australia, and the University of California at Santa Barbara (UC Santa Barbara). From a May 12, 2015 news item on ScienceDaily,

RMIT University researchers have mimicked the way the human brain processes information with the development of an electronic long-term memory cell.

Researchers at the MicroNano Research Facility (MNRF) have built one of the world’s first electronic multi-state memory cells, which mirrors the brain’s ability to simultaneously process and store multiple strands of information.

The development brings them closer to imitating key electronic aspects of the human brain — a vital step towards creating a bionic brain — which could help unlock successful treatments for common neurological conditions such as Alzheimer’s and Parkinson’s diseases.

A May 11, 2015 RMIT University news release, which originated the news item, reveals more about the researchers’ excitement and about the research,

“This is the closest we have come to creating a brain-like system with memory that learns and stores analog information and is quick at retrieving this stored information,” Dr Sharath said.

“The human brain is an extremely complex analog computer… its evolution is based on its previous experiences, and up until now this functionality has not been able to be adequately reproduced with digital technology.”

The ability to create highly dense and ultra-fast analog memory cells paves the way for imitating highly sophisticated biological neural networks, he said.

The research builds on RMIT’s previous discovery where ultra-fast nano-scale memories were developed using a functional oxide material in the form of an ultra-thin film – 10,000 times thinner than a human hair.

Dr Hussein Nili, lead author of the study, said: “This new discovery is significant as it allows the multi-state cell to store and process information in the very same way that the brain does.

“Think of an old camera which could only take pictures in black and white. The same analogy applies here, rather than just black and white memories we now have memories in full color with shade, light and texture, it is a major step.”

While these new devices are able to store much more information than conventional digital memories (which store just 0s and 1s), it is their brain-like ability to remember and retain previous information that is exciting.

“We have now introduced controlled faults or defects in the oxide material along with the addition of metallic atoms, which unleashes the full potential of the ‘memristive’ effect – where the memory element’s behaviour is dependent on its past experiences,” Dr Nili said.

Nano-scale memories are precursors to the storage components of the complex artificial intelligence network needed to develop a bionic brain.

Dr Nili said the research had myriad practical applications including the potential for scientists to replicate the human brain outside of the body.

“If you could replicate a brain outside the body, it would minimise ethical issues involved in treating and experimenting on the brain which can lead to better understanding of neurological conditions,” Dr Nili said.

The research, supported by the Australian Research Council, was conducted in collaboration with the University of California Santa Barbara.

Here’s a link to and a citation for this memristive nano device,

Donor-Induced Performance Tuning of Amorphous SrTiO3 Memristive Nanodevices: Multistate Resistive Switching and Mechanical Tunability by  Hussein Nili, Sumeet Walia, Ahmad Esmaielzadeh Kandjani, Rajesh Ramanathan, Philipp Gutruf, Taimur Ahmed, Sivacarendran Balendhran, Vipul Bansal, Dmitri B. Strukov, Omid Kavehei, Madhu Bhaskaran, and Sharath Sriram. Advanced Functional Materials DOI: 10.1002/adfm.201501019 Article first published online: 14 APR 2015

© 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim

This paper is behind a paywall.

The second published piece of memristor-related research comes from a UC Santa Barbara and  Stony Brook University (New York state) team but is being publicized by UC Santa Barbara. From a May 11, 2015 news item on Nanowerk (Note: A link has been removed),

In what marks a significant step forward for artificial intelligence, researchers at UC Santa Barbara have demonstrated the functionality of a simple artificial neural circuit (Nature, “Training and operation of an integrated neuromorphic network based on metal-oxide memristors”). For the first time, a circuit of about 100 artificial synapses was proved to perform a simple version of a typical human task: image classification.

A May 11, 2015 UC Santa Barbara news release (also on EurekAlert) by Sonia Fernandez, which originated the news item, situates this development within the ‘artificial brain’ effort while describing it in more detail (Note: A link has been removed),

“It’s a small, but important step,” said Dmitri Strukov, a professor of electrical and computer engineering. With time and further progress, the circuitry may eventually be expanded and scaled to approach something like the human brain’s, which has 10^15 (one quadrillion) synaptic connections.

For all its errors and potential for faultiness, the human brain remains a model of computational power and efficiency for engineers like Strukov and his colleagues, Mirko Prezioso, Farnood Merrikh-Bayat, Brian Hoskins and Gina Adam. That’s because the brain can accomplish in a fraction of a second certain functions that computers would require far more time and energy to perform.

… As you read this, your brain is making countless split-second decisions about the letters and symbols you see, classifying their shapes and relative positions to each other and deriving different levels of meaning through many channels of context, in as little time as it takes you to scan over this print. Change the font, or even the orientation of the letters, and it’s likely you would still be able to read this and derive the same meaning.

In the researchers’ demonstration, the circuit implementing the rudimentary artificial neural network was able to successfully classify three letters (“z”, “v” and “n”) by their images, each letter stylized in different ways or saturated with “noise”. In a process similar to how we humans pick our friends out from a crowd, or find the right key from a ring of similar keys, the simple neural circuitry was able to correctly classify the simple images.
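
To give a concrete feel for the kind of task a network of roughly that size can learn, here is a minimal software analogue, not the team’s memristive crossbar: a small single-layer perceptron trained with a simple delta rule to sort tiny, noisy pixel images into three letter classes. The 3x3 pixel patterns, the tanh outputs, and the training settings below are my own illustrative assumptions rather than details taken from the Nature paper.

    # A minimal software analogue of the image-classification demo described
    # above. The hardware stored each synaptic weight as a memristor
    # conductance; here plain floating-point weights stand in for them, and
    # the 3x3 "z", "v", "n" pixel patterns are my own illustrative stand-ins.
    import numpy as np

    rng = np.random.default_rng(0)

    patterns = {
        "z": [1, 1, 1,
              0, 1, 0,
              1, 1, 1],
        "v": [1, 0, 1,
              1, 0, 1,
              0, 1, 0],
        "n": [1, 1, 0,
              1, 0, 1,
              1, 0, 1],
    }
    X_clean = np.array(list(patterns.values()), dtype=float) * 2 - 1  # pixels as +/-1

    def noisy(x, flips=1):
        """Return a copy of pattern x with `flips` randomly chosen pixels inverted."""
        y = x.copy()
        y[rng.choice(len(y), size=flips, replace=False)] *= -1
        return y

    # Single-layer perceptron: 9 pixel inputs plus a bias -> 3 outputs (one per letter).
    W = rng.normal(scale=0.1, size=(3, 10))

    def forward(x):
        return np.tanh(W @ np.append(x, 1.0))    # bias input fixed at +1

    # Train with a simple delta rule on noisy examples.
    eta = 0.1
    for _ in range(2000):
        k = rng.integers(3)
        x = noisy(X_clean[k])
        target = np.full(3, -1.0)
        target[k] = 1.0
        out = forward(x)
        W += eta * np.outer((target - out) * (1 - out**2), np.append(x, 1.0))

    # Evaluate on freshly noised images.
    trials = 300
    correct = sum(
        int(np.argmax(forward(noisy(X_clean[k])))) == k
        for k in rng.integers(3, size=trials)
    )
    print(f"accuracy on noisy letters: {correct / trials:.2%}")

In a hardware version, each entry of a weight matrix like W would be stored as the analog conductance of a memristor (or a pair of them), with the training updates delivered as voltage pulses that nudge those conductances up or down.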

“While the circuit was very small compared to practical networks, it is big enough to prove the concept of practicality,” said Merrikh-Bayat. According to Gina Adam, as interest grows in the technology, so will research momentum.

“And, as more solutions to the technological challenges are proposed the technology will be able to make it to the market sooner,” she said.

Key to this technology is the memristor (a combination of “memory” and “resistor”), an electronic component whose resistance changes depending on the direction of the flow of the electrical charge. Unlike conventional transistors, which rely on the drift and diffusion of electrons and their holes through semiconducting material, memristor operation is based on ionic movement, similar to the way human neural cells generate neural electrical signals.

“The memory state is stored as a specific concentration profile of defects that can be moved back and forth within the memristor,” said Strukov. The ionic memory mechanism brings several advantages over purely electron-based memories, which makes it very attractive for artificial neural network implementation, he added.

“For example, many different configurations of ionic profiles result in a continuum of memory states and hence analog memory functionality,” he said. “Ions are also much heavier than electrons and do not tunnel easily, which permits aggressive scaling of memristors without sacrificing analog properties.”

This is where analog memory trumps digital memory: In order to create the same human brain-type functionality with conventional technology, the resulting device would have to be enormous — loaded with multitudes of transistors that would require far more energy.
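
If you want a feel for what “the memory element’s behaviour is dependent on its past experiences” means in practice, the toy model below may help. It is the textbook linear ion-drift memristor model with made-up parameter values, a pedagogical sketch rather than a model of the amorphous SrTiO3 or metal-oxide devices discussed in this post: the device’s resistance blends a low and a high value weighted by an internal state, and that state moves in proportion to the charge that has flowed through it.

    # Toy memristor: the textbook linear ion-drift model, for intuition only.
    # It is not a model of the SrTiO3 or metal-oxide devices discussed in this
    # post, and the parameter values below are illustrative assumptions.
    import numpy as np

    R_ON, R_OFF = 100.0, 16e3   # low / high resistance, ohms (illustrative)
    MU, D = 1e-14, 10e-9        # ion mobility (m^2 s^-1 V^-1), film thickness (m)

    def simulate(voltage, t):
        """Integrate dw/dt = (MU * R_ON / D**2) * i(t) for a drive waveform."""
        w = 0.1                                  # initial internal state (0..1)
        dt = t[1] - t[0]
        current, resistance = [], []
        for v in voltage:
            R = R_ON * w + R_OFF * (1.0 - w)     # state-dependent resistance
            i = v / R
            w += (MU * R_ON / D**2) * i * dt     # state moves with the passed charge
            w = min(max(w, 0.0), 1.0)            # bounded by the film thickness
            current.append(i)
            resistance.append(R)
        return np.array(current), np.array(resistance)

    t = np.linspace(0.0, 2.0, 20000)
    v = np.sin(2 * np.pi * 1.0 * t)              # 1 Hz sinusoidal drive
    i, R = simulate(v, t)
    print(f"resistance visited: {R.min():.0f} to {R.max():.0f} ohms; final {R[-1]:.0f} ohms")

Driving the toy device with a sinusoidal voltage traces out the pinched current-voltage hysteresis loop that is the memristor’s signature, and the resistance you end up with depends on the whole drive history, which is the ‘memory’ in memristor.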

“Classical computers will always find an ineluctable limit to efficient brain-like computation in their very architecture,” said lead researcher Prezioso. “This memristor-based technology relies on a completely different way inspired by biological brain to carry on computation.”

To be able to approach functionality of the human brain, however, many more memristors would be required to build more complex neural networks to do the same kinds of things we can do with barely any effort and energy, such as identify different versions of the same thing or infer the presence or identity of an object not based on the object itself but on other things in a scene.

Potential applications already exist for this emerging technology, such as medical imaging, the improvement of navigation systems or even for searches based on images rather than on text. The energy-efficient compact circuitry the researchers are striving to create would also go a long way toward creating the kind of high-performance computers and memory storage devices users will continue to seek long after the proliferation of digital transistors predicted by Moore’s Law becomes too unwieldy for conventional electronics.

Here’s a link to and a citation for the paper,

Training and operation of an integrated neuromorphic network based on metal-oxide memristors by M. Prezioso, F. Merrikh-Bayat, B. D. Hoskins, G. C. Adam, K. K. Likharev & D. B. Strukov. Nature 521, 61–64 (07 May 2015) doi:10.1038/nature14441

This paper is behind a paywall but a free preview is available through ReadCube Access.

The third and last piece of research, which is from Rice University, hasn’t received any publicity yet, which is unusual given Rice’s very active communications/media department. Here’s a link to and a citation for their memristor paper,

2D materials: Memristor goes two-dimensional by Jiangtan Yuan & Jun Lou. Nature Nanotechnology 10, 389–390 (2015) doi:10.1038/nnano.2015.94 Published online 07 May 2015

This paper is behind a paywall but a free preview is available through ReadCube Access.

Dexter Johnson has written up the RMIT research (his May 14, 2015 post on the Nanoclast blog on the IEEE [Institute of Electrical and Electronics Engineers] website). He linked it to research from Mark Hersam’s team at Northwestern University (my April 10, 2015 posting) on creating a three-terminal memristor enabling its use in complex electronics systems. Dexter strongly hints in his headline that these developments could lead to bionic brains.

For those who’d like more memristor information, this June 26, 2014 posting which brings together some developments at the University of Michigan and information about developments in the industrial sector is my suggestion for a starting point. Also, you may want to check out my material on HP Labs, especially prominent in the story due to the company’s 2008 ‘discovery’ of the memristor, described on a page in my Nanotech Mysteries wiki, and the controversy triggered by the company’s terminology (there’s more about the controversy in my April 7, 2010 interview with Forrest H Bennett III).

13,000 year old nanodiamonds and a scientific controversy about extinction

It took scientists from several countries and institutions seven years to reply to criticism of their 2007 theory, which posited that a comet exploded over the earth’s surface roughly 13,000 years ago, causing mass extinctions and leaving nano-sized diamonds in a layer of the earth’s crust. From a Sept. 1, 2014 news item on Azonano,

Tiny diamonds invisible to human eyes but confirmed by a powerful microscope at the University of Oregon are shining new light on the idea proposed in 2007 that a cosmic event — an exploding comet above North America — sparked catastrophic climate change 12,800 years ago.

In a paper appearing online ahead of print in the Journal of Geology, scientists from 21 universities in six countries report the definitive presence of nanodiamonds at some 32 sites in 11 countries on three continents in layers of darkened soil at the Earth’s Younger Dryas boundary.

The scientists have provided a map showing the areas with nanodiamonds,

The solid line defines the current known limits of the Younger Dryas Boundary field of cosmic-impact proxies, spanning 50 million square kilometers. [downloaded from http://www.news.ucsb.edu/2014/014368/nanodiamonds-are-forever]

An Aug. 28, 2014 University of Oregon news release, which originated the Azonano news item, describes the findings and the controversy,

The boundary layer is widespread, the researchers found. The minuscule diamonds, which often form during large impact events, are abundant along with cosmic impact spherules, high-temperature melt-glass, fullerenes, grape-like clusters of soot, charcoal, carbon spherules, glasslike carbon, helium-3, iridium, osmium, platinum, nickel and cobalt.

The combination of components is similar to that found in soils connected with the 1908 in-air explosion of a comet over Siberia and those found in the Cretaceous-Tertiary Boundary (KTB) layer that formed 65 million years ago when a comet or asteroid struck off Mexico and wiped out dinosaurs worldwide.

In the Oct. 9, 2007, issue of Proceedings of the National Academy of Sciences, a 26-member team from 16 institutions proposed that a cosmic impact event set off a 1,300-year-long cold spell known as the Younger Dryas. Prehistoric Clovis culture was fragmented, and widespread extinctions occurred across North America. Former UO researcher Douglas Kennett, a co-author of the new paper and now at Pennsylvania State University, was a member of the original study.

In that [2007] paper and in a series of subsequent studies, reports of nanodiamond-rich soils were documented at numerous sites. However, numerous critics refuted the findings, holding to a long-running theory that over-hunting sparked the extinctions and that the suspected nanodiamonds had been formed by wildfires, volcanism or occasional meteoritic debris, rather than a cosmic event.

The University of Oregon news release goes on to provide a rejoinder from a co-author of both the 2007 and 2014 papers, as well as a discussion of how the scientists gathered their evidence,

The glassy and metallic materials in the YDB layers would have formed at temperatures in excess of 2,200 degrees Celsius and could not have resulted from the alternative scenarios, said co-author James Kennett, professor emeritus at the University of California, Santa Barbara, in a news release. He also was on the team that originally proposed a comet-based event.

In the new paper, researchers slightly revised the date of the theorized cosmic event and cited six examples of independent research that have found consistent peaks in the creation of the nanodiamonds that match their hypothesis.

“The evidence presented in this paper rejects the alternate hypotheses and settles the debate about the existence of the nanodiamonds,” said the paper’s corresponding author Allen West of GeoScience Consulting of Dewey, Arizona. “We provide the first comprehensive review of the state of the debate and about YDB nanodiamonds deposited across three continents.”

West worked in close consultation with researchers at the various labs that conducted the independent testing, including with co-author Joshua J. Razink, operator and instrument manager since 2011 of the UO’s state-of-the-art high-resolution transmission electron microscope (HR-TEM) in the Center for Advanced Materials Characterization in Oregon (CAMCOR).

Razink was provided with samples previously cited in many of the earlier studies, as well as untested soil samples delivered from multiple new sites. The samples were placed onto grids and analyzed thoroughly, he said.

“These diamonds are incredibly small, on the order of a few nanometers and are invisible to the human eye and even to an optical microscope,” Razink said. “For reference, if you took a meter stick and cut it into one billion pieces, each of those pieces is one nanometer. The only way to really get definitive characterization that these are diamonds is to use tools like the transmission electron microscope. It helps us to rule out that the samples are not graphene or copper. Our findings say these samples are nanodiamonds.”

In addition to the HR-TEM done at the UO, researchers also used standard TEM, electron energy loss spectroscopy (EELS), energy-dispersive X-ray spectroscopy (EDS), selected area diffraction (SAD), fast Fourier transform (FFT) algorithms, and energy-filtered transmission electron microscopy (EFTEM).

“The chemical processing methods described in the paper,” Razink said, “lay out with great detail the methodology that one needs to go through in order to prepare their samples and identify these diamonds.”

The University of California at Santa Barbara (UCSB) produced an Aug. 28, 2014 news release about this work and while there is repetition, there is additional information such as a lede describing some of what was made extinct,

Most of North America’s megafauna — mastodons, short-faced bears, giant ground sloths, saber-toothed cats and American camels and horses — disappeared close to 13,000 years ago at the end of the Pleistocene period. …

An Aug. 27, 2014 news item (scroll down to the end) on ScienceDaily provides a link and a citation to the latest paper.

Disassembling silver nanoparticles

The notion of self-assembling material is a longstanding trope in the nanotechnology narrative. Scientists at the University of California at Santa Barbara (UCSB) have upended this trope with their work on disassembling silver nanoparticles used to deliver medications. From a June 9, 2014 news item on Azonano,

Scientists at UC Santa Barbara have designed a nanoparticle that has a couple of unique — and important — properties. Spherical in shape and silver in composition, it is encased in a shell coated with a peptide that enables it to target tumor cells. What’s more, the shell is etchable so those nanoparticles that don’t hit their target can be broken down and eliminated. The research findings appear today in the journal Nature Materials.

A June 8, 2014 UCSB news release (also on EurekAlert) by Julie Cohen, which originated the news item, explains the technology and this platform’s unique features in more detail,

The core of the nanoparticle employs a phenomenon called plasmonics. In plasmonics, nanostructured metals such as gold and silver resonate in light and concentrate the electromagnetic field near the surface. In this way, fluorescent dyes are enhanced, appearing about tenfold brighter than their natural state when no metal is present. When the core is etched, the enhancement goes away and the particle becomes dim.

UCSB’s Ruoslahti Research Laboratory also developed a simple etching technique using biocompatible chemicals to rapidly disassemble and remove the silver nanoparticles outside living cells. This method leaves only the intact nanoparticles for imaging or quantification, thus revealing which cells have been targeted and how much each cell internalized.

“The disassembly is an interesting concept for creating drugs that respond to a certain stimulus,” said Gary Braun, a postdoctoral associate in the Ruoslahti Lab in the Department of Molecular, Cellular and Developmental Biology (MCDB) and at Sanford-Burnham Medical Research Institute. “It also minimizes the off-target toxicity by breaking down the excess nanoparticles so they can then be cleared through the kidneys.”

This method for removing nanoparticles unable to penetrate target cells is unique. “By focusing on the nanoparticles that actually got into cells,” Braun said, “we can then understand which cells were targeted and study the tissue transport pathways in more detail.”

Some drugs are able to pass through the cell membrane on their own, but many drugs, especially RNA and DNA genetic drugs, are charged molecules that are blocked by the membrane. These drugs must be taken in through endocytosis, the process by which cells absorb molecules by engulfing them.

“This typically requires a nanoparticle carrier to protect the drug and carry it into the cell,” Braun said. “And that’s what we measured: the internalization of a carrier via endocytosis.”

Because the nanoparticle has a core shell structure, the researchers can vary its exterior coating and compare the efficiency of tumor targeting and internalization. Switching out the surface agent enables the targeting of different diseases — or organisms in the case of bacteria — through the use of different target receptors. According to Braun, this should turn into a way to optimize drug delivery where the core is a drug-containing vehicle.

“These new nanoparticles have some remarkable properties that have already proven useful as a tool in our work that relates to targeted drug delivery into tumors,” said Erkki Ruoslahti, adjunct distinguished professor in UCSB’s Center for Nanomedicine and MCDB department and at Sanford-Burnham Medical Research Institute. “They also have potential applications in combating infections. Dangerous infections caused by bacteria that are resistant to all antibiotics are getting more common, and new approaches to deal with this problem are desperately needed. Silver is a locally used antibacterial agent and our targeting technology may make it possible to use silver nanoparticles in treating infections anywhere in the body.”

Here’s a link to and a citation for the paper,

Etchable plasmonic nanoparticle probes to image and quantify cellular internalization by Gary B. Braun, Tomas Friman, Hong-Bo Pang, Alessia Pallaoro, Tatiana Hurtado de Mendoza, Anne-Mari A. Willmore, Venkata Ramana Kotamraju, Aman P. Mann, Zhi-Gang She, Kazuki N. Sugahara, Norbert O. Reich, Tambet Teesalu, & Erkki Ruoslahti. Nature Materials (2014) doi:10.1038/nmat3982 Published online 08 June 2014

This article is behind a paywall but there is a free preview via ReadCube Access.

Competition, collaboration, and a smaller budget: the US nano community responds

Before getting to the competition, collaboration, and budget mentioned in the head for this posting, I’m supplying some background information.

Within the context of a May 20, 2014 ‘National Nanotechnology Initiative’ hearing before the U.S. House of Representatives Subcommittee on Research and Technology, Committee on Science, Space, and Technology, the US Government Accountability Office (GAO) presented a 22 pp. précis (PDF; titled: NANOMANUFACTURING AND U.S. COMPETITIVENESS: Challenges and Opportunities) of its 125 pp. report (PDF; titled: Nanomanufacturing: Emergence and Implications for U.S. Competitiveness, the Environment, and Human Health).

Having already commented on the full report itself in a Feb. 10, 2014 posting, I’m pointing you to Dexter Johnson’s May 21, 2014 post on his Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website) where he discusses the précis from the perspective of someone who was consulted by the US GAO when they were writing the full report (Note: Links have been removed),

I was interviewed extensively by two GAO economists for the accompanying [full] report “Nanomanufacturing: Emergence and Implications for U.S. Competitiveness, the Environment, and Human Health,” where I shared background information on research I helped compile and write on global government funding of nanotechnology.

While I acknowledge that the experts who were consulted for this report are more likely the source for its views than I am, I was pleased to see the report reflect many of my own opinions. Most notable among these is bridging the funding gap in the middle stages of the manufacturing-innovation process, which is placed at the top of the report’s list of challenges.

While I am in agreement with much of the report’s findings, it suffers from a fundamental misconception in seeing nanotechnology’s development as a kind of race between countries. [emphases mine]

(I encourage you to read the full text of Dexter’s comments as he offers more than a simple comment about competition.)

Carrying on from this notion of a ‘nanotechnology race’, at least one publication focused on that aspect. From the May 20, 2014 article by Ryan Abbott for CourthouseNews.com,

Nanotech Could Keep U.S. Ahead of China

WASHINGTON (CN) – Four of the nation’s leading nanotechnology scientists told a U.S. House of Representatives panel Tuesday that a little tweaking could go a long way in keeping the United States ahead of China and others in the industry.

The hearing focused on the status of the National Nanotechnology Initiative, a federal program launched in 2001 for the advancement of nanotechnology.

As I noted earlier, the hearing was focused on the National Nanotechnology Initiative (NNI) and all of its efforts. It’s quite intriguing to see what gets emphasized in media reports and, in this case, the dearth of media reports.

I have one more tidbit, the testimony from Lloyd Whitman, Interim Director of the National Nanotechnology Coordination Office and Deputy Director of the Center for Nanoscale Science and Technology, National Institute of Standards and Technology. The testimony is in a May 21, 2014 news item on insurancenewsnet.com,

Testimony by Lloyd Whitman, Interim Director of the National Nanotechnology Coordination Office and Deputy Director of the Center for Nanoscale Science and Technology, National Institute of Standards and Technology

Chairman Bucshon, Ranking Member Lipinski, and Members of the Committee, it is my distinct privilege to be here with you today to discuss nanotechnology and the role of the National Nanotechnology Initiative in promoting its development for the benefit of the United States.

Highlights of the National Nanotechnology Initiative

Our current Federal research and development program in nanotechnology is strong. The NNI agencies continue to further the NNI’s goals of (1) advancing nanotechnology R&D, (2) fostering nanotechnology commercialization, (3) developing and maintaining the U.S. workforce and infrastructure, and (4) supporting the responsible and safe development of nanotechnology. …

…

The sustained, strategic Federal investment in nanotechnology R&D combined with strong private sector investments in the commercialization of nanotechnology-enabled products has made the United States the global leader in nanotechnology. The most recent (2012) NNAP report analyzed a wide variety of sources and metrics and concluded that “… in large part as a result of the NNI the United States is today… the global leader in this exciting and economically promising field of research and technological development.” A recent report on nanomanufacturing by Congress’s own Government Accountability Office (GAO) arrived at a similar conclusion, again drawing on a wide variety of sources and stakeholder inputs. As discussed in the GAO report, nanomanufacturing and commercialization are key to capturing the value of Federal R&D investments for the benefit of the U.S. economy. The United States leads the world by one important measure of commercial activity in nanotechnology: According to one estimate, U.S. companies invested $4.1 billion in nanotechnology R&D in 2012, far more than investments by companies in any other country. …

There’s cognitive dissonance at work here as Dexter notes in his own way,

… somewhat ironically, the [GAO] report suggests that one of the ways forward is more international cooperation, at least in the development of international standards. And in fact, one of the report’s key sources of information, Mihail Roco, has made it clear that international cooperation in nanotechnology research is the way forward.

It seems to me that much of the testimony and at least some of the anxiety about being left behind can be traced to a decreased 2015 budget allotment for nanotechnology (mentioned here in a March 31, 2014 posting [US National Nanotechnology Initiative’s 2015 budget request shows a decrease of $200M]).

One can also infer a certain anxiety from a recent presentation by Barbara Herr Harthorn, head of UCSB’s [University of California at Santa Barbara] Center for Nanotechnology in Society (CNS). She was at a February 2014 meeting of the Presidential Commission for the Study of Bioethical Issues (mentioned in parts one and two [the more substantive description of the meeting, which also features a Canadian academic from the genomics community] of my recent series on “Brains, prostheses, nanotechnology, and human enhancement”). I noted in part five of the series what seems to be a shift towards brain research as a likely beneficiary of the public engagement work accomplished under NNI auspices and, in the case of the Canadian academic, the genomics effort.

The Americans are not the only ones feeling competitive as this tweet from Richard Jones, Pro-Vice Chancellor for Research and Innovation at Sheffield University (UK), physicist, and author of Soft Machines, suggests,

May 18

The UK has fewer than 1% of world patents on graphene, despite it being discovered here, according to the FT –

I recall reading a report a few years back which noted that experts in China were concerned about falling behind internationally in their research efforts. These anxieties are not new, CP Snow’s book and lecture The Two Cultures (1959) also referenced concerns in the UK about scientific progress and being left behind.

Competition/collaboration is an age-old conundrum and about as ancient as anxieties of being left behind. The question now is how are we all going to resolve these issues this time?

ETA May 28, 2014: The American Institute of Physics (AIP) has produced a summary of the May 20, 2014 hearing as part of their FYI: The AIP Bulletin of Science Policy News, May 27, 2014 (no. 93).

ETA Sept. 12, 2014: My first posting about the diminished budget allocation for the US NNI was this March 31, 2014 posting.

Regulators not prepared to manage nanotechnology risks according to survey

The focus of the survey mentioned in the heading is on the US regulatory situation regarding nanotechnology and, interestingly, much of the work was done by researchers at the University of British Columbia (UBC; Vancouver, Canada). A Dec. 19, 2013 news item on Nanowerk provides an overview,

In a survey of nanoscientists and engineers, nano-environmental health and safety scientists, and regulators, researchers at the UCSB Center for Nanotechnology in Society (CNS) and at the University of British Columbia found that those who perceive the risks posed by nanotechnology as “novel” are more likely to believe that regulators are unprepared. Representatives of regulatory bodies themselves felt most strongly that this was the case. “The people responsible for regulation are the most skeptical about their ability to regulate,” said CNS Director and co-author Barbara Herr Harthorn.

“The message is essentially,” said first author Christian Beaudrie of the Institute for Resources, Environment, and Sustainability at the University of British Columbia, “the more that risks are seen as new, the less trust survey respondents have in regulatory mechanisms. That is, regulators don’t have the tools to do the job adequately.”

The Dec. (?), 2013 University of California at Santa Barbara news release (also on EurekAlert), which originated the news item, adds this,

The authors also believe that when respondents suggested that more stakeholder groups need to share the responsibility of preparing for the potential consequences of nanotechnologies, this indicated a greater “perceived magnitude or complexity of the risk management challenge.” Therefore, they assert, not only are regulators unprepared, they need input from “a wide range of experts along the nanomaterial life cycle.” These include laboratory scientists, businesses, health and environmental groups (NGOs), and government agencies.

Here’s a link to and a citation for the paper,

Expert Views on Regulatory Preparedness for Managing the Risks of Nanotechnologies by Christian E. H. Beaudrie, Terre Satterfield, Milind Kandlikar, Barbara H. Harthorn. PLOS [Public Library of Science] ONE Published: November 11, 2013 DOI: 10.1371/journal.pone.0080250

All of the papers on PLOS ONE are open access.

I have taken a look at this paper and notice there will be a separate analysis of the Canadian scene produced at a later date. As for the US analysis, this paper certainly confirms the conjectures I had made based on my own observations and intuitions, given the uneasiness various groups and individuals have expressed about the regulatory situation.

I would have liked to have seen a critique of previous studies rather than a summary, as well as a critique of the survey itself in its discussion/conclusion. I also would have liked to have seen an appendix listing the survey questions in the order in which they were asked, and some qualitative research (one-on-one interviews) rather than 100% dependence on an email survey. That said, I was glad to see they reversed the meaning of some of the questions to double-check for respondents who might give the same answer (e.g., 9 [very concerned]) throughout as a way of simplifying their participation,

Onward to the survey with an excerpt from the description of how it was conducted,

Subjects were contacted by email in a three-step process, including initial contact and two reminders at two-week intervals. Respondents received an ‘A’ or ‘B’ version of the survey at random, where the wording of several survey questions were modified to reverse the meaning of the question. Questions with alternate wording were reversed-coded during analysis to enable direct comparison of responses. Where appropriate the sequence of questions was also varied to minimize order effects.

Here’s how the researchers separated the experts into various groups (excerpted from the study),

This study thus draws from a systematic sampling of US-based nano-scientists and engineers (NSE, n=114), nano-environmental health and safety scientists (NEHS, n=86), and regulatory decision makers and scientists (NREG, n=54), to characterize how well-prepared different experts think regulatory agencies are for the risk management of nanomaterials and applications. We tested the following hypothesis:

  (1) Expert views on whether US federal agencies are sufficiently prepared for managing any risks posed by nanotechnologies will differ significantly across classes of experts (NSE vs. NEHS vs. NREG).

This difference across experts was anticipated and so tested in reference to four additional hypotheses:

  (2) Experts who see nanotechnologies as novel (i.e., as a new class of materials or objects) will view US federal regulatory agencies as unprepared for managing risks as compared to those who see nanotechnologies as not new (i.e., as little different from their bulk chemical form).
  (3) Experts who deem US federal regulatory agencies as less trustworthy will also view agencies as less prepared compared to those with more trust in agencies.
  (4) Experts who attribute greater collective stakeholder responsibility (e.g., who view a range of stakeholders as equally responsible for managing risks) will see agencies as less prepared compared to those who attribute less responsibility.
  (5) Experts who are more socially and economically conservative will see regulatory agencies as more prepared compared to those with a more liberal orientation.

The researchers included Index Variables of trust, responsibility, conservatism, novelty-risks, and novelty-benefits in relationship to education, gender, field of expertise, etc. for a regression analysis. In the discussion (or conclusion), the authors had this to say (excerpted from the study),

Consistent differences exist between expert groups in their views on agency preparedness to manage nanotechnology risks, yet all three groups perceive regulatory agencies as unprepared. What is most striking however is that NREG experts see regulatory agencies as considerably less prepared than do their NSE or NEHS counterparts. Taking a closer look, the drivers of experts’ concerns over regulator preparedness tell a more nuanced story. After accounting for other differences, the ‘expert group’ classification per se does not drive the observed differences in preparedness perceptions. Rather a substantial portion of this difference results from differing assessments of the perceived novelty of risks across expert groups. Of the remaining variables, trust in regulators is a small but significant driver, and our findings suggest a link between concerns over the novelty of nanomaterials and the adequacy of regulatory design. Experts’ views on stakeholder responsibility are not particularly surprising since greater reliance on a collective responsibility model would need the burden to move away exclusively from regulatory bodies to other groups, and result presumptively in a reduced sense of preparedness.

Experts’ reliance in part upon socio-political values indicates that personal values also play a minor role in preparedness judgments.
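
For readers who want a concrete picture of the kind of regression described above, here is a minimal sketch: perceived preparedness modelled on the trust, responsibility, conservatism, and novelty index variables plus expert group and demographics. The column names and the synthetic data are hypothetical stand-ins so the example runs end to end; the paper’s actual variable construction and model specification may well differ.

    # Minimal sketch of a preparedness regression of the kind described above.
    # All column names and the synthetic data are hypothetical stand-ins, not
    # the variables or data used in the PLOS ONE paper.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    n = 254  # roughly the pooled sample size quoted above (114 + 86 + 54)

    df = pd.DataFrame({
        "preparedness": rng.normal(4, 1.5, n),        # e.g., a 1-9 style rating
        "trust_index": rng.normal(0, 1, n),
        "responsibility_index": rng.normal(0, 1, n),
        "conservatism_index": rng.normal(0, 1, n),
        "novelty_risk_index": rng.normal(0, 1, n),
        "novelty_benefit_index": rng.normal(0, 1, n),
        "expert_group": rng.choice(["NSE", "NEHS", "NREG"], n),
        "gender": rng.choice(["F", "M"], n),
        "education_years": rng.integers(16, 26, n),
    })

    model = smf.ols(
        "preparedness ~ trust_index + responsibility_index + conservatism_index"
        " + novelty_risk_index + novelty_benefit_index"
        " + C(expert_group) + C(gender) + education_years",
        data=df,
    ).fit()
    print(model.summary())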

I look forward to seeing the Canadian analysis. The paper is worth reading for some of the more subtle analysis I did not include here.

Nanotechnology analogies and policy

There’s a two part essay titled, Regulating Nanotechnology Via Analogy (part 1, Feb. 12, 2013 and part 2, Feb. 18, 2013), by Patrick McCray on his Leaping Robot blog that is well worth reading if you are interested in the impact analogies can have on policymaking.

Before launching into the analogies, here’s a bit about Patrick McCray from the Welcome page to his website, (Note: A link has been removed),

As a professor in the History Department of the University of California, Santa Barbara and a co-founder of the Center for Nanotechnology in Society, my work focuses on different technological and scientific communities and their interactions with the public and policy makers. For the past ten years or so, I’ve been especially interested in the historical development of so-called “emerging technologies,” whenever they emerged.

I hope you enjoy wandering around my web site. The section of it that changes most often is my Leaping Robot blog. I update this every few weeks or so with an extended reflection or essay about science and technology, past and future.

In part 1 (Feb. 12, 2013) of the essay, McCray states (Note: Links and footnotes have been removed),

[Blogger’s note: This post is adapted from a talk I gave in March 2012 at the annual Business History Conference; it draws on research done by Roger Eardley-Pryor, an almost-finished graduate student I’m advising at UCSB [University of California at Santa Barbara], and me. I’m posting it here with his permission. This is the first of a two-part essay…some of the images come from slides we put together for the talk.]

Over the last decade, a range of actors – scientists, policy makers, and activists – have used  historical analogies to suggest different ways that risks associated with nanotechnology – especially those concerned with potential environmental implications – might be minimized. Some of these analogies make sense…others, while perhaps effective, are based on a less than ideal reading of history.

Analogies have been used before as tools to evaluate new technologies. In 1965, NASA requested comparisons between the American railroad of the 19th century and the space program. In response, MIT historian Bruce Mazlish wrote a classic article that analyzed the utility and limitations of historical analogies. Analogies, he explained, function as both model and myth. Mythically, they offer meaning and emotional security through an original archetype of familiar knowledge. Analogies also furnish models for understanding by construing either a structural or a functional relationship. As such, analogies function as devices of anticipation, which is what today is fashionably called “anticipatory governance.” They also can serve as a useful tool for risk experts.

McCray goes on to cover some of the early discourse on nanotechnology, the players, and early analogies. While the focus is on the US, the discourse reflects many if not all of the concerns being expressed internationally.

In part 2 posted on Feb. 18, 2013 McCray mentions four of the main analogies used with regard to nanotechnology and risk (Note: Footnotes have been removed),

Example #1 – Genetically Modified Organisms

In April 2003, Prof. Vicki Colvin testified before Congress. A chemist at Rice University, Colvin also directed that school’s Center for Biological and Environmental Nanotechnology. This “emerging technology,” Colvin said, had a considerable “wow index.” However, Colvin warned, every promising new technology came with concerns that could drive it from “wow into yuck and ultimately into bankrupt.” To make her point, Colvin compared nanotech to recent experiences researchers and industry had experienced with genetically modified organisms. Colvin’s analogy – “wow to yuck” – made an effective sound bite. But it also conflated two very different histories of two specific emerging technologies.

While some lessons from GMOs are appropriate for controlling the development of nanotechnology, the analogy doesn’t prove watertight. Unlike GMOs, nanotechnology does not always involve biological materials. And genetic engineering in general, never enjoyed any sort of unalloyed “wow” period. There was “yuck” from the outset. Criticism accompanied GMOs from the very start. Furthermore, giant agribusiness firms prospered handsomely even after the public’s widespread negative reactions to their products.  Lastly, living organisms – especially those associated with food – designed for broad release into the environment were almost guaranteed to generate concerns and protests. Rhetorically, the GMO analogy was powerful…but a deeper analysis clearly suggests there were more differences than similarities.

McCray offers three more examples of analogies used to describe nanotechnology: asbestos, (radioactive) fallout, and recombinant DNA. He dissects each, concludes they are not the best analogies to be using, and then offers this thought,

So — if historical analogies can teach us anything about the potential regulation of nano and other emerging technologies, they indicate the need to take a little risk in forming socially and politically constructed definitions of nano. These definitions should be based not just on science but rather mirror the complex and messy realm of research, policy, and application. No single analogy fits all cases but an ensemble of several (properly chosen, of course) can suggest possible regulatory options.

I recommend reading both parts of McCray’s essay in full. It’s a timely piece especially in light of a Feb. 28, 2013 article by Daniel Hurst for Australian website, theage.com.au, where a union leader raises health fears about nanotechnology by using the response to asbestos health concerns as the analogy,

Union leader Paul Howes has likened nanotechnology to asbestos, calling for more research to ease fears that the growing use of fine particles could endanger manufacturing workers.

”I don’t want to make the mistake that my predecessors made by not worrying about asbestos,” the Australian Workers Union secretary said.

I have covered the topic of carbon nanotubes and asbestos many times, one of the latest being this Jan. 16, 2013 posting. Not all carbon nanotubes act like asbestos; it is the long carbon nanotubes that present the problem.

Biosensing cocaine

Amusingly, the Feb. 13, 2013 news item on Nanowerk highlights the biosensing aspect of the work in its title,

New biosensing nanotechnology adopts natural mechanisms to detect molecules

(Nanowerk News) Since the beginning of time, living organisms have developed ingenious mechanisms to monitor their environment.

The Feb. 13, 2013 news release from the University of Montreal (Université de Montréal) takes a somewhat different tack by focusing on cocaine,

Detecting cocaine “naturally”

Since the beginning of time, living organisms have developed ingenious mechanisms to monitor their environment. As part of an international study, a team of researchers has adapted some of these natural mechanisms to detect specific molecules such as cocaine more accurately and quickly. Their work may greatly facilitate the rapid screening—less than five minutes—of many drugs, infectious diseases, and cancers.

Professor Alexis Vallée-Bélisle of the University of Montreal Department of Chemistry has worked with Professor Francesco Ricci of the University of Rome Tor Vergata and Professor Kevin W. Plaxco of the University of California at Santa Barbara to improve a new biosensing nanotechnology. The results of the study were recently published in the Journal of the American Chemical Society (JACS).

The scientists have provided an interesting image to illustrate their work,

Artist’s rendering: the research team used an existing cocaine biosensor (in green) and revised its design to react to a series of inhibitor molecules (in blue). They were able to adapt the biosensor to respond optimally even within a large concentration window. Courtesy: University of Montreal

The news release provides some insight into the current state of biosensing and what the research team was attempting to accomplish,

“Nature is a continuing source of inspiration for developing new technologies,” says Professor Francesco Ricci, senior author of the study. “Many scientists are currently working to develop biosensor technology to detect—directly in the bloodstream and in seconds—drug, disease, and cancer molecules.”

“The most recent rapid and easy-to-use biosensors developed by scientists to determine the levels of various molecules such as drugs and disease markers in the blood only do so when the molecule is present in a certain concentration, called the concentration window,” adds Professor Vallée-Bélisle. “Below or above this window, current biosensors lose much of their accuracy.”

To overcome this limitation, the international team looked at nature: “In cells, living organisms often use inhibitor or activator molecules to automatically program the sensitivity of their receptors (sensors), which are able to identify the precise amount of thousands of molecules in seconds,” explains Professor Vallée-Bélisle. “We therefore decided to adapt these inhibition, activation, and sequestration mechanisms to improve the efficiency of artificial biosensors.”

The researchers put their idea to the test by using an existing cocaine biosensor and revising its design so that it would respond to a series of inhibitor molecules. They were able to adapt the biosensor to respond optimally even with a large concentration window. “What is fascinating,” says Alessandro Porchetta, a doctoral student at the University of Rome, “is that we were successful in controlling the interactions of this system by mimicking mechanisms that occur naturally.”
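
To see why inhibitors are such a useful lever, it helps to sketch the arithmetic of the ‘concentration window’. For a generic one-site sensor, the signal follows a simple binding curve, and the usable window (conventionally the 10%-90% band) spans only about 81-fold in target concentration. The sketch below uses a generic competitive-inhibition model with made-up constants, not the allosteric scheme or parameters from the JACS paper, to show how an inhibitor shifts that window to higher concentrations.

    # Back-of-envelope sketch of the "concentration window" idea, using a
    # generic one-site binding curve and a simple competitive-inhibition
    # model. Constants are illustrative assumptions, not values from the paper.

    def fraction_bound(target, kd):
        """Generic one-site binding curve: signal fraction at a target concentration."""
        return target / (target + kd)

    def useful_window(kd):
        """Target concentrations giving 10% and 90% of full signal (an ~81-fold span)."""
        return kd / 9.0, 9.0 * kd

    kd = 1.0   # dissociation constant of the bare sensor (arbitrary units, assumed)
    ki = 0.5   # dissociation constant of the inhibitor (arbitrary units, assumed)

    for inhibitor in (0.0, 5.0, 50.0):
        # Competitive inhibition raises the apparent Kd; the window shifts to
        # higher target concentrations but keeps its ~81-fold width.
        kd_app = kd * (1.0 + inhibitor / ki)
        lo, hi = useful_window(kd_app)
        print(f"[I] = {inhibitor:5.1f}  useful window: {lo:8.3f} to {hi:8.3f}  "
              f"(signal {fraction_bound(lo, kd_app):.0%} to {fraction_bound(hi, kd_app):.0%})")

Combining sensor variants or inhibitor concentrations with different apparent dissociation constants is one way to extend the overall dynamic range; run the other way, mechanisms such as sequestration can sharpen the response and narrow the window, which is the ‘tune, extend, and narrow’ of the paper’s title.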

“Besides the obvious applications in biosensor design, I think this work will pave the way for important applications related to the administration of cancer-targeting drugs, an area of increasing importance,” says Professor Kevin Plaxco. “The ability to accurately regulate biosensor or nanomachine’s activities will greatly increase their efficiency.”

The funders for this project are (from the news release),

… the Italian Ministry of Universities and Research (MIUR), the Bill & Melinda Gates Foundation Grand Challenges Explorations program, the European Commission Marie Curie Actions program, the U.S. National Institutes of Health, and the Fonds de recherche du Québec Nature et Technologies.

Here’s a citation and a link to the research paper,

Using Distal-Site Mutations and Allosteric Inhibition To Tune, Extend, and Narrow the Useful Dynamic Range of Aptamer-Based Sensors by Alessandro Porchetta, Alexis Vallée-Bélisle, Kevin W. Plaxco, and Francesco Ricci. J. Am. Chem. Soc., 2012, 134 (51), pp 20601–20604 DOI: 10.1021/ja310585e Publication Date (Web): December 6, 2012

Copyright © 2012 American Chemical Society

This article is behind a paywall.

One final note: Alexis Vallée-Bélisle has been mentioned here before in the context of a ‘Grand Challenges Canada programme’ (not the Bill and Melinda Gates ‘Grand Challenges’) announcement of several fundees in my Nov. 22, 2012 posting. That funding appears to be for a different project.

Teaching physics visually

Art/science news is usually about a scientist using their own art or collaborating with an artist to produce pieces that engage the public. This particular May 23, 2012 news item by Andrea Estrada on the physorg.com website offers a contrast: it highlights a teaching technique that integrates the visual arts into physics instruction for physics students,

Based on research she conducted for her doctoral dissertation several years ago, Jatila van der Veen, a lecturer in the College of Creative Studies at UC [University of  California] Santa Barbara and a research associate in UC Santa Barbara’s physics department, created a new approach to introductory physics, which she calls “Noether before Newton.” Noether refers to the early 20th-century German mathematician Emmy Noether, who was known for her groundbreaking contributions to abstract algebra and theoretical physics.

Using arts-based teaching strategies, van der Veen has fashioned her course into a portal through which students not otherwise inclined might take the leap into the sciences — particularly physics and mathematics. Her research appears in the current issue of the American Educational Research Journal, in a paper titled “Draw Your Physics Homework? Art as a Path to Understanding in Physics Teaching.”

The May 22, 2012 press release on the UC Santa Barbara website provides this detail about van der Veen’s course,

While traditional introductory physics courses focus on 17th-century Newtonian mechanics, van der Veen takes a contemporary approach. “I start with symmetry and contemporary physics,” she said. “Symmetry is the underlying mathematical principle of all physics, so this allows for several different branches of inclusion, of accessibility.”

Much of van der Veen’s course is based on the principles of “aesthetic education,” an approach to teaching formulated by the educational philosopher Maxine Greene. Greene founded the Lincoln Center Institute, a joint effort of Teachers College, Columbia University, and Lincoln Center. Van der Veen is quick to point out, however, that concepts of physics are at the core of her course. “It’s not simply looking at art that’s involved in physics, or looking at beautiful pictures of galaxies, or making fractal art,” she said. “It’s using the learning modes that are available in the arts and applying them to math and physics.”

Taking a visual approach to the study of physics is not all that far-fetched. “If you read some of Albert Einstein’s writings, you’ll see they’re very visual,” van der Veen said. “And in some of his writings, he talks about how visualization played an important part in the development of his theories.”

Van der Veen has taught her introductory physics course for five years, and over that time has collected data from one particular homework assignment she gives her students: She asks them to read an article by Einstein on the nature of science, and then draw their understanding of it. “I found over the years that no one ever produced the same drawing from the same article,” she said. “I also found that some students think very concretely in words, some think concretely in symbols, some think allegorically, and some think metaphorically.”

Adopting arts-based teaching strategies does not make van der Veen’s course any less rigorous than traditional introductory courses in terms of the abstract concepts students are required to master. It creates a different, more inclusive way of achieving the same end.

I went to look at van der Veen’s webpage on the UC Santa Barbara website to find a link to this latest article (open access) of hers and some of her other projects. I have taken a brief look at the Draw your physics homework? article (it is 53 pp.) and found these images on p. 29 (PDF) illustrating her approach,

Figure 5. Abstract-representational drawings. 5a (left): female math major, first year; 5b (right): male math major, third year. Used with permission. (downloaded from the American Educational Research Journal, vol. 49, April 2012)

Van der Veen offers some context on the page preceding the image, p. 28,

Two other examples of abstract-representational drawings are shown in Figure 5. I do not have written descriptions, but in each case I determined that each student understood the article by means of verbal explanation. Figure 5a was drawn by a first-year math major, female, in 2010. She explained the meaning of her drawing as representing Einstein’s layers from sensory input (shaded ball at the bottom), to secondary layer of concepts, represented by the two open circles, and finally up to the third level, which explains everything below with a unified theory. The dashes surrounding the perimeter, she told me, represent the limit of our present knowledge. Figure 5b was drawn by a third-year male math major. He explained that the brick-like objects in the foreground are sensory perceptions, and the shaded portion in the center of the drawing, which appears behind the bricks, is the theoretical explanation which unifies all the experiences.

I find the reference to Einstein and visualization compelling in light of the increased interest (as I perceive it) in visualization currently occurring in the sciences.