Monthly Archives: September 2012

Memristors and transparent electronics in Oregon

The Sept. 14, 2012 news release from Oregon State University (OSU) features some very careful wording around the concept of a memristor.  First, here’s the big picture news,

The transparent electronics that were pioneered at Oregon State University may find one of their newest applications as a next-generation replacement for some uses of non-volatile flash memory, a multi-billion dollar technology nearing its limit of small size and information storage capacity.

Researchers at OSU have confirmed that zinc tin oxide, an inexpensive and environmentally benign compound, has significant potential for use in this field, and could provide a new, transparent technology where computer memory is based on resistance, instead of an electron charge.

Here’s where it starts to get interesting,

This resistive random access memory, or RRAM, is referred to by some researchers as a “memristor.”  [emphasis mine] Products using this approach could become even smaller, faster and cheaper than the silicon transistors that have revolutionized modern electronics – and transparent as well.

Transparent electronics offer potential for innovative products that don’t yet exist, like information displayed on an automobile windshield, or surfing the web on the glass top of a coffee table.

“Flash memory has taken us a long way with its very small size and low price,” said John Conley, a professor in the OSU School of Electrical Engineering and Computer Science. “But it’s nearing the end of its potential, and memristors are a leading candidate to continue performance improvements.”

Memristors have a simple structure, are able to program and erase information rapidly, and consume little power. They accomplish a function similar to transistor-based flash memory, but with a different approach. Whereas traditional flash memory stores information with an electrical charge, RRAM accomplishes this with electrical resistance. Like flash, it can store information as long as it’s needed.
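Since the excerpt describes storing a bit as a resistance rather than a charge, here's a minimal numerical sketch of that idea using the well-known HP Labs linear-drift memristor model. The parameter values are arbitrary placeholders, and this is an illustration of memristive behaviour in general, not a model of the OSU zinc tin oxide device.

```python
import math

# Toy memristor based on the HP Labs linear-drift model (Strukov et al., 2008).
# Illustrative only -- the parameter values are arbitrary, not taken from the
# OSU zinc tin oxide work described above.
R_ON, R_OFF = 100.0, 16_000.0    # fully doped / undoped resistances (ohms)
D = 10e-9                        # device thickness (m)
MU = 1e-14                       # dopant mobility (m^2 s^-1 V^-1)

def simulate(v_of_t, t_end, dt=1e-6, w=0.1):
    """Integrate the state variable w (doped fraction, 0..1) under a drive
    voltage and return the final state.  The state persists when v = 0,
    which is what makes the device usable as non-volatile memory."""
    t = 0.0
    while t < t_end:
        m = R_ON * w + R_OFF * (1.0 - w)   # instantaneous memristance
        i = v_of_t(t) / m                  # current through the device
        w += MU * R_ON / D**2 * i * dt     # linear dopant drift
        w = min(max(w, 0.0), 1.0)          # clamp to physical bounds
        t += dt
    return w

# A positive "write" voltage shifts the state (lowering the resistance) ...
w1 = simulate(lambda t: 1.0, t_end=5e-3)
# ... and with the voltage removed, the state -- the stored bit -- is retained.
w2 = simulate(lambda t: 0.0, t_end=5e-3, w=w1)
print(f"state after write: {w1:.6f}, state after power-off: {w2:.6f}")
```

The key contrast with flash is visible in the last two lines: information lives in the resistance state `w`, and holding it costs no current at all.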

Flash memory computer chips are ubiquitous in almost all modern electronic products, ranging from cell phones and computers to video games and flat panel televisions.

I like how they note that some scientists call these devices memristors, thereby sidestepping at least some of the controversy as to what exactly constitutes a memristor (my latest piece, which mentions a critique of the memristor concept, was posted Sept. 6, 2012).

The news release gets a little confusing here,

Some of the best opportunities for these new amorphous oxide semiconductors are not so much for memory chips, but with thin-film, flat panel displays, researchers say. [emphasis mine] Private industry has already shown considerable interest in using them for the thin-film transistors that control liquid crystal displays, and one compound approaching commercialization is indium gallium zinc oxide.

But indium and gallium are getting increasingly expensive, and zinc tin oxide – also a transparent compound – appears to offer good performance with lower cost materials. The new research also shows that zinc tin oxide can be used not only for thin-film transistors, but also for memristive memory, Conley said, an important factor in its commercial application.

More work is needed to understand the basic physics and electrical properties of the new compounds, researchers said.

There was no mention of amorphous oxide semiconductors until the portion I've highlighted. If I've understood what follows correctly, there's a new class of semiconductor for use in thin-film applications (transparent electronics), the amorphous oxide semiconductor, and the most promising material for commercial purposes is indium gallium zinc oxide. The other oxide mentioned in the excerpt, zinc tin oxide, can be used both for thin-film and memristive applications.

This memristor story has certainly moved in some interesting directions as it continues to develop.

Hands off the bubbles in my boiling water!

The discovery that boiling water bubbles was important to me. I'd never really thought about it until now, when researchers at Northwestern University threatened to take my bubbles away, metaphorically speaking. From the Sept. 13, 2012 news item on ScienceDaily,

Every cook knows that boiling water bubbles, right? New research from Northwestern University turns that notion on its head.

“We manipulated what has been known for a long, long time by using the right kind of texture and chemistry to prevent bubbling during boiling,” said Neelesh A. Patankar, professor of mechanical engineering at Northwestern’s McCormick School of Engineering and Applied Science and co-author of the study.

This discovery could help reduce damage to surfaces, prevent bubbling explosions and may someday be used to enhance heat transfer equipment, reduce drag on ships and lead to anti-frost technologies.

The Sept. 13, 2012 news release from Northwestern University's McCormick School of Engineering (which originated the news item) provides details,

This phenomenon is based on the Leidenfrost effect. In 1756 the German scientist Johann Leidenfrost observed that water drops skittered on a sufficiently hot skillet, bouncing across the surface of the skillet on a vapor cushion or film of steam. The vapor film collapses as the surface falls below the Leidenfrost temperature. When the water droplet hits the surface of the skillet, at 100 degrees Celsius, boiling temperature, it bubbles.

To stabilize a Leidenfrost vapor film and prevent bubbling during boiling, Patankar collaborated with Ivan U. Vakarelski of King Abdullah University of Science and Technology, Saudi Arabia. Vakarelski led the experiments and Patankar provided the theory. The collaboration also included Derek Chan, professor of mathematics and statistics from the University of Melbourne in Australia.

In their experiments, the stabilization of the Leidenfrost vapor film was achieved by making the surface of tiny steel spheres very water-repellant. The spheres were sprayed with a commercially available hydrophobic coating — essentially self-assembled nanoparticles — combined with other water-hating chemicals to achieve the right amount of roughness and water repellency. At the correct length scale this coating created a surface texture full of tiny peaks and valleys.

When the steel spheres were heated to 400 degrees Celsius and dropped into room temperature water, water vapors formed in the valleys of the textured surface, creating a stable Leidenfrost vapor film that did not collapse once the spheres cooled to the temperature of boiling water. In the experiments, researchers completely avoided the bubbly phase of boiling.

To contrast, the team also coated tiny steel spheres with a water-loving coating, heated the objects to 700 degrees Celsius, dropped them into room temperature water and observed that the Leidenfrost vapor collapsed with a vigorous release of bubbles.

The scientists have provided a video illustrating their work,

This movie shows the cooling of 20 mm hydrophilic (left) and superhydrophobic (right) steel spheres in 100 C water. The spheres’ initial temperature is about 380 C. The bubbling phase of boiling is completely eliminated for steel spheres with superhydrophobic coating. (from Vimeo, http://vimeo.com/49391913)

I understand there are advantages to not having bubbles in hot water but it somehow seems wrong. I’ve given up a lot over the years: gravity, boundaries between living and non-living (that was a very big thing to give up), and other distinctions that I have made based on traditional science but, today, this is one step too far.

It may seem silly but that memory of my mother explaining that you identify boiling water by its bubbles is important to me. It was one of my first science lessons. I imagine I will recover from this moment but it does remind me of how challenging it can be when your notions of reality/normalcy are challenged by various scientific endeavours. The process can get quite exhausting as you keep recalibrating everything you ‘know’ all the time.

Quantum-Nano Centre (QNC) opening Sept. 21, 2012 at the University of Waterloo (Canada)

Gary Thomas’ Sept. 13, 2012 news item for Azonano provides some facts about the new centre at the University of Waterloo,

The University of Waterloo has reported that the Mike & Ophelia Lazaridis Quantum-Nano Centre (QNC) will be officially opened on September 21, 2012, in the new building at the center of the university campus.

The Waterloo Institute for Nanotechnology (WIN) and the Institute for Quantum Computing (IQC) will share QNC, a 285,000-square-foot facility for future innovation in nanotechnology and quantum information. QNC will provide the equipment and collaborative opportunities to researchers to carry out pioneering experiments, explore new materials and processes and develop advanced technologies.

You can find more details on Azonano or in the detailed Sept. 11, 2012 University of Waterloo news release by Christian Aagaard,

It’s a curious building for curious people, supported by an entrepreneur driven by curiosity.

The Mike & Ophelia Lazaridis Quantum-Nano Centre on the main campus of the University of Waterloo is ready for its starring role — a gateway to a future shaped by incredibly small devices, advanced materials and powerful technologies based on the laws of quantum mechanics.

Magnetically cleaning up oil spills

Researchers at the Massachusetts Institute of Technology (MIT) have developed a promising magnet-based technique for cleaning up oil spills that is more efficient and more environmentally friendly than current methods.

ETA Sept. 14, 2012: For some reason the embedded video keeps disappearing, so here’s the link: http://youtu.be/ZaP7XOjsCHQ

The Sept. 12, 2012 news item on Nanowerk notes,

The researchers will present their work at the International Conference on Magnetic Fluids in January. Shahriar Khushrushahi, a postdoc in MIT’s Department of Electrical Engineering and Computer Science, is lead author on the paper, joined by Markus Zahn, the Thomas and Gerd Perkins Professor of Electrical Engineering, and T. Alan Hatton, the Ralph Landau Professor of Chemical Engineering. The team has also filed two patents on its work.

In the MIT researchers’ scheme, water-repellent ferrous nanoparticles would be mixed with the oil, which could then be separated from the water using magnets. The researchers envision that the process would take place aboard an oil-recovery vessel, to prevent the nanoparticles from contaminating the environment. Afterward, the nanoparticles could be magnetically removed from the oil and reused.

Larry Hardesty’s Sept. 12, 2012 MIT news release, which originated the news item, provides detail about the standard technique for using magnetic nanoparticles and the new technique,

According to Zahn, there’s a good deal of previous research on separating water and so-called ferrofluids — fluids with magnetic nanoparticles suspended in them. Typically, these involve pumping a water-and-ferrofluid mixture through a channel, while magnets outside the channel direct the flow of the ferrofluid, perhaps diverting it down a side channel or pulling it through a perforated wall.

This approach can work if the concentration of the ferrofluid is known in advance and remains constant. But in water contaminated by an oil spill, the concentration can vary widely. Suppose that the separation system consists of a branching channel with magnets along one side. If the oil concentration were zero, the water would naturally flow down both branches. By the same token, if the oil concentration is low, a lot of the water will end up flowing down the branch intended for the oil; if the oil concentration is high, a lot of the oil will end up flowing down the branch intended for the water.


The MIT researchers vary the conventional approach in two major ways: They orient their magnets perpendicularly to the flow of the stream, not parallel to it; and they immerse the magnets in the stream, rather than positioning them outside of it.

The magnets are permanent magnets, and they’re cylindrical. Because a magnet’s magnetic field is strongest at its edges, the tips of each cylinder attract the oil much more powerfully than its sides do. In experiments the MIT researchers conducted in the lab, the bottoms of the magnets were embedded in the base of a reservoir that contained a mixture of water and magnetic oil; consequently, oil couldn’t collect around them. The tops of the magnets were above water level, and the oil shot up the sides of the magnets, forming beaded spheres around the magnets’ ends.

The design is simple, but it provides excellent separation between oil and water. Moreover, Khushrushahi says, simplicity is an advantage in a system that needs to be manufactured on a large scale and deployed at sea for days or weeks, where electrical power is scarce and maintenance facilities limited.

…

In their experiments, the MIT researchers used a special configuration of magnets, called a Halbach array, to extract the oil from the tops of the cylindrical magnets. When attached to the cylinders, the Halbach array looks kind of like a model-train boxcar mounted on pilings. The magnets in a Halbach array are arranged so that on one side of the array, the magnetic field is close to zero, but on the other side, it’s roughly doubled. In the researchers’ experiments, the oil in the reservoir wasn’t attracted to the bottom of the array, but the top of the array pulled the oil off of the cylindrical magnets.
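The Halbach property described above (field near zero on one side, roughly doubled on the other) can be checked numerically with idealized point dipoles. This is a sketch in arbitrary units, with made-up geometry, not a model of the MIT apparatus.

```python
import math

# Numerical sanity check of the Halbach effect: in an array of magnets whose
# magnetization rotates 90 degrees from one element to the next, the fields
# reinforce on one side of the array and largely cancel on the other.
# Idealized point dipoles, arbitrary units -- an illustration, not a model of
# the MIT experiment.

def dipole_field(mx, my, rx, ry):
    """Field of a point dipole m evaluated at displacement r (constants dropped)."""
    r = math.hypot(rx, ry)
    ux, uy = rx / r, ry / r            # unit vector toward the field point
    dot = mx * ux + my * uy
    return ((3 * dot * ux - mx) / r**3, (3 * dot * uy - my) / r**3)

def total_field(px, py):
    """Superpose five dipoles at x = 0..4 whose moments rotate 90 deg each step."""
    bx = by = 0.0
    for k in range(5):
        theta = k * math.pi / 2
        fx, fy = dipole_field(math.cos(theta), math.sin(theta), px - k, py)
        bx += fx
        by += fy
    return math.hypot(bx, by)

strong = total_field(2.0, 1.0)     # one unit above the array's midpoint
weak = total_field(2.0, -1.0)      # one unit below
print(f"strong side |B| = {strong:.2f}, weak side |B| = {weak:.2f}")
```

With this five-element arrangement the field magnitude above the array comes out roughly an order of magnitude larger than below it, which is the asymmetry the researchers exploit to pull oil off the cylinder tops from one side only.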

While this work is promising, there are still a lot of issues to be addressed including how water will be removed from the recovered oil (oil and water can mix to some degree depending on their relative densities).

Lumerical’s latest INTERCONNECT product and statistical variations in one or more circuit elements

Vancouver (Canada)-based Lumerical Solutions’ Sept. 12, 2012 (?) product announcement for its INTERCONNECT 2.0 release notes some improvements and new features,

Release 2.0 of INTERCONNECT enables PIC designers to more quickly explore the role of circuit architecture and statistical component variations on overall circuit performance.  New features include an improved frequency-domain calculation engine which can compute circuit performance significantly faster, a custom s-parameter element which can accept measured or simulated data of arbitrary complexity including complete characterization data for multimode, many-port elements, and a yield calculator that produces Monte Carlo performance estimates based on statistical variations of one or more circuit parameters.
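For readers unfamiliar with the term, a Monte Carlo yield calculator of the kind described works by randomly perturbing element parameters, re-evaluating the circuit, and reporting the fraction of trials that meet spec. Here's a minimal sketch of that general technique applied to a deliberately simple stand-in circuit (an RC low-pass filter); it is not Lumerical's API, and all the numbers are invented for illustration.

```python
import math
import random

# Minimal Monte Carlo yield estimate: perturb each element's value with a
# normal distribution, recompute the circuit's figure of merit, and count the
# fraction of trials that land inside spec.  A toy RC filter stands in for a
# photonic circuit; none of this reflects Lumerical's actual tool.
random.seed(42)

R_NOM, C_NOM = 1_000.0, 159e-9     # nominal values -> cutoff near 1 kHz
SIGMA = 0.05                        # 5% standard deviation per element
SPEC = (900.0, 1100.0)              # acceptable cutoff range (Hz)

def cutoff_hz(r, c):
    """Cutoff frequency of a first-order RC low-pass filter."""
    return 1.0 / (2.0 * math.pi * r * c)

trials = 10_000
passed = 0
for _ in range(trials):
    r = random.gauss(R_NOM, SIGMA * R_NOM)   # statistical variation, element 1
    c = random.gauss(C_NOM, SIGMA * C_NOM)   # statistical variation, element 2
    if SPEC[0] <= cutoff_hz(r, c) <= SPEC[1]:
        passed += 1

yield_estimate = passed / trials
print(f"estimated yield: {yield_estimate:.1%}")
```

The same loop structure scales to any circuit model: the only circuit-specific pieces are the parameter distributions and the performance function inside the loop.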

As the company seems to do on a regular basis, they are offering a free 30-day evaluation period for the product.

The Sept. 12, 2012 product announcement offers some insight into which users might find this product most useful along with some testimonials from the product’s current users,

INTERCONNECT has been engineered, since the original product concept, to support both device and circuit designers.  Device designers are interested in component dimensions and material compositions, often with the goal of designing new proprietary circuit elements that work well with adjacent components.  Circuit designers are focused on achieving desired target performance and are often only interested in using element-level transfer functions and compact models to predict system behavior. INTERCONNECT 2.0’s yield calculator, which accepts statistical variations at the element level whether they apply to physical or phenomenological parameters, continues to support both designer profiles.

Professor Lukas Chrostowski of the University of British Columbia, and Director of the NSERC [Natural Sciences and Engineering Research Council] CREATE Si-EPIC training program, believes that device designers will benefit from INTERCONNECT’s integration with MODE Solutions and FDTD Solutions.  “The software can be used to design devices such as ring resonators, waveguide Bragg gratings, arrayed waveguide gratings, and fibre grating couplers, and to study the performance of components within simple circuits,” he said.  “For example, reflections from components such as grating couplers often introduce undesired ripple in the optical spectrum, and this can be simulated using INTERCONNECT.”

As photonic integrated circuits are complex and require multi-physics simulation, the ability to create hierarchically-defined elements from single devices like a modulator to entire transmitter subsystems is very important.  Being able to experimentally verify these devices and subsystems and incorporate that data into a single design environment together with statistical variations at every level of the design hierarchy promises to streamline the design process.

“In response to ongoing requests for a framework that goes beyond idealized representations, INTERCONNECT 2.0 can incorporate statistical variations of geometrical or compact-model parameters,” according to Dr. Jackson Klein, Senior Product Manager of INTERCONNECT. “Together with INTERCONNECT’s hierarchical model definition, proprietary component-level IP can be easily incorporated into more sophisticated circuit models of arbitrary complexity.”

INTERCONNECT’s ability to model multimode, many-port circuits of arbitrary complexity and physical sophistication means it will play a critical role as designers explore circuit designs incorporating proprietary elements and ever-increasing component count.  “We look forward to our ongoing discussions with industry and foundry representatives, public and private companies, and government laboratories to refine INTERCONNECT’s capabilities so that it can best serve the emerging needs of the photonic integrated circuit design community,” says Dr. James Pond, Lumerical’s Chief Technology Officer.

University of Delaware Professor and Director of OpSIS Michael Hochberg has extensive experience working with Lumerical.  “We’re very happy with their tools and investment in the INTERCONNECT product,” he said.  “At OpSIS, our goal is to provide to anyone in the world with advanced silicon photonics processes for their own projects, while only paying for the wafer area that they use.  Doing schematic-driven design is really critical for making complex photonic circuits, and to make it easy for our users to lay out and simulate systems-on-chip we are now working with Lumerical to integrate OpSIS device libraries with their tools.”

The company has been quite active lately; the last product announcement was mentioned in my July 6, 2012 posting about Lumerical’s FDTD solution.

Electricity without a current

My imagination fails at the thought of electricity without a current. Luckily, there’s a consortium of scientists at Finland’s Tampere University of Technology (TUT) who have no trouble with their imaginations, according to the Sept. 12, 2012 news item on Nanowerk (Note: I have removed a link from the following excerpt),

The Academy of Finland has granted €1.6 million to a consortium based at Tampere University of Technology (TUT) under the “Programmable Materials” funding scheme. The project runs from 1 September 2012 to 31 August 2016 and is entitled “Photonically Addressed Zero Current Logic through Nano-Assembly of Functionalised Nanoparticles to Quantum Dot Cellular Automata” ( PhotonicQCA).

The Sept. 12, 2012 news release from TUT, which originated the news item, explains the ideas and work that support the notion of electricity without current,

The key idea behind the project is the so-called quantum dot cellular automaton (QCA). In QCAs, pieces of semiconductor so small that single electronic charges can be measured and manipulated are arranged into domino like cells. Like dominos, these cells can be arranged so that the position of the charges in one cell affects the position of the charges in the next cell, which allows making logical circuits out of these “quantum dominos”. But, no charge flows from one cell to the next, i.e. no current. This, plus the extremely small size of QCAs, means that they could be used to make electronic circuits at densities and speeds not possible now. However, realisation of the dots and cells and making electrical connections to them has been a huge challenge.
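The "quantum dominos" logic the release describes can be sketched abstractly: the canonical QCA gate is a three-input majority vote on cell polarizations, and pinning one input to 0 or 1 turns it into AND or OR. The code below is an idealized ground-state picture of that logic, with function names I've made up for illustration, not a physical simulation of real quantum dots.

```python
# Toy illustration of how QCA cells compute without current flow.  Each cell
# holds a polarization (0 or 1); electrostatic coupling makes a cell settle
# into the state the majority of its neighbors hold, so configurations --
# not charges -- propagate through the circuit.  Idealized logic-level
# sketch only; the hypothetical function names are mine.

def majority(a, b, c):
    """A device cell adopts the polarization most of its three neighbors hold."""
    return 1 if a + b + c >= 2 else 0

def qca_and(a, b):
    return majority(a, b, 0)   # one driver cell pinned to polarization 0

def qca_or(a, b):
    return majority(a, b, 1)   # one driver cell pinned to polarization 1

def qca_wire(bit, length=8):
    """A line of cells: each copies its upstream neighbor's polarization.
    No charge moves from cell to cell -- only the configuration."""
    cell = bit
    for _ in range(length):
        cell = majority(cell, cell, bit)
    return cell

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", qca_and(a, b), "OR:", qca_or(a, b))
```

The point of the sketch is the absence of any current variable: every "computation" is just neighboring cells relaxing into a consistent arrangement of charge positions.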

Professors Donald Lupo from Department of Electronics, Mircea Guina and Tapio Niemi from Optoelectronics Research Centre (ORC), and Nikolai Tkachenko and Helge Lemmetyinen from Department of Chemistry and Bioengineering, want to investigate a completely new approach. They want to attach tailor-made molecules, optical nanoantennas, to the quantum dots, which can inject a charge into a dot or enable charge transfer between the dots when light of the right wavelength shines on them. This concept will be combined with the expertise at TUT’s Optoelectronics Research Centre concerning “site-specific epitaxy”, i.e. growing the quantum dots in the right place using nanofabrication techniques, which would enable a solid-state technology platform compatible with standard electronic circuits. If this works, then someday QCAs could be written and read with light.

Project coordinator, Professor Donald Lupo says: “As far as we can tell, no one has ever tried anything like this before. It’s a completely new idea. It was our excellent inter-departmental communication that identified a unique combination of know-how that let us come up with this concept. It’s highly risky because of many technological challenges, but the potential is amazing; being able to get rid of electrical connections and write and read nanoelectronic circuits using only light would be a huge breakthrough”.

Reading the Programmable Materials page on the Academy of Finland website provided some clues about what they hope to achieve with this ‘electricity’ project,

The FinnSight 2015 report published in 2006 underscores the fact that materials research is a cross-disciplinary exercise: new materials are increasingly being developed on a multidisciplinary platform. The report also urges Finnish materials research to step up its efforts to explore the more advanced properties of new materials that are still partly unknown.

Most new materials today are typically static by nature. They are composed of components that have a specific function or quality, but they do not respond to their environment as such. Programmable materials, by contrast, are composed of components that respond in a specific, programmed way to environmental stimuli and signals. Depending on the initial state or code of these components, it is possible to produce various complex, even macroscopic, structures in a controlled way.

Programmable materials represent a new emerging research field in which Finland can play a pioneering role. The programmable properties of different materials are continuing to develop with advances in such fields as nano- and biotechnology, and programmable materials may completely revolutionise applications of functional materials.

Materials programming is an emerging, all-new field of research. The aim of this programme is to work with the best international research teams and solidify Finland’s position at the international forefront of research. The strongest countries in this field include the United States, Japan, Russia, India and certain European countries. In addition, China has a strong emerging materials research field.

I threw in that last paragraph because I find their analysis of the international scene quite interesting and notice they list three of the BRICS (Brazil, Russia, India, China, and South Africa) countries as leaders in this emergent field.

Getting back to this specific ‘electricity’ project, it sounds as if they’re working on an electrical component that could be made to operate when light is shone on it, in a process reminiscent of photosynthesis (Wikipedia essay on photosynthesis), where a plant converts light into chemical energy.

Sunscreen from coral

It’s a fascinating project they’re working on at King’s College London (KCL): converting an amino acid found in coral into a sunscreen for humans. The researchers have just signed an agreement to work with skincare company Aethic, but the research was first discussed when it was still at the laboratory stage in an Aug. 2011 video produced by KCL,

The Sept. 12, 2012 news item on physorg.com makes the latest announcement about the project,

King’s College London has entered into an agreement with skincare company Aethic to develop the first sunscreen based on MAA’s (mycosporine-like amino acids), produced by coral.

It was last year that a team led by Dr Paul Long at King’s discovered how the naturally-occurring MAA’s were produced. Algae living within coral make a compound that is transported to the coral, which then modifies it into a sunscreen for the benefit of both the coral and the algae. Not only does this protect them both from UV damage, but fish that feed on the coral also benefit from this sunscreen protection.

The KCL Sept. 11, 2012 news release (which originated the news item) notes,

The next phase of development is for the researchers to work with Professor Antony Young and colleagues at the St John’s Institute of Dermatology at King’s, to test the efficacy of the compounds using human skin models.

Aethic’s Sôvée sunscreen was selected as the best ‘host’ product for the compound because of its existing broad-spectrum UVA/UVB and photo-stability characteristics and scientifically proven ecocompatibility credentials.

Dr Paul Long, Reader in Pharmacognosy at King’s Institute of Pharmaceutical Science, said: “While MAA’s have a number of other potential applications, human sunscreen is certainly a good place to begin proving the compound’s features. If our further studies confirm the results we are expecting, we hope that we will be able to develop a sunscreen with the broadest spectrum of protection.  Aethic has the best product and philosophy with which to proceed this exciting project.” [emphasis mine]

I went to the Aethic website and found this on the Be Aethic page,

Being Aethic means you are one with nature through our products. It means your skin lives better, feels better and looks better.

It means you do too.

Your skin is your largest organ. It’s worth looking after from within, with a good diet, and from the outside by protecting it from daily life and the sun’s harmful rays, by keeping it nourished.

Aethic Sôvée has the most photostable sun filters – anywhere. It has organic moisturisers. It contains a skin anti-oxidant. We developed this formula to treat your skin like royalty. And nature will love you for it as well.

People have been telling us that doing less damage to your skin and the ocean are amazing things to do together

Be loved by nature even more – share this with your friends. The more people you tell, the bigger the difference you make. Here’s why.

Deep down, most people probably suspected that the many ingredients they put on their skin from other sunscreens, must do some harm somewhere. Sure enough, in 2008 it was proven by Prof Roberto Danovaro, from Marche Polytechnic University in Italy, that these products can seriously damage coral. He has since discovered they do damage to clams too.

When you use Aethic Sôvée, you know that you’re leaving nothing behind to harm the ocean. In fact, with your contribution to The Going Blue Foundation’s coral nursery fund, you are going positive. Marine Positive – the certification Aethic Sôvée has received.

Unfortunately this copy is a bit heavy on the sanctimonious side, but the possibility of minimizing one’s negative impact on the world’s oceans while preventing damage to skin can’t be ignored.

In any event, the information about the sunscreen making its way up the food chain and benefitting predators amused me when I considered the possibility of a bear or cougar benefitting should they happen to eat me while I’m wearing this new sunscreen. Given that this solution is not based on metal oxides, perhaps it will find more favour with the ‘anti-nanosunscreen’ crowd.

NERCS—a great nano acronym (Nanosystems ERCs)—engineering research centers

It’s a bit complicated, isn’t it? Here’s the straight scoop from the Sept. 11, 2012 news item on Nanowerk,

The National Science Foundation (NSF) recently awarded $55.5 million to university consortia to establish three new Engineering Research Centers (ERCs) that will advance interdisciplinary nanosystems research and education in partnership with industry.

Over the next five years, these Nanosystems ERCs, or NERCS, will advance knowledge and create innovations that address significant societal issues, such as the human health and environmental implications of nanotechnology. At the same time, they will advance the competitiveness of U.S. industry. The centers will support research and innovation in electromagnetic systems, mobile computing and energy technologies, nanomanufacturing, and health and environmental sensing.

“The Nanosystems ERCs will build on more than a decade of investment and discoveries in fundamental nanoscale science and engineering,” said Thomas Peterson, NSF’s assistant director for engineering. “Our understanding of nanoscale phenomena, materials and devices has progressed to a point where we can make significant strides in nanoscale components, systems and manufacturing.”

Here are some specifics about the three new centers (from the news item),

The NSF Nanosystems Engineering Research Center for Advanced Self-Powered Systems of Integrated Sensors and Technology (ASSIST), led by North Carolina State University, will create self-powered wearable systems that simultaneously monitor a person’s environment and health, in search of connections between exposure to pollutants and chronic diseases.

The NSF Nanosystems Engineering Research Center for Nanomanufacturing Systems for Mobile Computing and Mobile Energy Technologies (NASCENT), led by the University of Texas at Austin, will pursue high-throughput, reliable, and versatile nanomanufacturing process systems, and will demonstrate them through the manufacture of mobile nanodevices.

The NSF Nanosystems Engineering Research Center for Translational Applications of Nanoscale Multiferroic Systems (TANMS), led by the University of California Los Angeles, will seek to reduce the size and increase the efficiency of components and systems whose functions rely on the manipulation of either magnetic or electromagnetic fields.

The NERCs will be a part of NSF’s contributions to the National Nanotechnology Initiative, which is a government-wide activity designed to ensure that investments in this area are made in a coordinated and timely manner and to accelerate the pace of revolutionary nanotechnology discoveries. A long-term view for nanotechnology research and education needs is documented in the 2010 NSF/WTEC report, “Nanotechnology Research Directions for Societal Needs in 2020”.

You can find the 614 pp. “Nanotechnology Research Directions for Societal Needs in 2020” PDF written in 2010 here.