Monthly Archives: July 2014

Nanotechnology Standards Database updated by American National Standards Institute

The Nanotechnology Standards Database announced in July 2013 has been updated according to a July 10, 2014 announcement. A brief review of the original project seems in order before any discussion of the updates. From my July 30, 2013 posting,

The ANSI-NSP (American National Standards Institute Nanotechnology Standards Panel) Nanotechnology Standards Database announced in a July 30, 2013 news item on Nanowerk is in the early stages,

The American National Standards Institute Nanotechnology Standards Panel (ANSI-NSP) is pleased to announce the launch of a new database compiling information about nanotechnology-related standards and affiliated activities. The creation of the database, which was first discussed during a February 2013 meeting of the ANSI-NSP in Washington, DC, is part of a larger ongoing effort by the ANSI-NSP and its members and partners to bolster the visibility of existing and in-development nanomaterials and nanotechnology guidance documents, reference materials, and standards.

Note: this NSP Standards Database is not for uploading or collecting the standards or documents themselves, but rather seeking relevant information regarding such documents. If you wish to provide a link for users to access specific documents or standards, a form field has been provided.

Here’s what ANSI-NSP has done to update its Nanotechnology Standards Database this year. From the July 10, 2014 ANSI-NSP announcement,

The new updates to the database include the creation of a single data entry form designed to allow standards developers and other organizations to more easily enter information. This change allows for a straightforward transition for those documents included in the database that change status from unpublished to published. In addition, the database has added a government-focused section, allowing representatives of governmental bodies to post policy and position documents that could be of interest to the greater nanotechnology community.

The announcement also includes a call for more submissions,

To continue growing the database and optimize it for the needs of the user community, ANSI-NSP encourages SDOs (standards developing organizations), government bodies, and other relevant organizations to contribute information about their current and in-progress documents and standards. Organizations are required to register for free on the database site before submitting their information, to ensure relevancy and accuracy. The database includes information from a wide range of organizations from around the world that develop standards and other similar documents, and is accessible to a global audience of individuals and groups interested in learning more about nanotechnology standardization.

You can follow links to the database and other relevant sites from the Nanotechnology Standards Database landing page on the ANSI website.

Nanophotonics transforms Raman spectroscopy at Rice University (US)

This new technique for sensing molecules is intriguing. From a July 15, 2014 news item on Azonano,

Nanophotonics experts at Rice University [Texas, US] have created a unique sensor that amplifies the optical signature of molecules by about 100 billion times. Newly published tests found the device could accurately identify the composition and structure of individual molecules containing fewer than 20 atoms.

The new imaging method, which is described this week in the journal Nature Communications, uses a form of Raman spectroscopy in combination with an intricate but mass-reproducible optical amplifier. Researchers at Rice’s Laboratory for Nanophotonics (LANP) said the single-molecule sensor is about 10 times more powerful than previously reported devices.

A July 15, 2014 Rice University news release (also on EurekAlert), which originated the news item, provides more detail about the research,

“Ours and other research groups have been designing single-molecule sensors for several years, but this new approach offers advantages over any previously reported method,” said LANP Director Naomi Halas, the lead scientist on the study. “The ideal single-molecule sensor would be able to identify an unknown molecule — even a very small one — without any prior information about that molecule’s structure or composition. That’s not possible with current technology, but this new technique has that potential.”

The optical sensor uses Raman spectroscopy, a technique pioneered in the 1930s that blossomed after the advent of lasers in the 1960s. When light strikes a molecule, most of its photons bounce off or pass directly through, but a tiny fraction — fewer than one in a trillion — are absorbed and re-emitted into another energy level that differs from their initial level. By measuring and analyzing these re-emitted photons through Raman spectroscopy, scientists can decipher the types of atoms in a molecule as well as their structural arrangement.
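
The size of that energy difference is usually reported as a Raman shift in wavenumbers (cm^-1), calculated from the incident and scattered wavelengths. Here is a minimal Python sketch of that arithmetic; the 532 nm and 575 nm wavelengths are illustrative numbers of my own, not values from the Rice study.

# Raman shift (in cm^-1) from incident and scattered wavelengths.
# The wavelengths are illustrative only, not values from the Rice study.
def raman_shift_cm1(incident_nm, scattered_nm):
    """Difference of wavenumbers, with wavelengths converted from nm to cm."""
    return 1.0 / (incident_nm * 1e-7) - 1.0 / (scattered_nm * 1e-7)

print(round(raman_shift_cm1(532.0, 575.0)))  # about 1406 cm^-1 for this example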

Scientists have created a number of techniques to boost Raman signals. In the new study, LANP graduate student Yu Zhang used one of these, a two-coherent-laser technique called “coherent anti-Stokes Raman spectroscopy,” or CARS. By using CARS in conjunction with a light amplifier made of four tiny gold nanodiscs, Halas and Zhang were able to measure single molecules in a powerful new way. LANP has dubbed the new technique “surface-enhanced CARS,” or SECARS.

“The two-coherent-laser setup in SECARS is important because the second laser provides further amplification,” Zhang said. “In a conventional single-laser setup, photons go through two steps of absorption and re-emission, and the optical signatures are usually amplified around 100 million to 10 billion times. By adding a second laser that is coherent with the first one, the SECARS technique employs a more complex multiphoton process.”

Zhang said the additional amplification gives SECARS the potential to address most unknown samples. That’s an added advantage over current techniques for single-molecule sensing, which generally require prior knowledge of a molecule’s resonant frequency before it can be accurately measured.

Another key component of the SECARS process is the device’s optical amplifier, which contains four tiny gold discs in a precise diamond-shaped arrangement. The gap in the center of the four discs is about 15 nanometers wide. Owing to an optical effect called a “Fano resonance,” the optical signatures of molecules caught in that gap are dramatically amplified because of the efficient light harvesting and signal scattering properties of the four-disc structure.

Fano resonance requires a special geometric arrangement of the discs, and one of LANP’s specialties is the design, production and analysis of Fano-resonant plasmonic structures like the four-disc “quadrumer.” In previous LANP research, other geometric disc structures were used to create powerful optical processors.

Zhang said the quadrumer amplifiers are a key to SECARS, in part because they are created with standard e-beam lithographic techniques, which means they can be easily mass-produced.

“A 15-nanometer gap may sound small, but the gap in most competing devices is on the order of 1 nanometer,” Zhang said. “Our design is much more robust because even the smallest defect in a one-nanometer device can have significant effects. Moreover, the larger gap also results in a larger target area, the area where measurements take place. The target area in our device is hundreds of times larger than the target area in a one-nanometer device, and we can measure molecules anywhere in that target area, not just in the exact center.”

Halas, the Stanley C. Moore Professor in Electrical and Computer Engineering and a professor of biomedical engineering, chemistry, physics and astronomy at Rice, said the potential applications for SECARS include chemical and biological sensing as well as metamaterials research. She said scientific labs are likely to be the first beneficiaries of the technology.

“Amplification is important for sensing small molecules because the smaller the molecule, the weaker the optical signature,” Halas said. “This amplification method is the most powerful yet demonstrated, and it could prove useful in experiments where existing techniques can’t provide reliable data.”

Here’s a link to and a citation for the paper,

Coherent anti-Stokes Raman scattering with single-molecule sensitivity using a plasmonic Fano resonance by Yu Zhang, Yu-Rong Zhen, Oara Neumann, Jared K. Day, Peter Nordlander & Naomi J. Halas. Nature Communications 5, Article number: 4424 doi:10.1038/ncomms5424 Published 14 July 2014

This paper is behind a paywall.

Squishy but rigid robots from MIT (Massachusetts Institute of Technology)

A July 14, 2014 news item on ScienceDaily features MIT (Massachusetts Institute of Technology) robots that mimic mice and other biological constructs or, if you prefer, movie robots,

In the movie “Terminator 2,” the shape-shifting T-1000 robot morphs into a liquid state to squeeze through tight spaces or to repair itself when harmed.

Now a phase-changing material built from wax and foam, and capable of switching between hard and soft states, could allow even low-cost robots to perform the same feat.

The material — developed by Anette Hosoi, a professor of mechanical engineering and applied mathematics at MIT, and her former graduate student Nadia Cheng, alongside researchers at the Max Planck Institute for Dynamics and Self-Organization and Stony Brook University — could be used to build deformable surgical robots. The robots could move through the body to reach a particular point without damaging any of the organs or vessels along the way.

A July 14, 2014 MIT news release (also on EurekAlert), which originated the news item, describes the research further by referencing both octopuses and jello,

Working with robotics company Boston Dynamics, based in Waltham, Mass., the researchers began developing the material as part of the Chemical Robots program of the Defense Advanced Research Projects Agency (DARPA). The agency was interested in “squishy” robots capable of squeezing through tight spaces and then expanding again to move around a given area, Hosoi says — much as octopuses do.

But if a robot is going to perform meaningful tasks, it needs to be able to exert a reasonable amount of force on its surroundings, she says. “You can’t just create a bowl of Jell-O, because if the Jell-O has to manipulate an object, it would simply deform without applying significant pressure to the thing it was trying to move.”

What’s more, controlling a very soft structure is extremely difficult: It is much harder to predict how the material will move, and what shapes it will form, than it is with a rigid robot.

So the researchers decided that the only way to build a deformable robot would be to develop a material that can switch between a soft and hard state, Hosoi says. “If you’re trying to squeeze under a door, for example, you should opt for a soft state, but if you want to pick up a hammer or open a window, you need at least part of the machine to be rigid,” she says.

Compressible and self-healing

To build a material capable of shifting between squishy and rigid states, the researchers coated a foam structure in wax. They chose foam because it can be squeezed into a small fraction of its normal size, but once released will bounce back to its original shape.

The wax coating, meanwhile, can change from a hard outer shell to a soft, pliable surface with moderate heating. This could be done by running a wire along each of the coated foam struts and then applying a current to heat up and melt the surrounding wax. Turning off the current again would allow the material to cool down and return to its rigid state.
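
To get a rough sense of what that "moderate heating" might involve, here is a back-of-envelope calculation in Python of the energy and time needed to melt a small amount of wax with resistive heating. Every number in it (wax mass, material properties, heating power) is an assumption of mine for illustration, not a figure from the MIT work.

# Back-of-envelope estimate for melting a wax coating by running current through a wire.
# All values below are assumptions for illustration, not figures from the MIT study.
mass_kg = 1e-3          # assume about 1 gram of paraffin wax on one strut
c_wax = 2100.0          # specific heat of paraffin, J/(kg*K), approximate
latent_heat = 2.0e5     # latent heat of fusion of paraffin, J/kg, approximate
delta_T = 35.0          # warm from ~25 C room temperature to a ~60 C melting point

energy_J = mass_kg * (c_wax * delta_T + latent_heat)   # heat up, then melt
heating_power_W = 2.0                                   # assumed Joule heating power
print(f"{energy_J:.0f} J, about {energy_J / heating_power_W:.0f} s at {heating_power_W} W")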

In addition to switching the material to its soft state, heating the wax in this way would also repair any damage sustained, Hosoi says. “This material is self-healing,” she says. “So if you push it too far and fracture the coating, you can heat it and then cool it, and the structure returns to its original configuration.”

To build the material, the researchers simply placed the polyurethane foam in a bath of melted wax. They then squeezed the foam to encourage it to soak up the wax, Cheng says. “A lot of materials innovation can be very expensive, but in this case you could just buy really low-cost polyurethane foam and some wax from a craft store,” she says.

In order to study the properties of the material in more detail, they then used a 3-D printer to build a second version of the foam lattice structure, to allow them to carefully control the position of each of the struts and pores.

When they tested the two materials, they found that the printed lattice was more amenable to analysis than the polyurethane foam, although the latter would still be fine for low-cost applications, Hosoi says.

The wax coating could also be replaced by a stronger material, such as solder, she adds.

Hosoi is now investigating the use of other unconventional materials for robotics, such as magnetorheological and electrorheological fluids. These materials consist of a liquid with particles suspended inside, and can be made to switch from a soft to a rigid state with the application of a magnetic or electric field.

When it comes to artificial muscles for soft and biologically inspired robots, we tend to think of controlling shape through bending or contraction, says Carmel Majidi, an assistant professor of mechanical engineering in the Robotics Institute at Carnegie Mellon University, who was not involved in the research. “But for a lot of robotics tasks, reversibly tuning the mechanical rigidity of a joint can be just as important,” he says. “This work is a great demonstration of how thermally controlled rigidity-tuning could potentially be used in soft robotics.”

Here’s a link to and a citation for the paper,

Thermally Tunable, Self-Healing Composites for Soft Robotic Applications by Nadia G. Cheng, Arvind Gopinath, Lifeng Wang, Karl Iagnemma, and Anette E. Hosoi. Macromolecular Materials and Engineering DOI: 10.1002/mame.201400017 Article first published online: 30 JUN 2014

© 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim

This paper is behind a paywall.

Deadline extension (travel grants and poster abstracts) for alternate testing strategies (ATS) of nanomaterials workshop

It seems there have been a couple of deadline extensions (to August 1, 2014) for the September 15-16, 2014 ‘Workshop to Explore How a Multiple Models Approach can Advance Risk Analysis of Nanoscale Materials’ in Washington, DC (first mentioned in my July 10, 2014 posting featuring a description of the workshop). You can go here to submit a poster abstract (from any country) and you can go here if you’re a student or young professional (from any country) in search of a $500 travel award.

I managed to speak to one of the organizers, Lorraine Sheremeta (Assistant Director, Ingenuity Lab, University of Alberta, and co-author of a July 9, 2014 Nanowerk Spotlight article about the workshop). Lorraine (Lori) kindly spoke to me about the upcoming workshop, which she described as an academic conference.

As I understand what she told me, the hosts for the September 15-16, 2014 Workshop to Explore How a Multiple Models Approach can Advance Risk Analysis of Nanoscale Materials in Washington, DC want to attract a multidisciplinary group of people to grapple with a few questions. First, they want to establish a framework for determining the best test methods for nanomaterials. Second, they are trying to move away from animal testing and want to identify which alternative methods are equal to or better than animal testing. Third, they want to discuss what they are going to do with the toxicological data that we have been collecting on nanomaterials for years now.

Or, as she and her colleague from the Society for Risk Analysis (Jo Anne Shatkin) have put it in their Nanowerk Spotlight article:

… develop a report on the State of the Science for ATS for nanomaterials, catalogue of existing and emerging ATS [alternate testing strategies] methods in a database; and develop a case study to inform workshop deliberations and expert recommendations

The collaborative team behind this event includes the University of Alberta’s Ingenuity Lab, the Society for Risk Analysis, Environment Canada, Health Canada, and the Organization for Economic Co-operation and Development (OECD) Working Party on Manufactured Nanomaterials (WPMN).

The speaker lineup isn’t settled at this time although they have confirmed Vicki Stone of Heriot-Watt University in Scotland (from her university bio page),

Vicki Stone, Professor of Toxicology, studies the effects of nanomaterials on humans and environmentally relevant species. Current research projects investigate the mechanism of toxicity of a range of nanomaterials in cells of the immune system (macrophages and neutrophils), liver (hepatocytes), gastrointestinal tract, blood vessels (endothelium) and lung. She is interested in interactions between nanomaterials, proteins and lipids, and how this influences subsequent toxicity. Current projects also develop in vitro alternatives using microfluidics as well as high resolution imaging of individual nanomaterials in 3D and over time. In addition Vicki collaborates with ecotoxicologists to investigate the impacts of nanomaterials on aquatic organisms. Vicki coordinated a European project to identify the research priorities to develop an intelligent testing strategy for nanomaterials (www.its-nano.eu).

Vicki is Director of the Nano Safety Research Group at Heriot-Watt University, Edinburgh, and Director of Toxicology for SAFENANO (www.safenano.org). She acted as Editor-in-chief of the journal Nanotoxicology (http://informahealthcare.com/nan) for 6 years (2006-2011). Vicki has also published over 130 publications pertaining to particle toxicology over the last 16 years and has provided evidence for the government-commissioned reports published by the Royal Society (2003) and the Royal Commission on Environmental Pollution (2008). Vicki was previously a member of the UK Government Committee on the Medical Effects of Air Pollutants (COMEAP) and an advisory board member for the Center for the Environmental Implications of NanoTechnology (CEINT; funded by the US Environmental Protection Agency).

A representative from PETA (People for the Ethical Treatment of Animals) will also be speaking. I believe that will be Amy Clippinger (from the PETA website’s Regulatory Testing webpage; scroll down about 70% of the way),

Science adviser Amy Clippinger has a Ph.D. in cellular and molecular biology and genetics and several years of research experience at the University of Pennsylvania.

PETA representatives have been to at least one other conference on the topic of nano, toxicology, and animal testing as per my April 24, 2014 posting about NANOTOX 2014 in Turkey,

Writing about nanotechnology can lead you in many different directions such as the news about PETA (People for the Ethical Treatment of Animals) and its poster presentation at the NanoTox 2014 conference being held in Antalya, Turkey from April 23 – 26, 2014. From the April 22, 2014 PETA news release on EurekAlert,

PETA International Science Consortium Ltd.’s nanotechnology expert will present a poster titled “A tiered-testing strategy for nanomaterial hazard assessment” at the 7th International Nanotoxicology Congress [NanoTox 2014] to be held April 23-26, 2014, in Antalya, Turkey.

Dr. Monita Sharma will outline a strategy consistent with the 2007 report from the US National Academy of Sciences, “Toxicity Testing in the 21st Century: A Vision and a Strategy,” which recommends use of non-animal methods involving human cells and cell lines for mechanistic pathway–based toxicity studies.

There is a lot of interest internationally in improving how we test for toxicity of nanomaterials. As well, the drive to eliminate or minimize as much as possible the use of animals in testing seems to be gaining momentum.

Good luck to everyone submitting a poster abstract and/or an application for a travel grant!

In case you don’t want to scroll up, the SRA nano workshop website is here.

Boron as a ‘buckyball’ or borospherene

First there was the borophene (like graphene but using boron rather than carbon) announcement from Brown University in my Jan. 28, 2014 posting, and now US (Brown University again) and Chinese researchers have developed a boron ‘buckyball’. Coincidentally, this announcement comes just after the 2014 World Cup final (July 13, 2014). Representations of buckyballs always resemble soccer balls. (Note: Germany won.)

From a July 14, 2014 news item on Azonano,

The discovery 30 years ago of soccer-ball-shaped carbon molecules called buckyballs helped to spur an explosion of nanotechnology research. Now, there appears to be a new ball on the pitch.

Researchers from Brown University, Shanxi University and Tsinghua University in China have shown that a cluster of 40 boron atoms forms a hollow molecular cage similar to a carbon buckyball. It’s the first experimental evidence that a boron cage structure—previously only a matter of speculation—does indeed exist.

“This is the first time that a boron cage has been observed experimentally,” said Lai-Sheng Wang, a professor of chemistry at Brown who led the team that made the discovery. “As a chemist, finding new molecules and structures is always exciting. The fact that boron has the capacity to form this kind of structure is very interesting.”

The researchers have provided an illustration of their borospherene,

The carbon buckyball has a boron cousin. A cluster of 40 boron atoms forms a hollow cage-like molecule. Courtesy Brown University

A July 9, 2014 Brown University news release (also on EurekAlert), which originated the news item, describes the borospherene’s predecessor, the carbon buckyball, and provides more details about this new molecule,

Carbon buckyballs are made of 60 carbon atoms arranged in pentagons and hexagons to form a sphere — like a soccer ball. Their discovery in 1985 was soon followed by discoveries of other hollow carbon structures including carbon nanotubes. Another famous carbon nanomaterial — a one-atom-thick sheet called graphene — followed shortly after.

After buckyballs, scientists wondered if other elements might form these odd hollow structures. One candidate was boron, carbon’s neighbor on the periodic table. But because boron has one less electron than carbon, it can’t form the same 60-atom structure found in the buckyball. The missing electrons would cause the cluster to collapse on itself. If a boron cage existed, it would have to have a different number of atoms.

Wang and his research group have been studying boron chemistry for years. In a paper published earlier this year, Wang and his colleagues showed that clusters of 36 boron atoms form one-atom-thick disks, which might be stitched together to form an analog to graphene, dubbed borophene. Wang’s preliminary work suggested that there was also something special about boron clusters with 40 atoms. They seemed to be abnormally stable compared to other boron clusters.

Figuring out what that 40-atom cluster actually looks like required a combination of experimental work and modeling using high-powered supercomputers.

On the computer, Wang’s colleagues modeled over 10,000 possible arrangements of 40 boron atoms bonded to each other. The computer simulations estimate not only the shapes of the structures but also the electron binding energy for each structure — a measure of how tightly a molecule holds its electrons. The spectrum of binding energies serves as a unique fingerprint of each potential structure.

The next step is to test the actual binding energies of boron clusters in the lab to see if they match any of the theoretical structures generated by the computer. To do that, Wang and his colleagues used a technique called photoelectron spectroscopy.

Chunks of bulk boron are zapped with a laser to create vapor of boron atoms. A jet of helium then freezes the vapor into tiny clusters of atoms. The clusters of 40 atoms were isolated by weight then zapped with a second laser, which knocks an electron out of the cluster. The ejected electron flies down a long tube Wang calls his “electron racetrack.” The speed at which the electrons fly down the racetrack is used to determine the cluster’s electron binding energy spectrum — its structural fingerprint.
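
The conversion from speed to binding energy is standard time-of-flight photoelectron spectroscopy: the electron's kinetic energy is worked out from how fast it travels down the tube, and the binding energy is the photon energy minus that kinetic energy. A small Python sketch with made-up numbers (the news release does not give the laser energy, tube length or flight times):

# Binding energy from time-of-flight photoelectron spectroscopy.
# Photon energy, tube length and flight time below are made-up illustrative values.
M_E = 9.109e-31   # electron mass, kg
EV = 1.602e-19    # joules per electron volt

def binding_energy_eV(photon_energy_eV, tube_length_m, flight_time_s):
    """Binding energy = photon energy minus the kinetic energy inferred from flight time."""
    speed = tube_length_m / flight_time_s
    kinetic_eV = 0.5 * M_E * speed ** 2 / EV
    return photon_energy_eV - kinetic_eV

# Example: 6.42 eV photons (a 193 nm laser), a 1.0 m flight tube, 1.2 microsecond flight.
print(f"{binding_energy_eV(6.42, 1.0, 1.2e-6):.2f} eV")  # roughly 4.4 eV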

The experiments showed that 40-atom-clusters form two structures with distinct binding spectra. Those spectra turned out to be a dead-on match with the spectra for two structures generated by the computer models. One was a semi-flat molecule and the other was the buckyball-like spherical cage.

“The experimental sighting of a binding spectrum that matched our models was of paramount importance,” Wang said. “The experiment gives us these very specific signatures, and those signatures fit our models.”
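
Conceptually, the matching step is a fingerprint comparison: score each candidate structure's simulated spectrum against the measured one and keep the best matches. The toy Python sketch below illustrates the idea only; it is not the analysis actually used by the Brown, Shanxi and Tsinghua teams, and the peak positions are invented.

# Toy fingerprint matching: which simulated spectra best match the measured peaks?
# The peak positions are invented; this is not the research teams' actual analysis.
def rms_mismatch(measured, simulated):
    """Root-mean-square difference between two equal-length lists of peak energies (eV)."""
    return (sum((m - s) ** 2 for m, s in zip(measured, simulated)) / len(measured)) ** 0.5

measured_peaks = [2.6, 3.4, 4.1]
candidate_spectra = {
    "quasi-planar cluster": [2.5, 3.5, 4.0],
    "hollow cage (borospherene)": [2.6, 3.3, 4.2],
    "ribbon": [1.9, 2.8, 3.6],
}
ranked = sorted(candidate_spectra,
                key=lambda name: rms_mismatch(measured_peaks, candidate_spectra[name]))
print(ranked[:2])  # the two best-matching candidate structures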

The borospherene molecule isn’t quite as spherical as its carbon cousin. Rather than a series of five- and six-membered rings formed by carbon, borospherene consists of 48 triangles, four seven-sided rings and two six-membered rings. Several atoms stick out a bit from the others, making the surface of borospherene somewhat less smooth than a buckyball.

As for possible uses for borospherene, it’s a little too early to tell, Wang says. One possibility, he points out, could be hydrogen storage. Because of the electron deficiency of boron, borospherene would likely bond well with hydrogen. So tiny boron cages could serve as safe houses for hydrogen molecules.

But for now, Wang is enjoying the discovery.

“For us, just to be the first to have observed this, that’s a pretty big deal,” Wang said. “Of course if it turns out to be useful that would be great, but we don’t know yet. Hopefully this initial finding will stimulate further interest in boron clusters and new ideas to synthesize them in bulk quantities.”

The theoretical modeling was done with a group led by Prof. Si-Dian Li from Shanxi University and a group led by Prof. Jun Li from Tsinghua University. The work was supported by the U.S. National Science Foundation (CHE-1263745) and the National Natural Science Foundation of China.

Here’s a link to and a citation for the paper,

Observation of an all-boron fullerene by Hua-Jin Zhai, Ya-Fan Zhao, Wei-Li Li, Qiang Chen, Hui Bai, Han-Shi Hu, Zachary A. Piazza, Wen-Juan Tian, Hai-Gang Lu, Yan-Bo Wu, Yue-Wen Mu, Guang-Feng Wei, Zhi-Pan Liu, Jun Li, Si-Dian Li, & Lai-Sheng Wang. Nature Chemistry (2014) doi:10.1038/nchem.1999 Published online 13 July 2014

This paper is behind a paywall.

Better RRAM memory devices in the short term

Given my recent spate of posts about computing and the future of the chip (list to follow at the end of this post), this Rice University [Texas, US] research suggests that some improvements to current memory devices might be coming to the market in the near future. From a July 12, 2014 news item on Azonano,

Rice University’s breakthrough silicon oxide technology for high-density, next-generation computer memory is one step closer to mass production, thanks to a refinement that will allow manufacturers to fabricate devices at room temperature with conventional production methods.

A July 10, 2014 Rice University news release, which originated the news item, provides more detail,

Rice chemist James Tour and colleagues began work on their breakthrough RRAM technology more than five years ago. The basic concept behind resistive memory devices is the insertion of a dielectric material — one that won’t normally conduct electricity — between two wires. When a sufficiently high voltage is applied across the wires, a narrow conduction path can be formed through the dielectric material.

The presence or absence of these conduction pathways can be used to represent the binary 1s and 0s of digital data. Research with a number of dielectric materials over the past decade has shown that such conduction pathways can be formed, broken and reformed thousands of times, which means RRAM can be used as the basis of rewritable random-access memory.
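
In other words, each cell behaves like a programmable resistor: a high enough voltage forms the conduction path (low resistance, read as a 1) and a reset operation breaks it (high resistance, read as a 0). A toy model in Python, with arbitrary threshold and resistance values that are not taken from the Rice device:

# Toy model of a resistive memory (RRAM) cell. The voltage threshold and resistance
# values are arbitrary illustrations, not figures from the Rice device.
class RRAMCell:
    SET_VOLTAGE = 2.0   # volts assumed to form a conduction path
    LOW_RES = 1e3       # ohms with a conduction path present (logical 1)
    HIGH_RES = 1e8      # ohms with the path broken (logical 0)

    def __init__(self):
        self.resistance = self.HIGH_RES   # start with no conduction path

    def set(self, voltage):
        """Form a conduction path if the applied voltage is high enough."""
        if voltage >= self.SET_VOLTAGE:
            self.resistance = self.LOW_RES

    def reset(self):
        """Break the conduction path, returning the cell to high resistance."""
        self.resistance = self.HIGH_RES

    def read(self):
        """Return the stored bit: 1 for low resistance, 0 for high."""
        return 1 if self.resistance == self.LOW_RES else 0

cell = RRAMCell()
cell.set(2.5)
print(cell.read())   # 1
cell.reset()
print(cell.read())   # 0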

RRAM is under development worldwide and expected to supplant flash memory technology in the marketplace within a few years because it is faster than flash and can pack far more information into less space. For example, manufacturers have announced plans for RRAM prototype chips that will be capable of storing about one terabyte of data on a device the size of a postage stamp — more than 50 times the data density of current flash memory technology.

The key ingredient of Rice’s RRAM is its dielectric component, silicon oxide. Silicon is the most abundant element on Earth and the basic ingredient in conventional microchips. Microelectronics fabrication technologies based on silicon are widespread and easily understood, but until the 2010 discovery of conductive filament pathways in silicon oxide in Tour’s lab, the material wasn’t considered an option for RRAM.

Since then, Tour’s team has raced to further develop its RRAM and even used it for exotic new devices like transparent flexible memory chips. At the same time, the researchers also conducted countless tests to compare the performance of silicon oxide memories with competing dielectric RRAM technologies.

“Our technology is the only one that satisfies every market requirement, both from a production and a performance standpoint, for nonvolatile memory,” Tour said. “It can be manufactured at room temperature, has an extremely low forming voltage, high on-off ratio, low power consumption, nine-bit capacity per cell, exceptional switching speeds and excellent cycling endurance.”

In the latest study, a team headed by lead author and Rice postdoctoral researcher Gunuk Wang showed that using a porous version of silicon oxide could dramatically improve Rice’s RRAM in several ways. First, the porous material reduced the forming voltage — the power needed to form conduction pathways — to less than two volts, a 13-fold improvement over the team’s previous best and a number that stacks up against competing RRAM technologies. In addition, the porous silicon oxide also allowed Tour’s team to eliminate the need for a “device edge structure.”

“That means we can take a sheet of porous silicon oxide and just drop down electrodes without having to fabricate edges,” Tour said. “When we made our initial announcement about silicon oxide in 2010, one of the first questions I got from industry was whether we could do this without fabricating edges. At the time we could not, but the change to porous silicon oxide finally allows us to do that.”

Wang said, “We also demonstrated that the porous silicon oxide material increased the endurance cycles more than 100 times as compared with previous nonporous silicon oxide memories. Finally, the porous silicon oxide material has a capacity of up to nine bits per cell, which is the highest number among oxide-based memories, and this multi-bit capacity is unaffected by high temperatures.”

Tour said the latest developments with porous silicon oxide — reduced forming voltage, elimination of need for edge fabrication, excellent endurance cycling and multi-bit capacity — are extremely appealing to memory companies.

“This is a major accomplishment, and we’ve already been approached by companies interested in licensing this new technology,” he said.

Here’s a link to and a citation for the paper,

Nanoporous Silicon Oxide Memory by Gunuk Wang, Yang Yang, Jae-Hwang Lee, Vera Abramova, Huilong Fei, Gedeng Ruan, Edwin L. Thomas, and James M. Tour. Nano Lett., Article ASAP DOI: 10.1021/nl501803s Publication Date (Web): July 3, 2014

Copyright © 2014 American Chemical Society

This paper is behind a paywall.

As for my recent spate of posts on computers and chips, there’s a July 11, 2014 posting about IBM, a 7nm chip, and much more; a July 9, 2014 posting about Intel and its 14nm low-power chip processing and plans for a 10nm chip; and, finally, a June 26, 2014 posting about HP Labs and its plans for memristive-based computing and their project dubbed ‘The Machine’.

Writing and AI or is a robot writing this blog?

In an interview almost 10 years ago for an article I was writing for a digital publishing magazine, I had a conversation with a very technically oriented individual that went roughly this way,

Him: (enthused and excited) We’re developing algorithms that will let us automatically create brochures, written reports, that will always have the right data and can be instantly updated.

Me: (pause)

Him: (no reaction)

Me: (breaking long pause) You realize you’re talking to a writer, eh? You’ve just told me that at some point in the future nobody will need writers.

Him: (pause) No. (then with more certainty) No. You don’t understand. We’re making things better for you. In the future, you won’t need to do the boring stuff.

It seems the future is now and in the hands of a company known as Automated Insights. You can find this at the base of one of the company’s news releases,

ABOUT AUTOMATED INSIGHTS, INC.

Automated Insights (Ai) transforms Big Data into written reports with the depth of analysis, personality and variability of a human writer. In 2014, Ai and its patented Wordsmith platform will produce over 1 billion personalized reports for clients like Yahoo!, The Associated Press, the NFL, and Edmunds.com. [emphasis mine] The Wordsmith platform uses artificial intelligence to dynamically spot patterns and trends in raw data and then describe those findings in plain English. Wordsmith authors insightful, personalized reports around individual user data at unprecedented scale and in real-time. Automated Insights also offers applications that run on its Wordsmith platform, including the recently launched Wordsmith for Marketing, which enables marketing agencies to automate reporting for clients. Learn more at http://automatedinsights.com.

In the wake of the June 30, 2014 deal with Associated Press, there has been a flurry of media interest especially from writers who seem to have largely concluded that the robots will do the boring stuff and free human writers to do creative, innovative work. A July 2, 2014 news item on FoxNews.com provides more details about the deal,

The Associated Press, the largest American-based news agency in the world, will now use story-writing software to produce U.S. corporate earnings stories.

In a recent blog post, AP Managing Editor Lou Ferrara explained that the software is capable of producing these stories, which are largely technical financial reports that range from 150 to 300 words, in “roughly the same time that it takes our reporters.” [emphasis mine]

AP staff members will initially edit the software-produced reports, but the agency hopes the process will soon be fully automated.

The Wordsmith software constructs narratives in plain English by using algorithms to analyze trends and patterns in a set of data and place them in an appropriate context depending on the nature of the story.

Representatives for the Associated Press have assured anyone who fears robots are making journalists obsolete that Wordsmith will not be taking the jobs of staffers. “We are going to use our brains and time in more enterprising ways during earnings season,” Ferrara wrote in the blog post. “This is about using technology to free journalists to do more journalism and less data processing, not about eliminating jobs.” [emphasis mine]

Russell Brandom’s July 11, 2014 article for The Verge provides more technical detail and context for this emerging field,

Last week, the Associated Press announced it would be automating its articles on quarterly earnings reports. Instead of 300 articles written by humans, the company’s new software will write 4,400 of them, each formatted for AP style, in mere seconds. It’s not the first time a company has tried out automatic writing: last year, a reporter at The LA Times wrote an automated earthquake-reporting program that combined prewritten sentences with automatic seismograph reports to report quakes just seconds after they happen. The natural language-generation company Narrative Science has been churning out automated sports reporting for years.

It appears that AP Managing Editor Lou Ferrara doesn’t know how long it takes to write 150 to 300 words (“roughly the same time that it takes our reporters”) or perhaps he or she wanted to ‘soften’ the news’s possible impact. Getting back to the technical aspects in Brandom’s article,

… So how do you make a robot that writes sentences?

In the case of AP style, a lot of the work has already been done. Every Associated Press article already comes with a clear, direct opening and a structure that spirals out from there. All the algorithm needs to do is code in the same reasoning a reporter might employ. Algorithms detect the most volatile or newsworthy shift in a given earnings report and slot that in as the lede. Circling outward, the program might sense that a certain topic has already been covered recently and decide it’s better to talk about something else. …

The staffers who keep the copy fresh are scribes and coders in equal measure. (Allen [Automated Insights CEO Robbie Allen] says he looks for “stats majors who worked on the school paper.”) They’re not writers in the traditional sense — most of the language work is done beforehand, long before the data is available — but each job requires close attention. For sports articles, the Automated Insights team does all its work during the off-season and then watches the articles write themselves from the sidelines, as soon as each game’s results are available. “I’m often quite surprised by the result,” says Joe Procopio, the company’s head of product engineering. “There might be four or five variables that determine what that lead sentence looks like.” …
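
To make the "detect the biggest shift and slot it in as the lede" idea concrete, here is a toy, template-driven earnings-story generator in Python. It is emphatically not Automated Insights' Wordsmith; the company name, figures and templates are mine, purely for illustration.

# Toy template-based earnings report: lead with the most newsworthy (largest) change.
# Illustrative only; this is not Automated Insights' Wordsmith software.
def pct_change(current, prior):
    return 100.0 * (current - prior) / prior

def write_report(company, quarter, metrics):
    """metrics maps a metric name to a (current, prior) pair of dollar values."""
    changes = {name: pct_change(cur, prior) for name, (cur, prior) in metrics.items()}
    lede_metric = max(changes, key=lambda name: abs(changes[name]))   # most volatile shift
    direction = "rose" if changes[lede_metric] > 0 else "fell"
    sentences = [f"{company} said its {lede_metric} {direction} "
                 f"{abs(changes[lede_metric]):.0f} percent in {quarter}."]
    for name, (cur, prior) in metrics.items():
        if name != lede_metric:
            sentences.append(f"{name.capitalize()} came in at ${cur:,.0f}, "
                             f"compared with ${prior:,.0f} a year earlier.")
    return " ".join(sentences)

print(write_report("Acme Corp.", "the second quarter",
                   {"net income": (1200000, 800000), "revenue": (9500000, 9100000)}))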

A July 11, 2014 article by Catherine Taibi for Huffington Post offers a summary of the current ‘robot/writer’ situation (Automated Insights is not the only company offering this service) along with many links including one to this July 11, 2014 article by Kevin Roose for New York Magazine where he shares what appears to be a widely held opinion and which echoes my interviewee of 10 years ago (Note: A link has been removed),

By this point, we’re no longer surprised when machines replace human workers in auto factories or electronics-manufacturing plants. That’s the norm. But we hoity-toity journalists had long assumed that our jobs were safe from automation. (We’re knowledge workers, after all.) So when the AP announced its new automated workforce, you could hear the panic spread to old-line news desks across the nation. Unplug the printers, Bob! The robots are coming!

I’m not an alarmist, though. In fact, I welcome our new robot colleagues. Not only am I not scared of losing my job to a piece of software, I think the introduction of automated reporting is the best thing to happen to journalists in a long time.

For one thing, humans still have the talent edge. At the moment, the software created by Automated Insights is only capable of generating certain types of news stories — namely, short stories that use structured data as an input, and whose output follows a regular pattern. …

Robot-generated stories aren’t all fill-in-the-blank jobs; the more advanced algorithms use things like perspective, tone, and humor to tailor a story to its audience. …

But these robots, as sophisticated as they are, can’t approach the full creativity of a human writer. They can’t contextualize Emmy snubs like Matt Zoller Seitz, assail opponents of Obamacare like Jonathan Chait, or collect summer-camp sex stories like Maureen O’Connor. My colleagues’ jobs (and mine, knock wood) are too complex for today’s artificial intelligence to handle; they require human skills like picking up the phone, piecing together data points from multiple sources, and drawing original, evidence-based conclusions. [emphasis mine]

The stories that today’s robots can write are, frankly, the kinds of stories that humans hate writing anyway. … [emphasis mine]

Despite his blithe assurances, there is a little anxiety expressed in this piece: “My colleagues’ jobs (and mine, knock wood) are too complex for today’s artificial intelligence … .”

I too am feeling a little uncertain. For example, there’s this April 29, 2014 posting by Adam Long on the Automated Insights blog and I can’t help wondering how much was actually written by Long and how much by the company’s robots. After all, the company proudly proclaims the blog is powered by Wordsmith Marketing. For that matter, I’m not that sure about the FoxNews.com piece, which has no byline.

For anyone interested in still more links and information, Automated Insights offers a listing of their press coverage here. Although it’s a bit dated now, there is an exhaustive May 22, 2013 posting by Tony Hirst on the OUseful.info blog which, despite the title: ‘Notes on Narrative Science and Automated Insights’, provides additional context for the work being done to automate the writing process since 2009.

For the record, this blog is not written by a robot. As for getting rid of the boring stuff, I can’t help but remember that part of how one learns any craft is by doing the boring, repetitive work needed to build skills.

One final and unrelated note, Automated Insights has done a nice piece of marketing with its name which abbreviates to Ai. One can’t help but be reminded of AI, a term connoting the field of artificial intelligence.

US military wants you to remember

While this July 10, 2014 news item on ScienceDaily concerns DARPA, an implantable neural device, and the Lawrence Livermore National Laboratory (LLNL), it is a new project and not the one featured here in a June 18, 2014 posting titled: ‘DARPA (US Defense Advanced Research Projects Agency) awards funds for implantable neural interface’.

The new project as per the July 10, 2014 news item on ScienceDaily concerns memory,

The Department of Defense’s Defense Advanced Research Projects Agency (DARPA) awarded Lawrence Livermore National Laboratory (LLNL) up to $2.5 million to develop an implantable neural device with the ability to record and stimulate neurons within the brain to help restore memory, DARPA officials announced this week.

The research builds on the understanding that memory is a process in which neurons in certain regions of the brain encode information, store it and retrieve it. Certain types of illnesses and injuries, including Traumatic Brain Injury (TBI), Alzheimer’s disease and epilepsy, disrupt this process and cause memory loss. TBI, in particular, has affected 270,000 military service members since 2000.

A July 2, 2014 LLNL news release, which originated the news item, provides more detail,

The goal of LLNL’s work — driven by LLNL’s Neural Technology group and undertaken in collaboration with the University of California, Los Angeles (UCLA) and Medtronic — is to develop a device that uses real-time recording and closed-loop stimulation of neural tissues to bridge gaps in the injured brain and restore individuals’ ability to form new memories and access previously formed ones.

Specifically, the Neural Technology group will seek to develop a neuromodulation system — a sophisticated electronics system to modulate neurons — that will investigate areas of the brain associated with memory to understand how new memories are formed. The device will be developed at LLNL’s Center for Bioengineering.

“Currently, there is no effective treatment for memory loss resulting from conditions like TBI,” said LLNL’s project leader Satinderpall Pannu, director of LLNL’s Center for Bioengineering, a unique facility dedicated to fabricating biocompatible neural interfaces. …

LLNL will develop a miniature, wireless and chronically implantable neural device that will incorporate both single neuron and local field potential recordings into a closed-loop system to implant into TBI patients’ brains. The device — implanted into the entorhinal cortex and hippocampus — will allow for stimulation and recording from 64 channels located on a pair of high-density electrode arrays. The entorhinal cortex and hippocampus are regions of the brain associated with memory.

The arrays will connect to an implantable electronics package capable of wireless data and power telemetry. An external electronic system worn around the ear will store digital information associated with memory storage and retrieval and provide power telemetry to the implantable package using a custom RF-coil system.
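
The release doesn't give sampling rates or bit depths, but a quick back-of-envelope calculation shows why the wireless data telemetry is a serious engineering problem in its own right. The numbers below are typical assumptions for single-neuron recording, not LLNL specifications.

# Rough estimate of the raw data rate for a 64-channel neural recording implant.
# Sampling rate and bit depth are assumed values, not LLNL specifications.
channels = 64
sample_rate_hz = 30000   # assume ~30 kHz per channel for single-neuron recording
bits_per_sample = 16     # assume 16-bit samples
raw_bits_per_second = channels * sample_rate_hz * bits_per_sample
print(f"{raw_bits_per_second / 1e6:.1f} Mbit/s of raw data")   # about 30.7 Mbit/s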

Designed to last throughout the duration of treatment, the device’s electrodes will be integrated with electronics using advanced LLNL integration and 3D packaging technologies. The microelectrodes that are the heart of this device are embedded in a biocompatible, flexible polymer.

Using the Center for Bioengineering’s capabilities, Pannu and his team of engineers have achieved 25 patents and many publications during the last decade. The team’s goal is to build the new prototype device for clinical testing by 2017.

Lawrence Livermore’s collaborators, UCLA and Medtronic, will focus on conducting clinical trials and fabricating parts and components, respectively.

“The RAM [Restoring Active Memory] program poses a formidable challenge reaching across multiple disciplines from basic brain research to medicine, computing and engineering,” said Itzhak Fried, lead investigator for UCLA on this project and professor of neurosurgery and psychiatry and biobehavioral sciences at the David Geffen School of Medicine at UCLA and the Semel Institute for Neuroscience and Human Behavior. “But at the end of the day, it is the suffering individual, whether an injured member of the armed forces or a patient with Alzheimer’s disease, who is at the center of our thoughts and efforts.”

LLNL’s work on the Restoring Active Memory program supports [US] President [Barack] Obama’s Brain Research through Advancing Innovative Neurotechnologies (BRAIN) initiative.

Obama’s BRAIN is picking up speed.

High frequency sound waves enable precision micro- and nanomanufacturing

I have finally moved this item to the top of my playlist: researchers from RMIT University (formerly the Royal Melbourne Institute of Technology) in Australia have developed a technique employing sound waves for greater precision in manufacturing chips at the micro- and nanoscales. From a June 24, 2014 news item on ScienceDaily,

In a breakthrough discovery, researchers at RMIT University in Melbourne, Australia, have harnessed the power of sound waves to enable precision micro- and nano-manufacturing.

The researchers have demonstrated how high-frequency sound waves can be used to precisely control the spread of thin film fluid along a specially-designed chip, in a paper published today in Proceedings of the Royal Society A.

With thin film technology the bedrock of microchip and microstructure manufacturing, the pioneering research offers a significant advance — potential applications range from thin film coatings for paint and wound care to 3D printing, micro-casting and micro-fluidics.

A June 30, 2014 RMIT university news release, which originated the news item (despite the date discrepancy), offers more details (Note: Links have been removed),

Professor James Friend, Director of the MicroNano Research Facility at RMIT, said the researchers had developed a portable system for precise, fast and unconventional micro- and nano-fabrication.

“By tuning the sound waves, we can create any pattern we want on the surface of a microchip,” Professor Friend said.

“Manufacturing using thin film technology currently lacks precision – structures are physically spun around to disperse the liquid and coat components with thin film.

“We’ve found that thin film liquid either flows towards or away from high-frequency sound waves, depending on its thickness.

“We not only discovered this phenomenon but have also unravelled the complex physics behind the process, enabling us to precisely control and direct the application of thin film liquid at a micro and nano-scale.”

Professor Friend led the research team behind the breakthrough, which included Dr Amgad Rezk, from the School of Civil, Environmental and Chemical Engineering, Professor Leslie Yeo, co-Director of the Micro Nanophysics Research Laboratory, and Ofer Manor, from the Israel Institute of Technology.

The research was part of Dr Rezk’s recently completed PhD, in the School of Electrical and Computer Engineering.

The new process, which the researchers have called “acoustowetting”, works on a chip made of lithium niobate – a piezoelectric material capable of converting electrical energy into mechanical pressure.

The surface of the chip is covered with microelectrodes and the chip is connected to a power source, with the power converted to high-frequency sound waves. Thin film liquid is added to the surface of the chip, and the sound waves are then used to control its flow.

The research shows that when the liquid is ultra-thin – at nano and sub-micro depths – it flows away from the high-frequency sound waves.

The flow reverses at slightly thicker dimensions, moving towards the sound waves. But at a millimetre or more in depth, the flow reverses again, moving away.
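
Put as pseudocode, the thickness dependence looks something like the sketch below. The numeric cut-offs are rough placeholders for the regimes described in the release ('nano and sub-micro', 'slightly thicker', 'a millimetre or more'); the paper itself gives the actual physics.

# Sketch of the thickness-dependent flow reversal reported for "acoustowetting".
# The numeric thresholds are rough placeholders, not values from the RMIT paper.
def flow_direction(film_thickness_m):
    if film_thickness_m < 1e-6:       # nano- to sub-micrometre films
        return "away from the sound waves"
    elif film_thickness_m < 1e-3:     # slightly thicker films
        return "towards the sound waves"
    else:                             # about a millimetre or more
        return "away from the sound waves"

for thickness in (100e-9, 50e-6, 2e-3):
    print(f"{thickness:g} m film flows {flow_direction(thickness)}")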

Here’s a link to and a citation for the paper,

Double flow reversal in thin liquid films driven by megahertz-order surface vibration by Amgad R. Rezk, Ofer Manor, Leslie Y. Yeo, and James R. Friend. Proc. R. Soc. A 8 September 2014 vol. 470 no. 2169 20130765 Published 25 June 2014 doi: 10.1098/rspa.2013.0765

This paper is open access.

The researchers have produced this video illustrating the action of the sound waves,