Tag Archives: IBM

IBM, the Cognitive Era, and carbon nanotube electronics

IBM has a storied position in the field of nanotechnology due to the scanning tunneling microscope developed in the company’s laboratories. It was a Nobel Prize-winning breakthrough that provided the impetus for applied nanotechnology research. Now, an Oct. 1, 2015 news item on Nanowerk trumpets another IBM breakthrough,

IBM Research today [Oct. 1, 2015] announced a major engineering breakthrough that could accelerate carbon nanotubes replacing silicon transistors to power future computing technologies.

IBM scientists demonstrated a new way to shrink transistor contacts without reducing performance of carbon nanotube devices, opening a pathway to dramatically faster, smaller and more powerful computer chips beyond the capabilities of traditional semiconductors.

While the Oct. 1, 2015 IBM news release, which originated the news item, does go on at length, it offers little technical detail (see the second-to-last paragraph in the excerpt for the little they do include) about the research breakthrough (Note: Links have been removed),

IBM’s breakthrough overcomes a major hurdle that silicon and any semiconductor transistor technologies face when scaling down. In any transistor, two things scale: the channel and its two contacts. As devices become smaller, increased contact resistance for carbon nanotubes has hindered performance gains until now. These results could overcome contact resistance challenges all the way to the 1.8 nanometer node – four technology generations away. [emphasis mine]

Carbon nanotube chips could greatly improve the capabilities of high performance computers, enabling Big Data to be analyzed faster, increasing the power and battery life of mobile devices and the Internet of Things, and allowing cloud data centers to deliver services more efficiently and economically.

Silicon transistors, tiny switches that carry information on a chip, have been made smaller year after year, but they are approaching a point of physical limitation. With Moore’s Law running out of steam, shrinking the size of the transistor – including the channels and contacts – without compromising performance has been a vexing challenge troubling researchers for decades.

IBM has previously shown that carbon nanotube transistors can operate as excellent switches at channel dimensions of less than ten nanometers – 10,000 times thinner than a strand of human hair and less than half the size of today’s leading silicon technology. IBM’s new contact approach overcomes the other major hurdle in incorporating carbon nanotubes into semiconductor devices, which could result in smaller chips with greater performance and lower power consumption.

Earlier this summer, IBM unveiled the first 7 nanometer node silicon test chip [emphasis mine], pushing the limits of silicon technologies and ensuring further innovations for IBM Systems and the IT industry. By advancing research of carbon nanotubes to replace traditional silicon devices, IBM is paving the way for a post-silicon future and delivering on its $3 billion chip R&D investment announced in July 2014.

“These chip innovations are necessary to meet the emerging demands of cloud computing, Internet of Things and Big Data systems,” said Dario Gil, vice president of Science & Technology at IBM Research. “As silicon technology nears its physical limits, new materials, devices and circuit architectures must be ready to deliver the advanced technologies that will be required by the Cognitive Computing era. This breakthrough shows that computer chips made of carbon nanotubes will be able to power systems of the future sooner than the industry expected.”

A New Contact for Carbon Nanotubes

Carbon nanotubes represent a new class of semiconductor materials that consist of single atomic sheets of carbon rolled up into a tube. The carbon nanotubes form the core of a transistor device whose superior electrical properties promise several generations of technology scaling beyond the physical limits of silicon.

Electrons in carbon transistors can move more easily than in silicon-based devices, and the ultra-thin body of carbon nanotubes provides additional advantages at the atomic scale. Inside a chip, contacts are the valves that control the flow of electrons from metal into the channels of a semiconductor. As transistors shrink in size, electrical resistance increases within the contacts, which impedes performance. Until now, decreasing the size of the contacts on a device caused a commensurate drop in performance – a challenge facing both silicon and carbon nanotube transistor technologies.

IBM researchers had to forgo traditional contact schemes and invented a metallurgical process akin to microscopic welding that chemically binds the metal atoms to the carbon atoms at the ends of nanotubes. This ‘end-bonded contact scheme’ allows the contacts to be shrunk to below 10 nanometers without deteriorating performance of the carbon nanotube devices.
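The contrast being described can be sketched with a toy model. All numbers below are my own illustrative assumptions rather than values from the paper, apart from the ~6.5 kΩ figure, which is roughly the quantum resistance limit of a ballistic nanotube: a conventional side-bonded contact behaves like a transmission line whose resistance climbs once the contact is shorter than its current-transfer length, while an end-bonded contact is set by the bond interface and does not depend on contact size.

```python
import math

def side_bonded_resistance(contact_len_nm, r_min_kohm=6.5, transfer_len_nm=40.0):
    """Toy transmission-line model of a conventional (side-bonded) contact:
    resistance climbs steeply once the contact is shorter than the
    current-transfer length. The 40 nm transfer length is a made-up
    number chosen only to show the trend."""
    return r_min_kohm / math.tanh(contact_len_nm / transfer_len_nm)

def end_bonded_resistance(contact_len_nm, bond_kohm=36.0):
    """End-bonded contact: set by the metal-carbon bond at the tube end,
    so it is independent of contact size (36 kohm is illustrative)."""
    return bond_kohm

for length_nm in (100, 40, 20, 10):
    print(length_nm, round(side_bonded_resistance(length_nm), 1),
          end_bonded_resistance(length_nm))
```

In this sketch, shrinking the side-bonded contact from 100nm to 10nm roughly quadruples its resistance, while the end-bonded value never moves – which is the “size-independent resistance” of the paper’s title.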

“For any advanced transistor technology, the increase in contact resistance due to the decrease in the size of transistors becomes a major performance bottleneck,” Gil added. “Our novel approach is to make the contact from the end of the carbon nanotube, which we show does not degrade device performance. This brings us a step closer to the goal of a carbon nanotube technology within the decade.”

Every once in a while, the size gets to me and a 1.8nm node is amazing. As for IBM’s 7nm chip, which was previewed this summer, there’s more about that in my July 15, 2015 posting.

Here’s a link to and a citation for the IBM paper,

End-bonded contacts for carbon nanotube transistors with low, size-independent resistance by Qing Cao, Shu-Jen Han, Jerry Tersoff, Aaron D. Franklin, Yu Zhu, Zhen Zhang, George S. Tulevski, Jianshi Tang, and Wilfried Haensch. Science 2 October 2015: Vol. 350 no. 6256 pp. 68-72 DOI: 10.1126/science.aac8006

This paper is behind a paywall.

Molecules (arynes) seen for first time in 113 years

Arynes were first proposed in 1902 and have since been used as building blocks to synthesize a variety of compounds, but their existence hadn’t been confirmed until now.

AFM image of an aryne molecule imaged with a CO tip. Courtesy: IBM

A July 13, 2015 news item in Nanowerk makes the announcement (Note: A link has been removed),

Chemistry teachers and students can breathe a sigh of relief. After teaching and learning about a particular family of molecules for decades, scientists have finally proven that they do in fact exist.

In a new paper published online today in Nature Chemistry (“On-surface generation and imaging of arynes by atomic force microscopy”), scientists from IBM Research and CIQUS at the University of Santiago de Compostela, Spain, have confirmed the existence and characterized the structure of arynes, a family of highly reactive, short-lived molecules first suggested 113 years ago. The technique has broad applications for on-surface chemistry and electronics, including the preparation of graphene nanoribbons and novel single-molecule devices.

A July 13, 2015 IBM news release by Chris Sciacca, which originated the news item, describes arynes and the imaging process used to capture them for the first time (Note: Links have been removed),

“Arynes are discussed in almost every undergraduate course on organic chemistry around the world. Therefore, it’s kind of a relief to find the final confirmation that these molecules truly exist,” said Prof. Diego Peña, a chemist at the University of Santiago de Compostela.

“I look forward to seeing new chemical challenges solved by the combination of organic synthesis and atomic force microscopy.”

There are trillions of molecules in the universe and some of them are stable enough to be isolated and characterized, but many others are so short-lived that they can only be proposed indirectly, via chemical reactions or spectroscopic methods.

One such species is the aryne, first suggested in 1902 and since then used as an intermediate or building block in the synthesis of a variety of compounds for applications including medicine, organic electronics and molecular materials. The challenge with these particular molecules is that they exist for only a few milliseconds, which made them extremely challenging to image – until now.

The imaging was accomplished by means of atomic force microscopy (AFM), a scanning technique that can achieve nanometer-level resolution. After the preparation of the key aryne precursor by CIQUS, IBM scientists used the sharp tip of a scanning tunneling microscope (STM) to generate individual aryne molecules from precursor molecules by atomic manipulation. The experiments were performed on films of sodium chloride, at temperatures near absolute zero, to stabilize the aryne.

Once the molecules were isolated, the team used AFM to measure the tiny forces between the STM tip, which is terminated with a single carbon monoxide molecule, and the sample to image the aryne’s molecular structure. The resulting image was so clear that the scientists could study their chemical nature based on the minute differences between individual bonds.
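For a rough sense of what measuring these tiny forces involves, here is a generic Lennard-Jones tip-sample model – an illustrative stand-in of my own, not the actual CO-tip physics – showing how the force, and the force gradient that the AFM frequency shift tracks, vary with tip height:

```python
def lj_force(z_nm, epsilon_meV=10.0, sigma_nm=0.3):
    """Tip-sample force from a generic Lennard-Jones potential
    V(z) = 4*eps*((sigma/z)**12 - (sigma/z)**6); negative = attractive.
    Parameter values are illustrative, not fitted to a real CO tip."""
    s6 = (sigma_nm / z_nm) ** 6
    # F(z) = -dV/dz = (4*eps/z) * (12*(sigma/z)**12 - 6*(sigma/z)**6)
    return (4 * epsilon_meV / z_nm) * (12 * s6 ** 2 - 6 * s6)

def force_gradient(z_nm, dz=1e-5):
    """Numerical dF/dz; the AFM frequency shift is proportional to this,
    which is what lets the tip resolve individual bonds."""
    return (lj_force(z_nm + dz) - lj_force(z_nm - dz)) / (2 * dz)

for z in (0.28, 0.34, 0.40, 0.50):  # tip heights in nm
    print(z, round(lj_force(z), 2), round(force_gradient(z), 1))
```

The crossover from repulsion very close in to attraction farther out is what the CO-terminated tip senses, bond by bond, as it scans across the molecule.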

“Our team has developed several state-of-the-art techniques since 2009, which made this achievement possible,” said Dr. Niko Pavliček, a physicist at IBM Research – Zurich and lead author of the paper. “For this study, it was absolutely essential to pick an insulating film on which the molecules were adsorbed and to deliberately choose the atomic tip-terminations to probe them. We hope this technique will have profound effects on the future of chemistry and electronics.”

Prof. Peña, added that “These findings on arynes can be compared with the long-standing search for the giant squid. For centuries, fishermen had found clues of the existence of this legendary animal. But it was only very recently that scientists managed to film a giant squid alive. In both cases, state-of-the-art technologies were crucial to observe these elusive species alive: a low-noise submarine for the giant squid; a low-temperature AFM for the aryne.”

This research is part of IBM’s five-year, $3 billion investment to push the limits of chip technology and semiconductor innovations needed to meet the emerging demands of cloud computing and Big Data systems.

This work is a result of the large European project PAMS (Planar Atomic and Molecular Scale Devices). PAMS’ main objective is to develop and investigate novel electronic devices of nanometric-scale size. Part of this research is also funded by a European Research Council Advanced Grant awarded to IBM scientist Gerhard Meyer, who is also a co-author of the paper.

Here’s a link to and a citation for the paper,

On-surface generation and imaging of arynes by atomic force microscopy by Niko Pavliček, Bruno Schuler, Sara Collazos, Nikolaj Moll, Dolores Pérez, Enrique Guitián, Gerhard Meyer, Diego Peña, & Leo Gross. Nature Chemistry (2015) doi:10.1038/nchem.2300 Published online 13 July 2015

This paper is behind a paywall.

IBM and its working 7nm test chip

I wrote about IBM and its plans for a 7nm computer chip last year in a July 11, 2014 posting, which featured IBM and mentioned HP Labs’ and other companies’ plans for shrinking their computer chips. Almost one year later, IBM has announced, in a July 9, 2015 IBM news release on PRnewswire.com, the accomplishment of a working 7nm test chip,

An alliance led by IBM Research (NYSE: IBM) today announced that it has produced the semiconductor industry’s first 7nm (nanometer) node test chips with functioning transistors.  The breakthrough, accomplished in partnership with GLOBALFOUNDRIES and Samsung at SUNY Polytechnic Institute’s Colleges of Nanoscale Science and Engineering (SUNY Poly CNSE), could result in the ability to place more than 20 billion tiny switches — transistors — on the fingernail-sized chips that power everything from smartphones to spacecraft.

To achieve the higher performance, lower power and scaling benefits promised by 7nm technology, researchers had to bypass conventional semiconductor manufacturing approaches. Among the novel processes and techniques pioneered by the IBM Research alliance were a number of industry-first innovations, most notably Silicon Germanium (SiGe) channel transistors and Extreme Ultraviolet (EUV) lithography integration at multiple levels.

Industry experts consider 7nm technology crucial to meeting the anticipated demands of future cloud computing and Big Data systems, cognitive computing, mobile products and other emerging technologies. Part of IBM’s $3 billion, five-year investment in chip R&D (announced in 2014), this accomplishment was made possible through a unique public-private partnership with New York State and joint development alliance with GLOBALFOUNDRIES, Samsung and equipment suppliers. The team is based at SUNY Poly’s NanoTech Complex in Albany [New York state].

“For business and society to get the most out of tomorrow’s computers and devices, scaling to 7nm and beyond is essential,” said Arvind Krishna, senior vice president and director of IBM Research. “That’s why IBM has remained committed to an aggressive basic research agenda that continually pushes the limits of semiconductor technology. Working with our partners, this milestone builds on decades of research that has set the pace for the microelectronics industry, and positions us to advance our leadership for years to come.”

Microprocessors utilizing 22nm and 14nm technology power today’s servers, cloud data centers and mobile devices, and 10nm technology is well on the way to becoming a mature technology. The IBM Research-led alliance achieved close to 50 percent area scaling improvements over today’s most advanced technology, introduced SiGe channel material for transistor performance enhancement at 7nm node geometries, process innovations to stack them below 30nm pitch and full integration of EUV lithography at multiple levels. These techniques and scaling could result in at least a 50 percent power/performance improvement for next generation mainframe and POWER systems that will power the Big Data, cloud and mobile era.

“Governor Andrew Cuomo’s trailblazing public-private partnership model is catalyzing historic innovation and advancement. Today’s [July 8, 2015] announcement is just one example of our collaboration with IBM, which furthers New York State’s global leadership in developing next generation technologies,” said Dr. Michael Liehr, SUNY Poly Executive Vice President of Innovation and Technology and Vice President of Research.  “Enabling the first 7nm node transistors is a significant milestone for the entire semiconductor industry as we continue to push beyond the limitations of our current capabilities.”

“Today’s announcement marks the latest achievement in our long history of collaboration to accelerate development of next-generation technology,” said Gary Patton, CTO and Head of Worldwide R&D at GLOBALFOUNDRIES. “Through this joint collaborative program based at the Albany NanoTech Complex, we are able to maintain our focus on technology leadership for our clients and partners by helping to address the development challenges central to producing a smaller, faster, more cost efficient generation of semiconductors.”

The 7nm node milestone continues IBM’s legacy of historic contributions to silicon and semiconductor innovation. They include the invention or first implementation of the single cell DRAM, the Dennard Scaling Laws, chemically amplified photoresists, copper interconnect wiring, Silicon on Insulator, strained engineering, multi core microprocessors, immersion lithography, high speed SiGe, High-k gate dielectrics, embedded DRAM, 3D chip stacking and Air gap insulators.

In 2014, they were talking about carbon nanotubes with regard to the 7nm chip; this shift to silicon germanium is interesting.

Sebastian Anthony in a July 9, 2015 article for Ars Technica offers some intriguing insight into the accomplishment and the technology (Note: A link has been removed),

… While it should be stressed that commercial 7nm chips remain at least two years away, this test chip from IBM and its partners is extremely significant for three reasons: it’s a working sub-10nm chip (this is pretty significant in itself); it’s the first commercially viable sub-10nm FinFET logic chip that uses silicon-germanium as the channel material; and it appears to be the first commercially viable design produced with extreme ultraviolet (EUV) lithography.

Technologically, SiGe and EUV are both very significant. SiGe has higher electron mobility than pure silicon, which makes it better suited for smaller transistors. The gap between two silicon nuclei is about 0.5nm; as the gate width gets ever smaller (about 7nm in this case), the channel becomes so small that the handful of silicon atoms can’t carry enough current. By mixing some germanium into the channel, electron mobility increases, and adequate current can flow. Silicon generally runs into problems at sub-10nm nodes, and we can expect Intel and TSMC to follow a similar path to IBM, GlobalFoundries, and Samsung (aka the Common Platform alliance).

EUV lithography is an even more interesting innovation. Basically, as chip features get smaller, you need a narrower beam of light to etch those features accurately, or you need to use multiple patterning (which we won’t go into here). The current state of the art for lithography is a 193nm ArF (argon fluoride) laser; that is, the light has a wavelength of 193nm. Complex optics and multiple painstaking steps are required to etch 14nm features using a 193nm light source. EUV has a wavelength of just 13.5nm, which will handily take us down into the sub-10nm realm, but so far it has proven very difficult and expensive to deploy commercially (it has been just around the corner for quite a few years now).
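Anthony’s wavelength argument can be made concrete with the Rayleigh resolution criterion, CD ≈ k1·λ/NA. The numerical-aperture and k1 values below are typical textbook figures – my assumptions, not numbers from the article:

```python
def min_feature_nm(wavelength_nm, numerical_aperture, k1=0.25):
    """Rayleigh criterion for the smallest printable half-pitch;
    k1 ~ 0.25 is the practical single-exposure limit."""
    return k1 * wavelength_nm / numerical_aperture

# 193nm ArF immersion (NA ~ 1.35) vs EUV (13.5nm, early tools' NA ~ 0.33)
arf = min_feature_nm(193, 1.35)   # ~36nm: why 14nm features need multiple patterning
euv = min_feature_nm(13.5, 0.33)  # ~10nm: sub-10nm comes within single-exposure reach
print(round(arf, 1), round(euv, 1))
```

The gap between those two results is the whole story: a single 193nm exposure bottoms out in the mid-30nm range, which is why 14nm features need the “multiple painstaking steps” Anthony mentions, while EUV reaches roughly 10nm in one pass.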

If you’re interested in the nuances, I recommend reading Anthony’s article in its entirety.

One final comment: there was no discussion of electrodes or other metallic components associated with computer chips. The metallic components are a topic of some interest to me (anyway), given some research published by scientists at the Massachusetts Institute of Technology (MIT) last year. From my Oct. 14, 2014 posting,

Research from the Massachusetts Institute of Technology (MIT) has revealed a new property of metal nanoparticles, in this case, silver. From an Oct. 12, 2014 news item on ScienceDaily,

A surprising phenomenon has been found in metal nanoparticles: They appear, from the outside, to be liquid droplets, wobbling and readily changing shape, while their interiors retain a perfectly stable crystal configuration.

The research team behind the finding, led by MIT professor Ju Li, says the work could have important implications for the design of components in nanotechnology, such as metal contacts for molecular electronic circuits. [my emphasis added]

This discovery and others regarding materials and phase changes at ever diminishing sizes hint that a computer with a functioning 7nm chip might be a bit further off than IBM is suggesting.

The nanostructure of cellulose at the University of Melbourne (Australia)

This is not the usual kind of nanocellulose story featured here as it doesn’t concern a nanocellulose material. Instead, this research focuses on the structure of cellulose at the nanoscale. From a May 21, 2015 news item on Nanotechnology Now,

Scientists from IBM Research and the Universities of Melbourne and Queensland have moved a step closer to identifying the nanostructure of cellulose — the basic structural component of plant cell walls.

The insights could pave the way for more disease resistant varieties of crops and increase the sustainability of the pulp, paper and fibre industry — one of the main uses of cellulose.

A May 21, 2015 University of Melbourne press release, which originated the news item, describes some of the difficulties of analyzing cellulose at the nanoscale and the role that IBM’s computing power played in overcoming them,

Tapping into IBM’s supercomputing power, researchers have been able to model the structure and dynamics of cellulose at the molecular level.

Dr Monika Doblin, Research Fellow and Deputy Node Leader at the School of BioSciences at the University of Melbourne said cellulose is a vital part of the plant’s structure, but its synthesis is yet to be fully understood.

“It’s difficult to work on cellulose synthesis in vitro because once plant cells are broken open, most of the enzyme activity is lost, so we needed to find other approaches to study how it is made,” Dr Doblin said.

“Thanks to IBM’s expertise in molecular modelling and VLSCI’s computational power, we have been able to create models of the plant wall at the molecular level which will lead to new levels of understanding about the formation of cellulose.”

The work, which was described in a recent scientific paper published in Plant Physiology, represents a significant step towards our understanding of cellulose biosynthesis and how plant cell walls assemble and function.

The research is part of a longer-term program at the Victorian Life Sciences Computation Initiative (VLSCI) to develop a 3D computer-simulated model of the entire plant wall.

Cellulose represents one of the most abundant organic compounds on earth with an estimated 180 billion tonnes produced by plants each year.

A plant makes cellulose by linking simple units of glucose together to form chains, which are then bundled together to form fibres. These fibres then wrap around the cell as the major component of the plant cell wall, providing rigidity, flexibility and defence against internal and external stresses.

Until now, scientists have been challenged with detailing the structure of plant cell walls due to the complexity of the work and the invasive nature of traditional physical methods which often cause damage to the plant cells.

Dr John Wagner, Manager of Computational Sciences, IBM Research – Australia, called it a ‘pioneering project’.

“We are bringing IBM Research’s expertise in computational biology, big data and smarter agriculture to bear in a large-scale, collaborative Australian science project with some of the brightest minds in the field. We are a keen supporter of the Victorian Life Sciences Computation Initiative and we’re very excited to see the scientific impact this work is now having.”

Using the IBM Blue Gene/Q supercomputer at VLSCI, known as Avoca, scientists were able to perform the quadrillions of calculations required to model the motions of cellulose atoms.

The research shows that within the cellulose structure, there are between 18 and 24 chains present within an elementary microfibril, much less than the 36 chains that had previously been assumed.

IBM Researcher, Dr. Daniel Oehme, said plant walls are the first barrier to disease pathogens.

“While we don’t fully understand the molecular pathway of pathogen infection and plant r

You can find out more about this work and affiliated projects at the Australian Research Centre (ARC) of Excellence in Plant Cell Walls.

Water’s liquid-vapour interface

The UK’s National Physical Laboratory (NPL), along with IBM and the University of Edinburgh, has developed a new quantum model for understanding water’s liquid-vapour interface, according to an April 20, 2015 news item on Nanowerk,

The National Physical Laboratory (NPL), the UK’s National Measurement Institute in collaboration with IBM and the University of Edinburgh, has used a new quantum model to reveal the molecular structure of water’s liquid surface.

The liquid-vapour interface of water is one of the most common of all heterogeneous (or non-uniform) environments. Understanding its molecular structure will provide insight into complex biochemical interactions underpinning many biological processes. But experimental measurements of the molecular structure of water’s surface are challenging, and currently competing models predict various different arrangements.

An April 20, 2015 NPL press release on EurekAlert, which originated the news item, describes the model and research in more detail,

The model is based on a single charged particle, the quantum Drude oscillator (QDO), which mimics the way the electrons of a real water molecule fluctuate and respond to their environment. This simplified representation retains interactions not normally accessible in classical models and accurately captures the properties of liquid water.

In new research, published in a featured article in the journal Physical Chemistry Chemical Physics, the team used the QDO model to determine the molecular structure of water’s liquid surface. The results provide new insight into the hydrogen-bonding topology at the interface, which is responsible for the unusually high surface tension of water.

This is the first time the QDO model of water has been applied to the liquid-vapour interface. The results enabled the researchers to identify the intrinsic asymmetry of hydrogen bonds as the mechanism responsible for the surface’s molecular orientation. The model was also capable of predicting the temperature dependence of the surface tension with remarkable accuracy – to within 1% of experimental values.

Coupled with earlier work on bulk water, this result demonstrates the exceptional transferability of the QDO approach and offers a promising new platform for molecular exploration of condensed matter.

Here’s a link to and a citation for the paper,

Hydrogen bonding and molecular orientation at the liquid–vapour interface of water by Flaviu S. Cipcigan, Vlad P. Sokhan, Andrew P. Jones, Jason Crain and Glenn J. Martyna.  Phys. Chem. Chem. Phys., 2015,17, 8660-8669 DOI: 10.1039/C4CP05506C First published online 17 Feb 2015

The paper is open access, although you do need to register on the site unless you have some other means of accessing it.

University of Toronto, ebola epidemic, and artificial intelligence applied to chemistry

It’s hard to tell much from the Nov. 5, 2014 University of Toronto news release by Michael Kennedy (also on EurekAlert but dated Nov. 10, 2014) about in silico drug testing focused on finding a treatment for ebola,

The University of Toronto, Chematria and IBM are combining forces in a quest to find new treatments for the Ebola virus.

Using a virtual research technology invented by Chematria, a startup housed at U of T’s Impact Centre, the team will use software that learns and thinks like a human chemist to search for new medicines. Running on Canada’s most powerful supercomputer, the effort will simulate and analyze the effectiveness of millions of hypothetical drugs in just a matter of weeks.

“What we are attempting would have been considered science fiction, until now,” says Abraham Heifets (PhD), a U of T graduate and the chief executive officer of Chematria. “We are going to explore the possible effectiveness of millions of drugs, something that used to take decades of physical research and tens of millions of dollars, in mere days with our technology.”

The news release makes it all sound quite exciting,

Chematria’s technology is a virtual drug discovery platform based on the science of deep learning neural networks and has previously been used for research on malaria, multiple sclerosis, C. difficile, and leukemia. [emphases mine]

Much like the software used to design airplanes and computer chips in simulation, this new system can predict the possible effectiveness of new medicines, without costly and time-consuming physical synthesis and testing. [emphasis mine] The system is driven by a virtual brain that teaches itself by “studying” millions of datapoints about how drugs have worked in the past. With this vast knowledge, the software can apply the patterns it has learned to predict the effectiveness of hypothetical drugs, and suggest surprising uses for existing drugs, transforming the way medicines are discovered.

My understanding is that Chematria’s is not the only “virtual drug discovery platform based on the science of deep learning neural networks” as is acknowledged in the next paragraph. In fact, there’s widespread interest in the medical research community as evidenced by such projects as Seurat-1’s NOTOX* and others. Regarding the research on “malaria, multiple sclerosis, C. difficile, and leukemia,” more details would be welcome, e.g., what happened?

A Nov. 4, 2014 article for Mashable by Anita Li does offer a new detail about the technology,

Now, a team of Canadian researchers are hunting for new Ebola treatments, using “groundbreaking” artificial-intelligence technology that they claim can predict the effectiveness of new medicines 150 times faster than current methods.

With the quotes around the word, groundbreaking, Li suggests a little skepticism about the claim.

Here’s more from Li where she seems to have found some company literature,

Chematria describes its technology as a virtual drug-discovery platform that helps pharmaceutical companies “determine which molecules can become medicines.” Here’s how it works, according to the company:

The system is driven by a virtual brain, modeled on the human visual cortex, that teaches itself by “studying” millions of datapoints about how drugs have worked in the past. With this vast knowledge, Chematria’s brain can apply the patterns it perceives, to predict the effectiveness of hypothetical drugs, and suggest surprising uses for existing drugs, transforming the way medicines are discovered.
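As a purely illustrative sketch of this kind of system – a single logistic “neuron” trained on toy binary fingerprints, not Chematria’s actual, unpublished architecture – here is how a model can learn from past data which substructure confers activity and then score a new hypothetical molecule:

```python
import math

def train_activity_model(fingerprints, labels, epochs=500, lr=0.5):
    """Fit a single logistic 'neuron' so that sigmoid(w.x + b) predicts
    active (1) vs inactive (0) from binary molecular fingerprints.
    A deliberately tiny stand-in for a deep neural network."""
    n = len(fingerprints[0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in zip(fingerprints, labels):
            p = 1 / (1 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))
            g = p - y                                  # gradient of the log-loss
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    """Probability that fingerprint x is an active compound."""
    return 1 / (1 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))

# Toy training data: bit 0 marks a hypothetical substructure that confers activity.
fps    = [[1, 0, 1, 0], [1, 1, 0, 0], [0, 1, 1, 0], [0, 0, 0, 1]]
labels = [1, 1, 0, 0]
w, b = train_activity_model(fps, labels)
print(predict(w, b, [1, 0, 0, 0]))   # scores a new, unseen 'molecule'
```

The real systems differ mainly in scale – deep networks, millions of richer datapoints – but the principle is the same: learn patterns from how drugs have worked in the past, then score hypothetical or repurposed molecules without synthesizing them.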

I was not able to find a Chematria website or anything much more than this brief description on the University of Toronto website (from the Impact Centre’s Current Companies webpage),

Chematria makes software that helps pharmaceutical companies determine which molecules can become medicines. With Chematria’s proprietary approach to molecular docking simulations, pharmaceutical researchers can confidently predict potent molecules for novel biological targets, thereby enabling faster drug development for a fraction of the price of wet-lab experiments.

Chematria’s Ebola project is focused on drugs already available but could be put to a new use (from Li’s article),

In response to the outbreak, Chematria recently launched an Ebola project, using its algorithm to evaluate molecules that have already gone through clinical trials, and have proven to be safe. “That means we can expedite the process of getting the treatment to the people who need it,” Heifets said. “In a pandemic situation, you’re under serious time pressure.”

He cited Aspirin as an example of proven medicine that has more than one purpose: People take it for headaches, but it’s also helpful for heart disease. Similarly, a drug that’s already out there may also hold the cure for Ebola.

I recommend reading Li’s article in its entirety.

The University of Toronto news release provides more detail about the partners involved in this ebola project,

… The unprecedented speed and scale of this investigation is enabled by the unique strengths of the three partners: Chematria is offering the core artificial intelligence technology that performs the drug research, U of T is contributing biological insights about Ebola that the system will use to search for new treatments and IBM is providing access to Canada’s fastest supercomputer, Blue Gene/Q.

“Our team is focusing on the mechanism Ebola uses to latch on to the cells it infects,” said Dr. Jeffrey Lee of the University of Toronto. “If we can interrupt that process with a new drug, it could prevent the virus from replicating, and potentially work against other viruses like Marburg and HIV that use the same mechanism.”

The initiative may also demonstrate an alternative approach to high-speed medical research. While giving drugs to patients will always require thorough clinical testing, zeroing in on the best drug candidates can take years using today’s most common methods. Critics say this slow and prohibitively expensive process is one of the key reasons that finding treatments for rare and emerging diseases is difficult.

“If we can find promising drug candidates for Ebola using computers alone,” said Heifets, “it will be a milestone for how we develop cures.”

I hope this effort, along with all the others being made around the world, proves helpful with Ebola. It’s good to see research into drugs (chemical formulations) that are familiar to the medical community and can be used for a different purpose than originally intended. Drugs that are ‘repurposed’ should be cheaper than new ones, and we already have data about their side effects.

As for the “milestone for how we develop cures,” this team’s work along with all the international research on this front and on how we assess toxicity should certainly make that milestone possible.

* Full disclosure: I came across Seurat-1’s NOTOX project when I attended (at Seurat-1’s expense) the 9th World Congress on Alternatives to Animal Testing held in Aug. 2014 in Prague.

TrueNorth, a brain-inspired chip architecture from IBM and Cornell University

As a Canadian, I find that “true north” is invariably followed by “strong and free” when singing our national anthem; for many Canadians it is almost the only phrase that is remembered without hesitation. Consequently, some of the buzz surrounding the publication of a paper celebrating ‘TrueNorth’, a brain-inspired chip, is a bit disconcerting. Nonetheless, here is the latest IBM (in collaboration with Cornell University) news from an Aug. 8, 2014 news item on Nanowerk,

Scientists from IBM unveiled the first neurosynaptic computer chip to achieve an unprecedented scale of one million programmable neurons, 256 million programmable synapses and 46 billion synaptic operations per second per watt. At 5.4 billion transistors, this fully functional and production-scale chip is currently one of the largest CMOS chips ever built, yet, while running at biological real time, it consumes a minuscule 70mW—orders of magnitude less power than a modern microprocessor. A neurosynaptic supercomputer the size of a postage stamp that runs on the energy equivalent of a hearing-aid battery, this technology could transform science, technology, business, government, and society by enabling vision, audition, and multi-sensory applications.

An Aug. 7, 2014 IBM news release, which originated the news item, provides an overview of the multi-year process this breakthrough represents (Note: Links have been removed),

There is a huge disparity between the human brain’s cognitive capability and ultra-low power consumption when compared to today’s computers. To bridge the divide, IBM scientists created something that didn’t previously exist—an entirely new neuroscience-inspired scalable and efficient computer architecture that breaks path with the prevailing von Neumann architecture used almost universally since 1946.

This second generation chip is the culmination of almost a decade of research and development, including the initial single core hardware prototype in 2011 and software ecosystem with a new programming language and chip simulator in 2013.

The new cognitive chip architecture has an on-chip two-dimensional mesh network of 4096 digital, distributed neurosynaptic cores, where each core module integrates memory, computation, and communication, and operates in an event-driven, parallel, and fault-tolerant fashion. To enable system scaling beyond single-chip boundaries, adjacent chips, when tiled, can seamlessly connect to each other—building a foundation for future neurosynaptic supercomputers. To demonstrate scalability, IBM also revealed a 16-chip system with sixteen million programmable neurons and four billion programmable synapses.
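For the curious, here’s a quick back-of-envelope check (in Python) of how the figures in the release fit together; note that the 256-neurons-per-core number is my inference from the totals, not something stated in the release:

```python
# Back-of-envelope check of the TrueNorth figures quoted above.
# The 256-neurons-per-core number is inferred from the totals,
# not stated explicitly in the release.
cores_per_chip = 4096
neurons_per_core = 1_048_576 // cores_per_chip         # 256
neurons_per_chip = cores_per_chip * neurons_per_core
synapses_per_neuron = 268_435_456 // neurons_per_chip  # 256

print(neurons_per_chip)                        # 1048576 -> "one million programmable neurons"
print(neurons_per_chip * synapses_per_neuron)  # 268435456 -> "256 million programmable synapses"

# The 16-chip demonstration system scales these totals linearly:
chips = 16
print(chips * neurons_per_chip)                        # 16777216 -> "sixteen million neurons"
print(chips * neurons_per_chip * synapses_per_neuron)  # 4294967296 -> "four billion synapses"
```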

“IBM has broken new ground in the field of brain-inspired computers, in terms of a radically new architecture, unprecedented scale, unparalleled power/area/speed efficiency, boundless scalability, and innovative design techniques. We foresee new generations of information technology systems – that complement today’s von Neumann machines – powered by an evolving ecosystem of systems, software, and services,” said Dr. Dharmendra S. Modha, IBM Fellow and IBM Chief Scientist, Brain-Inspired Computing, IBM Research. “These brain-inspired chips could transform mobility, via sensory and intelligent applications that can fit in the palm of your hand but without the need for Wi-Fi. This achievement underscores IBM’s leadership role at pivotal transformational moments in the history of computing via long-term investment in organic innovation.”

The Defense Advanced Research Projects Agency (DARPA) has funded the project since 2008 with approximately $53M via Phase 0, Phase 1, Phase 2, and Phase 3 of the Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) program. Current collaborators include Cornell Tech and iniLabs, Ltd.

Building the Chip

The chip was fabricated using Samsung’s 28nm process technology that has a dense on-chip memory and low-leakage transistors.

“It is an astonishing achievement to leverage a process traditionally used for commercially available, low-power mobile devices to deliver a chip that emulates the human brain by processing extreme amounts of sensory information with very little power,” said Shawn Han, vice president of Foundry Marketing, Samsung Electronics. “This is a huge architectural breakthrough that is essential as the industry moves toward the next-generation cloud and big-data processing. It’s a pleasure to be part of technical progress for next-generation through Samsung’s 28nm technology.”

The event-driven circuit elements of the chip used the asynchronous design methodology developed at Cornell Tech [aka Cornell University] and refined with IBM since 2008.

“After years of collaboration with IBM, we are now a step closer to building a computer similar to our brain,” said Professor Rajit Manohar, Cornell Tech.

The combination of cutting-edge process technology, hybrid asynchronous-synchronous design methodology, and new architecture has led to a power density of 20mW/cm2 which is nearly four orders of magnitude less than today’s microprocessors.
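The “nearly four orders of magnitude” claim is easy to sanity-check; the 100 W/cm² figure for a modern CPU below is a rough assumption on my part, not a number from the release:

```python
import math

# Checking the "nearly four orders of magnitude" power-density claim:
# TrueNorth's figure vs. a rough modern-CPU value (the CPU number is
# my assumption, not IBM's).
truenorth_w_cm2 = 20e-3    # 20 mW/cm^2, from the release
cpu_w_cm2 = 100.0          # rough modern-CPU power density (assumption)

print(round(math.log10(cpu_w_cm2 / truenorth_w_cm2), 2))  # 3.7 -- i.e. nearly four orders of magnitude
```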

Advancing the SyNAPSE Ecosystem

The new chip is a component of a complete end-to-end vertically integrated ecosystem spanning a chip simulator, neuroscience data, supercomputing, neuron specification, programming paradigm, algorithms and applications, and prototype design models. The ecosystem supports all aspects of the programming cycle from design through development, debugging, and deployment.

To bring forth this fundamentally different technological capability to society, IBM has designed a novel teaching curriculum for universities, customers, partners, and IBM employees.

Applications and Vision

This ecosystem signals a shift in moving computation closer to the data, taking in vastly varied kinds of sensory data, analyzing and integrating real-time information in a context-dependent way, and dealing with the ambiguity found in complex, real-world environments.

Looking to the future, IBM is working on integrating multi-sensory neurosynaptic processing into mobile devices constrained by power, volume and speed; integrating novel event-driven sensors with the chip; real-time multimedia cloud services accelerated by neurosynaptic systems; and neurosynaptic supercomputers by tiling multiple chips on a board, creating systems that would eventually scale to one hundred trillion synapses and beyond.

Building on previously demonstrated neurosynaptic cores with on-chip, online learning, IBM envisions building learning systems that adapt in real world settings. While today’s hardware is fabricated using a modern CMOS process, the underlying architecture is poised to exploit advances in future memory, 3D integration, logic, and sensor technologies to deliver even lower power, denser package, and faster speed.

I have two articles that may prove of interest: Peter Stratton’s Aug. 7, 2014 article for The Conversation provides an easy-to-read introduction to both brains, human and computer (as they apply to this research), and to TrueNorth (h/t phys.org, which also hosts Stratton’s article). There’s also an Aug. 7, 2014 article by Rob Farber for techenablement.com, which includes information from a range of text and video sources about TrueNorth and cognitive computing, as it’s also known (well worth checking out).

Here’s a link to and a citation for the paper,

A million spiking-neuron integrated circuit with a scalable communication network and interface by Paul A. Merolla, John V. Arthur, Rodrigo Alvarez-Icaza, Andrew S. Cassidy, Jun Sawada, Filipp Akopyan, Bryan L. Jackson, Nabil Imam, Chen Guo, Yutaka Nakamura, Bernard Brezzo, Ivan Vo, Steven K. Esser, Rathinakumar Appuswamy, Brian Taba, Arnon Amir, Myron D. Flickner, William P. Risk, Rajit Manohar, and Dharmendra S. Modha. Science 8 August 2014: Vol. 345 no. 6197 pp. 668-673 DOI: 10.1126/science.1254642

This paper is behind a paywall.

IBM weighs in with plans for a 7nm computer chip

On the heels of Intel’s announcement about a deal utilizing their 14nm low-power manufacturing process and speculations about a 10nm computer chip (my July 9, 2014 posting), IBM makes an announcement about a 7nm chip as per this July 10, 2014 news item on Azonano,

IBM today [July 10, 2014] announced it is investing $3 billion over the next 5 years in two broad research and early stage development programs to push the limits of chip technology needed to meet the emerging demands of cloud computing and Big Data systems. These investments will push IBM’s semiconductor innovations from today’s breakthroughs into the advanced technology leadership required for the future.

A very comprehensive July 10, 2014 news release lays out the company’s plans for this $3B investment representing 10% of IBM’s total research budget,

The first research program is aimed at so-called “7 nanometer and beyond” silicon technology that will address serious physical challenges that are threatening current semiconductor scaling techniques and will impede the ability to manufacture such chips. The second is focused on developing alternative technologies for post-silicon era chips using entirely different approaches, which IBM scientists and other experts say are required because of the physical limitations of silicon based semiconductors.

Cloud and big data applications are placing new challenges on systems, just as the underlying chip technology is facing numerous significant physical scaling limits.  Bandwidth to memory, high speed communication and device power consumption are becoming increasingly challenging and critical.

The teams will comprise IBM Research scientists and engineers from Albany and Yorktown, New York; Almaden, California; and Europe. In particular, IBM will be investing significantly in emerging areas of research that are already underway at IBM such as carbon nanoelectronics, silicon photonics, new memory technologies, and architectures that support quantum and cognitive computing. [emphasis mine]

These teams will focus on providing orders of magnitude improvement in system level performance and energy efficient computing. In addition, IBM will continue to invest in the nanosciences and quantum computing–two areas of fundamental science where IBM has remained a pioneer for over three decades.

7 nanometer technology and beyond

IBM Researchers and other semiconductor experts predict that, while challenging, semiconductors show promise to scale from today’s 22 nanometers down to 14 and then 10 nanometers in the next several years. However, scaling to 7 nanometers and perhaps below by the end of the decade will require significant investment and innovation in semiconductor architectures as well as invention of new tools and techniques for manufacturing.
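The 22 → 14 → 10 → 7 nanometer sequence reflects the classical rule of thumb of a roughly 0.7× linear shrink per generation, which halves transistor area; a quick sketch (my illustration, not an IBM figure):

```python
# Rule-of-thumb node scaling (illustrative, not from the release):
# each generation shrinks linear dimensions by ~0.7x, which roughly
# halves transistor area (0.7**2 ~ 0.49).
node = 22.0
for _ in range(3):
    node *= 0.7
    print(round(node, 1))   # 15.4, 10.8, 7.5 -- close to the 14/10/7 nm sequence

print(round(0.7 ** 2, 2))   # 0.49 -- area per transistor roughly halves each step
```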

“The question is not if we will introduce 7 nanometer technology into manufacturing, but rather how, when, and at what cost?” said John Kelly, senior vice president, IBM Research. “IBM engineers and scientists, along with our partners, are well suited for this challenge and are already working on the materials science and device engineering required to meet the demands of the emerging system requirements for cloud, big data, and cognitive systems. This new investment will ensure that we produce the necessary innovations to meet these challenges.”

“Scaling to 7nm and below is a terrific challenge, calling for deep physics competencies in processing nano materials affinities and characteristics. IBM is one of a very few companies who has repeatedly demonstrated this level of science and engineering expertise,” said Richard Doherty, technology research director, The Envisioneering Group.

Bridge to a “Post-Silicon” Era

Silicon transistors, tiny switches that carry information on a chip, have been made smaller year after year, but they are approaching a point of physical limitation. Their increasingly small dimensions, now reaching the nanoscale, will prohibit any gains in performance due to the nature of silicon and the laws of physics. Within a few more generations, classical scaling and shrinkage will no longer yield the sizable benefits of lower power, lower cost and higher speed processors that the industry has become accustomed to.

With virtually all electronic equipment today built on complementary metal–oxide–semiconductor (CMOS) technology, there is an urgent need for new materials and circuit architecture designs compatible with this engineering process as the technology industry nears physical scalability limits of the silicon transistor.

Beyond 7 nanometers, the challenges dramatically increase, requiring a new kind of material to power systems of the future, and new computing platforms to solve problems that are unsolvable or difficult to solve today. Potential alternatives include new materials such as carbon nanotubes, and non-traditional computational approaches such as neuromorphic computing, cognitive computing, machine learning techniques, and the science behind quantum computing.

As the leader in advanced schemes that point beyond traditional silicon-based computing, IBM holds over 500 patents for technologies that will drive advancements at 7nm and beyond silicon — more than twice the nearest competitor. These continued investments will accelerate the invention and introduction into product development for IBM’s highly differentiated computing systems for cloud, and big data analytics.

Several exploratory research breakthroughs that could lead to major advancements in delivering dramatically smaller, faster and more powerful computer chips, include quantum computing, neurosynaptic computing, silicon photonics, carbon nanotubes, III-V technologies, low power transistors and graphene:

Quantum Computing

The most basic piece of information that a typical computer understands is a bit. Much like a light that can be switched on or off, a bit can have only one of two values: “1” or “0.” A quantum bit, or qubit, can hold a “1,” a “0,” or both values at the same time. Described as superposition, this special property of qubits enables quantum computers to weed through millions of solutions all at once, while desktop PCs would have to consider them one at a time.

IBM is a world leader in superconducting qubit-based quantum computing science and is a pioneer in the field of experimental and theoretical quantum information, fields that are still in the category of fundamental science – but one that, in the long term, may allow the solution of problems that are today either impossible or impractical to solve using conventional machines. The team recently demonstrated the first experimental realization of parity check with three superconducting qubits, an essential building block for one type of quantum computer.
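For readers who want to see the superposition idea concretely, here is a minimal sketch (my own illustration, not IBM’s code) of a single qubit as a pair of amplitudes:

```python
import math

# A qubit's state is a pair of amplitudes (a, b) with |a|^2 + |b|^2 = 1;
# |a|^2 and |b|^2 are the probabilities of measuring "0" and "1".
ket0 = (1.0, 0.0)   # the definite state "0"

def hadamard(state):
    # The Hadamard gate turns a definite state into an equal superposition.
    a, b = state
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

psi = hadamard(ket0)
print(abs(psi[0]) ** 2, abs(psi[1]) ** 2)  # ~0.5 ~0.5: "0" and "1" at once

# n qubits hold 2**n amplitudes simultaneously -- the source of the
# "weed through millions of solutions all at once" intuition:
print(2 ** 20)  # 1048576 basis states from just 20 qubits
```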

Neurosynaptic Computing

Bringing together nanoscience, neuroscience, and supercomputing, IBM and university partners have developed an end-to-end ecosystem including a novel non-von Neumann architecture, a new programming language, as well as applications. This novel technology allows for computing systems that emulate the brain’s computing efficiency, size and power usage. IBM’s long-term goal is to build a neurosynaptic system with ten billion neurons and a hundred trillion synapses, all while consuming only one kilowatt of power and occupying less than two liters of volume.

Silicon Photonics

IBM has been a pioneer in the area of CMOS integrated silicon photonics for over 12 years, a technology that integrates functions for optical communications on a silicon chip, and the IBM team has recently designed and fabricated the world’s first monolithic silicon photonics based transceiver with wavelength division multiplexing.  Such transceivers will use light to transmit data between different components in a computing system at high data rates, low cost, and in an energetically efficient manner.

Silicon nanophotonics takes advantage of pulses of light for communication rather than traditional copper wiring and provides a super highway for large volumes of data to move at rapid speeds between computer chips in servers, large datacenters, and supercomputers, thus alleviating the limitations of congested data traffic and high-cost traditional interconnects.

Businesses are entering a new era of computing that requires systems to process and analyze, in real-time, huge volumes of information known as Big Data. Silicon nanophotonics technology provides answers to Big Data challenges by seamlessly connecting various parts of large systems, whether few centimeters or few kilometers apart from each other, and move terabytes of data via pulses of light through optical fibers.
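The wavelength division multiplexing mentioned above is simple to sketch arithmetically; the channel count and per-channel data rate below are illustrative assumptions on my part, not figures from the release:

```python
# Wavelength division multiplexing in one line of arithmetic: each
# wavelength ("color" of light) carries an independent data stream,
# and all of them share one fiber. The numbers here are illustrative
# assumptions, not figures from the IBM release.
channels = 8               # independent wavelengths on one fiber
gbps_per_channel = 25      # data rate per wavelength
aggregate_gbps = channels * gbps_per_channel
print(aggregate_gbps)      # 200 Gb/s on a single fiber
```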

III-V technologies

IBM researchers have demonstrated the world’s highest transconductance on a self-aligned III-V channel metal-oxide semiconductor (MOS) field-effect transistor (FET) device structure that is compatible with CMOS scaling. These materials and structural innovations are expected to pave the path for technology scaling at 7nm and beyond. With more than an order of magnitude higher electron mobility than silicon, integrating III-V materials into CMOS enables higher performance at lower power density, allowing for an extension of power/performance scaling to meet the demands of cloud computing and big data systems.

Carbon Nanotubes

IBM Researchers are working in the area of carbon nanotube (CNT) electronics and exploring whether CNTs can replace silicon beyond the 7 nm node.  As part of its activities for developing carbon nanotube based CMOS VLSI circuits, IBM recently demonstrated — for the first time in the world — 2-way CMOS NAND gates using 50 nm gate length carbon nanotube transistors.

IBM also has demonstrated the capability for purifying carbon nanotubes to 99.99 percent, the highest (verified) purities demonstrated to date, and transistors at 10 nm channel length that show no degradation due to scaling–this is unmatched by any other material system to date.

Carbon nanotubes are single atomic sheets of carbon rolled up into a tube. The carbon nanotubes form the core of a transistor device that will work in a fashion similar to the current silicon transistor, but will be better performing. They could be used to replace the transistors in chips that power data-crunching servers, high performing computers and ultra fast smart phones.

Carbon nanotube transistors can operate as excellent switches at molecular dimensions of less than ten nanometers – the equivalent to 10,000 times thinner than a strand of human hair and less than half the size of the leading silicon technology. Comprehensive modeling of the electronic circuits suggests that about a five to ten times improvement in performance compared to silicon circuits is possible.

Graphene

Graphene is pure carbon in the form of a one atomic layer thick sheet.  It is an excellent conductor of heat and electricity, and it is also remarkably strong and flexible.  Electrons can move in graphene about ten times faster than in commonly used semiconductor materials such as silicon and silicon germanium. Its characteristics offer the possibility to build faster switching transistors than are possible with conventional semiconductors, particularly for applications in the handheld wireless communications business where it will be a more efficient switch than those currently used.

Recently in 2013, IBM demonstrated the world’s first graphene based integrated circuit receiver front end for wireless communications. The circuit consisted of a 2-stage amplifier and a down converter operating at 4.3 GHz.

Next Generation Low Power Transistors

In addition to new materials like CNTs, new architectures and innovative device concepts are required to boost future system performance. Power dissipation is a fundamental challenge for nanoelectronic circuits. To explain the challenge, consider a leaky water faucet — even after closing the valve as far as possible water continues to drip — this is similar to today’s transistor, in that energy is constantly “leaking” or being lost or wasted in the off-state.

A potential alternative to today’s power hungry silicon field effect transistors are so-called steep slope devices. They could operate at much lower voltage and thus dissipate significantly less power. IBM scientists are researching tunnel field effect transistors (TFETs). In this special type of transistors the quantum-mechanical effect of band-to-band tunneling is used to drive the current flow through the transistor. TFETs could achieve a 100-fold power reduction over complementary CMOS transistors, so integrating TFETs with CMOS technology could improve low-power integrated circuits.
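The “leaky faucet” analogy has a precise form: a conventional transistor cannot switch off faster than the room-temperature thermionic limit of (kT/q)·ln 10 ≈ 60 mV per decade of current, which is exactly the limit steep-slope devices circumvent. A quick sketch (the 30 mV/dec TFET value is illustrative, not an IBM figure):

```python
import math

# A conventional MOSFET cannot switch off faster than
# SS = (kT/q) * ln(10) volts per decade of current at room
# temperature -- the thermionic limit behind the "leaky faucet".
k = 1.380649e-23        # Boltzmann constant, J/K
q = 1.602176634e-19     # elementary charge, C
T = 300.0               # room temperature, K

ss_limit_mv = (k * T / q) * math.log(10) * 1000
print(round(ss_limit_mv, 1))   # 59.5 mV/decade

# A steep-slope device such as a TFET is not bound by this limit; the
# 30 mV/dec value below is illustrative, not an IBM figure. For the
# same 5 decades of on/off ratio it needs about half the voltage swing:
decades = 5
print(round(decades * ss_limit_mv))  # 298 mV for a conventional MOSFET
print(decades * 30)                  # 150 mV for a hypothetical 30 mV/dec TFET
```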

Recently, IBM has developed a novel method to integrate III-V nanowires and heterostructures directly on standard silicon substrates and built the first ever InAs/Si tunnel diodes and TFETs using InAs as source and Si as channel with wrap-around gate as steep slope device for low power consumption applications.

“In the next ten years computing hardware systems will be fundamentally different as our scientists and engineers push the limits of semiconductor innovations to explore the post-silicon future,” said Tom Rosamilia, senior vice president, IBM Systems and Technology Group. “IBM Research and Development teams are creating breakthrough innovations that will fuel the next era of computing systems.”

IBM’s historic contributions to silicon and semiconductor innovation include the invention and/or first implementation of: the single cell DRAM, the “Dennard scaling laws” underpinning “Moore’s Law”, chemically amplified photoresists, copper interconnect wiring, Silicon on Insulator, strained engineering, multi core microprocessors, immersion lithography, high speed silicon germanium (SiGe), High-k gate dielectrics, embedded DRAM, 3D chip stacking, and Air gap insulators.

IBM researchers also are credited with initiating the era of nano devices following the Nobel prize winning invention of the scanning tunneling microscope which enabled nano and atomic scale invention and innovation.

IBM will also continue to fund and collaborate with university researchers to explore and develop the future technologies for the semiconductor industry. In particular, IBM will continue to support and fund university research through private-public partnerships such as the NanoElectronics Research Initiative (NRI), the Semiconductor Advanced Research Network (STARnet), and the Global Research Consortium (GRC) of the Semiconductor Research Corporation.

I highlighted ‘new memory technologies’ as this brings to mind HP Labs and their major investment in ‘memristive’ technologies noted in my June 26, 2014 posting,

… During a two-hour presentation held a year and a half ago, they laid out how the computer might work, its benefits, and the expectation that about 75 percent of HP Labs personnel would be dedicated to this one project. “At the end, Meg [Meg Whitman, CEO of HP] turned to [Chief Financial Officer] Cathie Lesjak and said, ‘Find them more money,’” says John Sontag, the vice president of systems research at HP, who attended the meeting and is in charge of bringing the Machine to life. “People in Labs see this as a once-in-a-lifetime opportunity.”

The Machine is based on the memristor and other associated technologies.

Getting back to IBM, there’s this analysis of the $3B investment ($600M/year for five years) by Alex Konrad in a July 10, 2014 article for Forbes (Note: A link has been removed),

When IBM … announced a $3 billion commitment to even tinier semiconductor chips that no longer depended on silicon on Wednesday, the big news was that IBM’s putting a lot of money into a future for chips where Moore’s Law no longer applies. But on second glance, the move to spend billions on more experimental ideas like silicon photonics and carbon nanotubes shows that IBM’s finally shifting large portions of its research budget into more ambitious and long-term ideas.

… IBM tells Forbes the $3 billion isn’t additional money being added to its R&D spend, an area where analysts have told Forbes they’d like to see more aggressive cash commitments in the future. IBM will still spend about $6 billion a year on R&D, 6% of revenue. Ten percent of that research budget, however, now has to come from somewhere else to fuel these more ambitious chip projects.

Neal Ungerleider’s July 11, 2014 article for Fast Company focuses on the neuromorphic computing and quantum computing aspects of this $3B initiative (Note: Links have been removed),

The new R&D initiatives fall into two categories: Developing nanotech components for silicon chips for big data and cloud systems, and experimentation with “post-silicon” microchips. This will include research into quantum computers which don’t know binary code, neurosynaptic computers which mimic the behavior of living brains, carbon nanotubes, graphene tools and a variety of other technologies.

IBM’s investment is one of the largest for quantum computing to date; the company is one of the biggest researchers in the field, along with a Canadian company named D-Wave which is partnering with Google and NASA to develop quantum computer systems.

The curious can find D-Wave Systems here. There’s also a January 19, 2012 posting here which discusses D-Wave’s situation at that time.

Final observation: these are fascinating developments, especially for the insight they provide into the worries troubling HP Labs, Intel, and IBM as they jockey for position.

ETA July 14, 2014: Dexter Johnson has a July 11, 2014 posting on his Nanoclast blog (on the IEEE [Institute for Electrical and Electronics Engineers]) about the IBM announcement and which features some responses he received from IBM officials to his queries,

While this may be a matter of fascinating speculation for investors, the impact on nanotechnology development  is going to be significant. To get a better sense of what it all means, I was able to talk to some of the key figures of IBM’s push in nanotechnology research.

I conducted e-mail interviews with Tze-Chiang (T.C.) Chen, vice president science & technology, IBM Fellow at the Thomas J. Watson Research Center and Wilfried Haensch, senior manager, physics and materials for logic and communications, IBM Research.

Silicon versus Nanomaterials

First, I wanted to get a sense for how long IBM envisioned sticking with silicon and when they expected the company would permanently make the move away from CMOS to alternative nanomaterials. Unfortunately, as expected, I didn’t get solid answers, except for them to say that new manufacturing tools and techniques need to be developed now.

He goes on to ask about carbon nanotubes and graphene. Interestingly, IBM does not have a wide range of electronics applications in mind for graphene.  I encourage you to read Dexter’s posting as Dexter got answers to some very astute and pointed questions.

Memristor, memristor! What is happening? News from the University of Michigan and HP Laboratories

Professor Wei Lu (whose work on memristors has been mentioned here a few times [an April 15, 2010 posting and an April 19, 2012 posting]) has made a discovery about memristors with significant implications (from a June 25, 2014 news item on Azonano),

In work that unmasks some of the magic behind memristors and “resistive random access memory,” or RRAM—cutting-edge computer components that combine logic and memory functions—researchers have shown that the metal particles in memristors don’t stay put as previously thought.

The findings have broad implications for the semiconductor industry and beyond. They show, for the first time, exactly how some memristors remember.

A June 24, 2014 University of Michigan news release, which originated the news item, includes Lu’s perspective on this discovery and more details about it,

“Most people have thought you can’t move metal particles in a solid material,” said Wei Lu, associate professor of electrical and computer engineering at the University of Michigan. “In a liquid and gas, it’s mobile and people understand that, but in a solid we don’t expect this behavior. This is the first time it has been shown.”

Lu, who led the project, and colleagues at U-M and the Electronic Research Centre Jülich in Germany used transmission electron microscopes to watch and record what happens to the atoms in the metal layer of their memristor when they exposed it to an electric field. The metal layer was encased in the dielectric material silicon dioxide, which is commonly used in the semiconductor industry to help route electricity.

They observed the metal atoms becoming charged ions, clustering with up to thousands of others into metal nanoparticles, and then migrating and forming a bridge between the electrodes at the opposite ends of the dielectric material.

They demonstrated this process with several metals, including silver and platinum. And depending on the materials involved and the electric current, the bridge formed in different ways.

The bridge, also called a conducting filament, stays put after the electrical power is turned off in the device. So when researchers turn the power back on, the bridge is there as a smooth pathway for current to travel along. Further, the electric field can be used to change the shape and size of the filament, or break the filament altogether, which in turn regulates the resistance of the device, or how easily current can flow through it.

Computers built with memristors would encode information in these different resistance values, which is in turn based on a different arrangement of conducting filaments.
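The filament behavior described above can be caricatured as a tiny state machine; this toy model (my own sketch with illustrative values, not the researchers’ model) shows why the device “remembers” with the power off:

```python
# Toy resistive-switching model of the filament behavior described
# above (values and thresholds are illustrative, not from the paper):
# a positive pulse above threshold grows the conducting filament (low
# resistance), a negative pulse ruptures it (high resistance), and the
# state persists with no voltage applied -- the "memory" in memristor.

R_ON, R_OFF = 1e3, 1e6      # ohms: filament formed vs. ruptured
V_SET, V_RESET = 1.0, -1.0  # volts: form / rupture thresholds

def apply_pulse(state, v):
    if v >= V_SET:
        return R_ON      # metal ions migrate and bridge the electrodes
    if v <= V_RESET:
        return R_OFF     # the field breaks the filament
    return state         # below threshold: the stored state is retained

r = R_OFF
r = apply_pulse(r, 1.2)   # SET pulse -> low resistance
print(r)                  # 1000.0
r = apply_pulse(r, 0.0)   # power off, read later: the value persists
print(r)                  # 1000.0
r = apply_pulse(r, -1.5)  # RESET pulse -> high resistance
print(r)                  # 1000000.0
```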

Memristor researchers like Lu and his colleagues had theorized that the metal atoms in memristors moved, but previous results had yielded different shaped filaments and so they thought they hadn’t nailed down the underlying process.

“We succeeded in resolving the puzzle of apparently contradicting observations and in offering a predictive model accounting for materials and conditions,” said Ilia Valov, principal investigator at the Electronic Materials Research Centre Jülich. “Also the fact that we observed particle movement driven by electrochemical forces within dielectric matrix is in itself a sensation.”

The implications for this work (from the news release),

The results could lead to a new approach to chip design—one that involves using fine-tuned electrical signals to lay out integrated circuits after they’re fabricated. And it could also advance memristor technology, which promises smaller, faster, cheaper chips and computers inspired by biological brains in that they could perform many tasks at the same time.

As is becoming more common these days (from the news release),

Lu is a co-founder of Crossbar Inc., a Santa Clara, Calif.-based startup working to commercialize RRAM. Crossbar has just completed a $25 million Series C funding round.

Here’s a link to and a citation for the paper,

Electrochemical dynamics of nanoscale metallic inclusions in dielectrics by Yuchao Yang, Peng Gao, Linze Li, Xiaoqing Pan, Stefan Tappertzhofen, ShinHyun Choi, Rainer Waser, Ilia Valov, & Wei D. Lu. Nature Communications 5, Article number: 4232 doi:10.1038/ncomms5232 Published 23 June 2014

This paper is behind a paywall.

The other party instrumental in the development and, they hope, the commercialization of memristors is HP (Hewlett Packard) Laboratories (HP Labs). Anyone familiar with this blog will likely know I have covered the topic frequently, starting with an essay explaining the basics on my Nanotech Mysteries wiki (or you can check this more extensive and more recently updated entry on Wikipedia), and continuing with subsequent entries here over the years. The most recent is a Jan. 9, 2014 posting which featured the then latest information on the HP Labs memristor situation (scroll down about 50% of the way). This new information is more in the nature of a revelation of details than an update on its status. Sebastian Anthony’s June 11, 2014 article for extremetech.com lays out the situation plainly (Note: Links have been removed),

HP, one of the original 800lb Silicon Valley gorillas that has seen much happier days, is staking everything on a brand new computer architecture that it calls… The Machine. Judging by an early report from Bloomberg Businessweek, up to 75% of HP’s once fairly illustrious R&D division — HP Labs – are working on The Machine. As you would expect, details of what will actually make The Machine a unique proposition are hard to come by, but it sounds like HP’s groundbreaking work on memristors (pictured top) and silicon photonics will play a key role.

First things first, we’re probably not talking about a consumer computing architecture here, though it’s possible that technologies commercialized by The Machine will percolate down to desktops and laptops. Basically, HP used to be a huge player in the workstation and server markets, with its own operating system and hardware architecture, much like Sun. Over the last 10 years though, Intel’s x86 architecture has rapidly taken over, to the point where HP (and Dell and IBM) are essentially just OEM resellers of commodity x86 servers. This has driven down enterprise profit margins — and when combined with its huge stake in the diminishing PC market, you can see why HP is rather nervous about the future. The Machine, and IBM’s OpenPower initiative, are both attempts to get out from underneath Intel’s x86 monopoly.

While exact details are hard to come by, it seems The Machine is predicated on the idea that current RAM, storage, and interconnect technology can’t keep up with modern Big Data processing requirements. HP is working on two technologies that could solve both problems: Memristors could replace RAM and long-term flash storage, and silicon photonics could provide faster on- and off-motherboard buses. Memristors essentially combine the benefits of DRAM and flash storage in a single, hyper-fast, super-dense package. Silicon photonics is all about reducing optical transmission and reception to a scale that can be integrated into silicon chips (moving from electrical to optical would allow for much higher data rates and lower power consumption). Both technologies can be built using conventional fabrication techniques.

In a June 11, 2014 article by Ashlee Vance for Bloomberg Businessweek, the company’s CTO (Chief Technology Officer), Martin Fink, provides new details,

That’s what they’re calling it at HP Labs: “the Machine.” It’s basically a brand-new type of computer architecture that HP’s engineers say will serve as a replacement for today’s designs, with a new operating system, a different type of memory, and superfast data transfer. The company says it will bring the Machine to market within the next few years or fall on its face trying. “We think we have no choice,” says Martin Fink, the chief technology officer and head of HP Labs, who is expected to unveil HP’s plans at a conference Wednesday [June 11, 2014].

In my Jan. 9, 2014 posting there’s a quote from Martin Fink stating that 2018 would be the earliest date for the company’s StoreServ arrays to be packed with 100TB Memristor drives (the Machine?). The company later clarified the comment by noting that it’s very difficult to set dates for new technology arrivals.

Vance shares what could be a stirring ‘origins’ story of sorts, provided the Machine is successful,

The Machine started to take shape two years ago, after Fink was named director of HP Labs. Assessing the company’s projects, he says, made it clear that HP was developing the needed components to create a better computing system. Among its research projects: a new form of memory known as memristors; and silicon photonics, the transfer of data inside a computer using light instead of copper wires. And its researchers have worked on operating systems including Windows, Linux, HP-UX, Tru64, and NonStop.

Fink and his colleagues decided to pitch HP Chief Executive Officer Meg Whitman on the idea of assembling all this technology to form the Machine. During a two-hour presentation held a year and a half ago, they laid out how the computer might work, its benefits, and the expectation that about 75 percent of HP Labs personnel would be dedicated to this one project. “At the end, Meg turned to [Chief Financial Officer] Cathie Lesjak and said, ‘Find them more money,’” says John Sontag, the vice president of systems research at HP, who attended the meeting and is in charge of bringing the Machine to life. “People in Labs see this as a once-in-a-lifetime opportunity.”

Here is the memristor making an appearance in Vance’s article,

HP’s bet is the memristor, a nanoscale chip that Labs researchers must build and handle in full anticontamination clean-room suits. At the simplest level, the memristor consists of a grid of wires with a stack of thin layers of materials such as tantalum oxide at each intersection. When a current is applied to the wires, the materials’ resistance is altered, and this state can hold after the current is removed. At that point, the device is essentially remembering 1s or 0s depending on which state it is in, multiplying its storage capacity. HP can build these chips with traditional semiconductor equipment and expects to be able to pack unprecedented amounts of memory—enough to store huge databases of pictures, files, and data—into a computer.

New memory and networking technology requires a new operating system. Most applications written in the past 50 years have been taught to wait for data, assuming that the memory systems feeding the main computer’s chips are slow. Fink has assigned one team to develop the open-source Machine OS, which will assume the availability of a high-speed, constant memory store. …
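The crossbar layout Vance describes, a grid of wires with a resistive stack at each intersection, can be sketched as a toy memory array. This is purely illustrative; the cell values, dimensions, and read/write scheme are invented for the sketch and are not HP’s design.

```python
# Toy memristor crossbar: each (row, column) wire intersection holds one
# resistance state, read back as a bit (illustrative only).
R_ON, R_OFF = 1e3, 1e6  # ohms: low resistance = 1, high resistance = 0

class Crossbar:
    def __init__(self, rows, cols):
        # All cells start in the high-resistance (0) state.
        self.cells = [[R_OFF] * cols for _ in range(rows)]

    def write(self, row, col, bit):
        # Applying a current at the intersection alters its resistance,
        # and the new state holds after the current is removed.
        self.cells[row][col] = R_ON if bit else R_OFF

    def read(self, row, col):
        return 1 if self.cells[row][col] < 1e5 else 0

xbar = Crossbar(4, 8)  # 4 word lines x 8 bit lines = 4 bytes
for i, bit in enumerate([0, 1, 0, 0, 1, 0, 0, 0]):  # 0b01001000 = 'H'
    xbar.write(0, i, bit)
byte = int("".join(str(xbar.read(0, i)) for i in range(8)), 2)
print(chr(byte))  # H
```

Because a cell sits at every wire crossing rather than needing a transistor per bit, this geometry is what underlies the density claims made for memristor memory.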

Peter Bright in his June 11, 2014 article for Ars Technica opens his article with a controversial statement (Note: Links have been removed),

In 2008, scientists at HP invented a fourth fundamental component to join the resistor, capacitor, and inductor: the memristor. [emphasis mine] Theorized back in 1971, memristors showed promise in computing as they can be used to both build logic gates, the building blocks of processors, and also act as long-term storage.

Whether or not the memristor is a fourth fundamental component has been a matter of some debate as you can see in this Memristor entry (section on Memristor definition and criticism) on Wikipedia.

Bright goes on to provide a 2016 delivery date for some type of memristor-based product and additional technical insight about the Machine,

… By 2016, the company plans to have memristor-based DIMMs, which will combine the high storage densities of hard disks with the high performance of traditional DRAM.

John Sontag, vice president of HP Systems Research, said that The Machine would use “electrons for processing, photons for communication, and ions for storage.” The electrons are found in conventional silicon processors, and the ions are found in the memristors. The photons are because the company wants to use optical interconnects in the system, built using silicon photonics technology. With silicon photonics, photons are generated on, and travel through, “circuits” etched onto silicon chips, enabling conventional chip manufacturing to construct optical parts. This allows the parts of the system using photons to be tightly integrated with the parts using electrons.

The memristor story has proved even more fascinating than I expected back in 2008, when I was already as fascinated as could be, or so I thought.

Fungal infections, begone!

A May 7, 2014 news item on Nanowerk highlights some antifungal research at A*STAR (Singapore’s Agency for Science, Technology and Research),

Pathogenic fungi like Candida albicans can cause oral, skin, nail and genital infections. While exposure to pathogenic fungi is generally not life-threatening, it can be deadly to immunocompromised patients with AIDS or cancer. A variety of antifungal medications, such as triazoles and polyenes, are currently used for treating fungal infections. The range of these antifungal medications, however, is extremely limited, with some fungal species developing resistance to these drugs.

Yi Yan Yang at the A*STAR Institute of Bioengineering and Nanotechnology in Singapore and co-workers, in collaboration with IBM Almaden Research Center in the United States, have discovered four cationic terephthalamide-bisurea compounds with strong antifungal activity, excellent microbial selectivity and low host toxicity …

A May 7, 2014 A*STAR news release, which originated the news item, describes the research in detail (Note: A link has been removed),

Conformational analysis revealed that the terephthalamide-bisurea compounds have a Z-shaped structure: the terephthalamide sits in the middle, urea groups on both sides of the terephthalamide, and cationic charges at both ends. The researchers prepared compounds with different spacers — ethyl, butyl, hexyl or benzyl amine — in-between the urea group and the cationic charge.

When dissolved in water, the terephthalamide-bisurea compounds aggregate to form fibers with lengths ranging from a few hundred nanometers to several micrometers. Some of the compounds form fibers with high flexibility and others with high rigidity.

The researchers evaluated the antifungal activity of their terephthalamide-bisurea compounds against C. albicans. They found that all of the cationic compounds effectively inhibited fungal growth, even when the fungal concentration increased from 10² to 10⁵ colony-forming units per milliliter.

The researchers believe that the potent antifungal activity is largely due to the formation of fibers with extremely small diameters on the order of 5 to 10 nanometers, which facilitates the rupture of fungal membranes. “This is particularly important because the fungal membrane of C. albicans is multilayered and has low negative charges,” explains Yang. “It also helps explain why cationic terephthalamide-bisurea compounds could easily penetrate the fungal membrane.”

The terephthalamide-bisurea compounds also eradicated clinically isolated drug-resistant C. albicans. The compounds prevent the development of drug resistance by rupturing the fungal membrane of C. albicans and disrupting the biofilm (see image).

Additionally, cytotoxicity tests showed that the cationic terephthalamide-bisurea compounds exhibit low toxicity toward mammalian cells and in a mouse model, revealing that the compounds “are relatively safe for preventing and treating fungal infections,” says Yang. [emphasis mine]

It’s nice to see that this potential anti-fungal treatment isn’t damaging to one’s cells.

Here’s a link to and a citation for the paper,

Supramolecular high-aspect ratio assemblies with strong antifungal activity by Kazuki Fukushima, Shaoqiong Liu, Hong Wu, Amanda C. Engler, Daniel J. Coady, Hareem Maune, Jed Pitera, Alshakim Nelson, Nikken Wiradharma, Shrinivas Venkataraman, Yuan Huang, Weimin Fan, Jackie Y. Ying, Yi Yan Yang, & James L. Hedrick. Nature Communications 4, Article number: 2861 doi:10.1038/ncomms3861 Published 09 December 2013

This article is behind a paywall.