Tag Archives: Dr. Leon Chua

Memristor update

HP Labs is making memristor news again. From a news item on physorg.com,

HP is partnering with Korean memory chip maker Hynix Semiconductor Inc. to make chips that contain memristors. Memristors are a newly discovered building block of electrical circuits.

HP built one in 2008 that confirmed what scientists had suspected for nearly 40 years but hadn’t been able to prove: that circuits have a weird, natural ability to remember things even when they’re turned off.

I don’t remember the story quite that way, i.e., that HP “confirmed what scientists had suspected for nearly 40 years.” As I recall, the theory that R. Stanley Williams (the HP Labs team leader) cites is from Dr. Leon Chua, circa 1971, and was almost forgotten. (Unbeknownst to Dr. Chua, a previous theorist in the 1960s posited a similar notion, which he called a memistor. See Memistors, Memristors, and the Rise of Strong Artificial Intelligence, an article by Blaise Mouttet, for a more complete history. ETA: There’s additional material from Blaise at http://www.neurdon.com/)

There’s more about HP Labs and its new partner at BBC News in an article by Jason Palmer,

Electronics giant HP has joined the world’s second-largest memory chip maker Hynix to manufacture a novel member of the electronics family.

The deal will see “memristors” – first demonstrated by HP in 2006 [I believe it was 2008] – mass produced for the first time.

Memristors promise significantly greater memory storage requiring less energy and space, and may eventually also be employed in processors.

HP says the first memristors should be widely available in about three years.

If you follow the link to the story, there’s also a brief BBC video interview with Stanley Williams.

My first 2010 story on the memristor is here; later, there’s an interview I had with Forrest H. Bennett III, who argues that the memristor is not a fourth element (in addition to the capacitor, resistor, and inductor) but is in fact part of an infinite table of circuit elements.

ETA: I have some additional information from the news release on the HP Labs website,

HP today announced that it has entered into a joint development agreement with Hynix Semiconductor Inc., a world leader in the manufacture of computer memory, to bring memristor technology to market.

Memristors represent a fourth basic passive circuit element. They existed only in theory until 2006 – when researchers in HP Labs’ Information and Quantum Systems Laboratory (IQSL) first intentionally demonstrated their existence.

Memory chips created with memristor technology have the potential to run considerably faster and use much less energy than Flash memory technologies, says Dr. Stanley Williams, HP Senior Fellow and IQSL founding Director.

“We believe that the memristor is a universal memory that over time could replace Flash, DRAM, and even hard drives,” he says.

Uniting HP’s world-class research and IP with a first-rate memory manufacturer will allow high-quality, memristor-based memory to be developed quickly and on a mass scale, Williams adds.

Also, the video interview with Dr. Williams is on YouTube and is not a BBC video as I believed. So here’s the interview,

Measuring professional and national scientific achievements; Canadian science policy conferences

I’m going to start with an excellent study about publication bias in science papers and careerism that I stumbled across this morning on physorg.com (from the news item),

Dr [Daniele] Fanelli [University of Edinburgh] analysed over 1300 papers that declared to have tested a hypothesis in all disciplines, from physics to sociology, the principal author of which was based in a U.S. state. Using data from the National Science Foundation, he then verified whether the papers’ conclusions were linked to the states’ productivity, measured by the number of papers published on average by each academic.

Findings show that papers whose authors were based in more “productive” states were more likely to support the tested hypothesis, independent of discipline and funding availability. This suggests that scientists working in more competitive and productive environments are more likely to make their results look “positive”. It remains to be established whether they do this by simply writing the papers differently or by tweaking and selecting their data.

I was happy to find out that Fanelli’s paper has been published in PLoS [Public Library of Science] ONE, an open access journal. From the paper [numbers in square brackets are citations found at the end of the published paper],

Quantitative studies have repeatedly shown that financial interests can influence the outcome of biomedical research [27], [28] but they appear to have neglected the much more widespread conflict of interest created by scientists’ need to publish. Yet, fears that the professionalization of research might compromise its objectivity and integrity had been expressed already in the 19th century [29]. Since then, the competitiveness and precariousness of scientific careers have increased [30], and evidence that this might encourage misconduct has accumulated. Scientists in focus groups suggested that the need to compete in academia is a threat to scientific integrity [1], and those guilty of scientific misconduct often invoke excessive pressures to produce as a partial justification for their actions [31]. Surveys suggest that competitive research environments decrease the likelihood to follow scientific ideals [32] and increase the likelihood to witness scientific misconduct [33] (but see [34]). However, no direct, quantitative study has verified the connection between pressures to publish and bias in the scientific literature, so the existence and gravity of the problem are still a matter of speculation and debate [35].

Fanelli goes on to describe his research methods and how he came to his conclusion that the pressure to publish may have a significant impact on ‘scientific objectivity’.

This paper provides an interesting counterpoint to a discussion about science metrics or bibliometrics taking place on (the journal) Nature’s website here. It was stimulated by Julia Lane’s recent article titled Let’s Make Science Metrics More Scientific. The article is open access and comments are invited. From the article [numbers in square brackets refer to citations found at the end of the article],

Measuring and assessing academic performance is now a fact of scientific life. Decisions ranging from tenure to the ranking and funding of universities depend on metrics. Yet current systems of measurement are inadequate. Widely used metrics, from the newly-fashionable Hirsch index to the 50-year-old citation index, are of limited use [1]. Their well-known flaws include favouring older researchers, capturing few aspects of scientists’ jobs and lumping together verified and discredited science. Many funding agencies use these metrics to evaluate institutional performance, compounding the problems [2]. Existing metrics do not capture the full range of activities that support and transmit scientific ideas, which can be as varied as mentoring, blogging or creating industrial prototypes.
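As an aside, the Hirsch index Lane mentions has a refreshingly simple definition for something so contested: an author has index h if h of their papers have at least h citations each. A quick sketch in Python (my own illustration, not from the article):

```python
def h_index(citations):
    """Return the h-index: the largest h such that at least
    h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)  # most-cited first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:  # paper at this rank still clears the bar
            h = rank
        else:
            break
    return h

# Five papers with these citation counts yield an h-index of 4:
# the 4th-ranked paper has 4 citations, the 5th has only 3.
print(h_index([10, 8, 5, 4, 3]))
```

The computation is trivial; the controversy, as the article points out, is over what the resulting number actually measures.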

The range of comments is quite interesting, I was particularly taken by something Martin Fenner said,

Science metrics are not only important for evaluating scientific output, they are also great discovery tools, and this may indeed be their more important use. Traditional ways of discovering science (e.g. keyword searches in bibliographic databases) are increasingly superseded by non-traditional approaches that use social networking tools for awareness, evaluations and popularity measurements of research findings.

(Fenner’s blog along with more of his comments about science metrics can be found here. If this link doesn’t work, you can get to Fenner’s blog by going to Lane’s Nature article and finding him in the comments section.)

There are a number of issues here: how do we measure science work (citations in other papers?), how do we define the impact of science work (do we use social networks?), and, if we do use social networks, how do we measure impact there?

Now, I’m going to add timeline as an issue. Over what period of time are we measuring the impact? I ask the question because of the memristor story. Dr. Leon Chua wrote a paper in 1971 that, apparently, didn’t receive all that much attention at the time but was cited in a 2008 paper which received widespread attention. Meanwhile, Chua had continued to theorize about memristors in a 2003 paper that received so little attention that Chua abandoned plans to write part 2. Since the recent burst of renewed interest in the memristor and his 2003 paper, Chua has decided to follow up with part 2, hopefully some time in 2011 (as per this April 13, 2010 posting). There’s one more piece to the puzzle: an earlier paper by F. Argall. From Blaise Mouttet’s April 5, 2010 comment here on this blog,

In addition HP’s papers have ignored some basic research in TiO2 multi-state resistance switching from the 1960’s which disclose identical results. See F. Argall, “Switching Phenomena in Titanium Oxide thin Films,” Solid State Electronics, 1968.

[ETA: April 22, 2010: Blaise Mouttet has provided a link to an article which provides more historical insight into the memristor story. http://knol.google.com/k/memistors-memristors-and-the-rise-of-strong-artificial-intelligence#]

How do you measure, or even track, all of that, short of some science writer taking the time to pursue the story and write a nonfiction book about it?

I’m not counselling that the process be abandoned, but since people are revisiting the issues, it’s an opportune time to get all the questions on the table.

As for its importance, this process of trying to establish better and new science metrics may seem irrelevant to most people, but it has a much larger impact than even the participants appear to realize. Governments measure their scientific progress by touting the number of papers their scientists have produced, amongst other measures such as patents. Measuring the number of published papers has an impact on how governments want to be perceived internationally and within their own borders. Take, for example, something which has both international and national impact: the recent US National Nanotechnology Initiative (NNI) report to the President’s Council of Advisors on Science and Technology (PCAST). The NNI used the number of papers published as a way of measuring the US’s possibly eroding leadership in the field. (China published about 5000 while the US published about 3000.)

I don’t have much more to say other than I hope to see some new metrics.

Canadian science policy conferences

We have two such conferences and both are two years old in 2010. The first one is being held in Gatineau, Québec, May 12 – 14, 2010. Called Public Science in Canada: Strengthening Science and Policy to Protect Canadians [ed. note: protecting us from what?], the target audience for the conference seems to be government employees. David Suzuki (TV host, scientist, environmentalist, author, etc.) and Preston Manning (ex-politico) will be co-presenting a keynote address titled Speaking Science to Power.

The second conference takes place in Montréal, Québec, Oct. 20-22, 2010. It’s being produced by the Canadian Science Policy Centre. Other than a notice on the home page, there’s not much information about their upcoming conference yet.

I did note that Adam Holbrook (aka J. Adam Holbrook) is both speaking at the May conference and is an advisory committee member for the folks who are organizing the October conference. At the May conference, he will be participating in a session titled: Fostering innovation: the role of public S&T. Holbrook is a local (to me) professor as he works at Simon Fraser University, Vancouver, Canada.

That’s all for today.

The memristor rises; commercialization and academic research in the US; carbon nanotubes could be made safer than we thought

In 2008, two memristor papers were published in Nature and Nature Nanotechnology, respectively. In the first (Nature, May 2008 [article still behind a paywall]), a team at HP Labs claimed they had proved the existence of memristors (a fourth basic circuit element, alongside electrical engineering’s ‘holy trinity’ of the capacitor, resistor, and inductor). In the second paper (Nature Nanotechnology, July 2008 [article still behind a paywall]), the team reported that they had achieved engineering control.
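For readers wondering what makes the memristor a "fourth" element, Chua's 1971 observation (my paraphrase, not drawn from the paywalled papers) was that of the four basic circuit variables (voltage v, current i, charge q, and flux φ), three of the pairwise relations define the resistor, capacitor, and inductor, while the relation between charge and flux had no corresponding element:

```latex
\begin{align*}
  \text{resistor:}\quad  & dv = R\,di \\
  \text{capacitor:}\quad & dq = C\,dv \\
  \text{inductor:}\quad  & d\varphi = L\,di \\
  \text{memristor:}\quad & d\varphi = M(q)\,dq
    \;\Rightarrow\; v(t) = M\bigl(q(t)\bigr)\,i(t)
\end{align*}
```

Because the memristance M depends on the charge q that has already flowed through the device, its resistance reflects the history of the current, which is why it "remembers" even when powered off.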

I mention this because (a) there’s some new excitement about memristors and (b) I love the story (you can read my summary of the 2008 story here on the Nanotech Mysteries wiki).

Unbeknownst to me in 2008, there was another team, located in Japan, whose work on slime mould inspired research by a group at the University of California San Diego (UC San Diego) which confirmed theorist Leon Chua’s intuition (he first suggested memristors existed in 1971) that biological organisms use memristive systems to learn. From an article (Synapse on a Chip) by Surfdaddy Orca on the HPlus magazine site,

Experiments with slime molds in 2008 by Tetsu Saisuga at Hokkaido University in Sapporo sparked additional research at the University of California, San Diego by Max Di Ventra. Di Ventra was familiar with Chua’s work and built a memristive circuit that was able to learn and predict future signals. This ability turns out to be similar to the electrical activity involved in the ebb and flow of potassium and sodium ions across cellular membranes: synapses altering their response according to the frequency and strength of signals. New Scientist reports that Di Ventra’s work confirmed Chua’s suspicions that “synapses were memristors.” “The ion channel was the missing circuit element I was looking for,” says Chua, “and it already existed in nature.”

Fast forward to 2010, and a team at the University of Michigan led by Dr. Wei Lu has shown how synapses behave like memristors (published in Nano Letters, DOI: 10.1021/nl904092h [article behind paywall]). (From the HPlus site article)

Scientific American describes a US military-funded project that is trying to use the memristor “to make neural computing a reality.” DARPA’s Systems of Neuromorphic Adaptive Plastic Scalable Electronics Program (SyNAPSE) is funded to create “electronic neuromorphic machine technology that is scalable to biological levels.”

I’m not sure if the research in Michigan and elsewhere is being funded by DARPA (the US Dept. of Defense’s Defense Advanced Research Projects Agency), although it seems likely.

In the short term, scientists talk about energy savings (no need to reboot your computer when you turn it back on). In the longer term, they talk about hardware being able to learn. (Thanks to the Foresight Institute for the latest update on the memristor story and the pointer to HPlus.) Do visit the HPlus site as there are some videos of scientists talking about memristors and additional information (there’s yet another team working on research that is tangentially related).

Commercializing academic research in US

Thanks to Dave Bruggeman at the Pasco Phronesis blog, who’s posted some information about a White House Request for Information (RFI) on commercializing academic research. This is of particular interest not just because of the discussion about innovation in Canada but also because of the US National Nanotechnology Initiative’s report to PCAST (President’s Council of Advisors on Science and Technology; my comments about the webcast of the proceedings are here). From the Pasco Phronesis posting about the NNI report,

While the report notes that the U.S. continues to have a strong nanotechnology sector and corresponding support from the government. However, as with most other economic and research sectors, the rest of the world is catching up, or spending enough to try and catch up to the United States.

According to the report, more attention needs to be paid to commercialization efforts (a concern not unique to nanotechnology).

I don’t know how long the White House’s RFI has been under development but it was made public at the end of March 2010 just weeks after the latest series of reports to PCAST. As for the RFI itself, from the Pasco Phronesis posting about it,

The RFI questions are organized around two basic concerns:

  • Seeking ideas for supporting the commercialization and diffusion of university research. This would include best practices, useful models, metrics (with evidence of their success), and suggested changes in federal policy and/or research funding. In addition, the RFI is interested in how commercialization ecosystems can be developed where none exist.
  • Collecting data on private proof of concept centers (POCCs). These entities seek to help get research over the so-called “Valley of Death” between demonstrable research idea and final commercial product. The RFI is looking for similar kinds of information as for commercialization in general: best practices, metrics, underlying conditions that facilitate such centers.

I find the news of this RFI a little surprising since I had the impression that commercialization of academic research in the US is far more advanced than it is here in Canada. Mind you, that impression is based on a conversation I had with a researcher a year ago, who commented that his mentor at a US university rolled out more than one start-up company every year. As I understand it, researchers in Canada may start one or two companies in their careers but never a series of them.

Carbon nanotubes: is exposure ok?

There’s some new research which suggests that carbon nanotubes can be broken down by an enzyme. From the news item on Nanowerk,

A team of Swedish and American scientists has shown for the first time that carbon nanotubes can be broken down by an enzyme – myeloperoxidase (MPO) – found in white blood cells. Their discoveries are presented in Nature Nanotechnology (“Carbon nanotubes degraded by neutrophil myeloperoxidase induce less pulmonary inflammation”) and contradict what was previously believed, that carbon nanotubes are not broken down in the body or in nature. The scientists hope that this new understanding of how MPO converts carbon nanotubes into water and carbon dioxide can be of significance to medicine.

“Previous studies have shown that carbon nanotubes could be used for introducing drugs or other substances into human cells,” says Bengt Fadeel, associate professor at the Swedish medical university Karolinska Institutet. “The problem has been not knowing how to control the breakdown of the nanotubes, which can cause unwanted toxicity and tissue damage. Our study now shows how they can be broken down biologically into harmless components.”

I believe they tested single-walled carbon nanotubes (CNTs) only, as the person who wrote the news release seems unaware that multi-walled CNTs also exist. In any event, this could be very exciting if the research holds up under further testing.