Tag Archives: Blaise Mouttet

Memristors: they are older than you think

I got an email this morning (May 22, 2012) informing me that an article, Two centuries of memristors by Themistoklis Prodromakis, Christofer Toumazou and Leon Chua, had just been published in the journal Nature Materials. The article situates memristors in an historical context stretching back to the 19th century. Sadly, the article is behind a paywall so I won’t be copying too much material but I will attempt to give you the flavour of the piece.

The focus is on 19th century scientists and their work with what we are now calling ‘memristors’.  Before moving on to the article, here’s a good definition of a memristor, from the Wikipedia essay (note: I have removed links and footnotes),

Memristor (…  a portmanteau of “memory resistor”) is a passive two-terminal electrical component envisioned as a fundamental non-linear circuit element relating charge and magnetic flux linkage. The memristor is currently under development by a team at Hewlett-Packard.

When current flows in one direction through the device, the electrical resistance increases; and when current flows in the opposite direction, the resistance decreases. When the current is stopped, the component retains the last resistance that it had, and when the flow of charge starts again, the resistance of the circuit will be what it was when it was last active. It has a regime of operation with an approximately linear charge-resistance relationship as long as the time-integral of the current stays within certain bounds.
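As a rough illustration of that description, here is a toy version of the linear dopant-drift picture often used to explain HP's titanium-dioxide device. Every parameter value below is an assumption chosen for readability, not measured device data:

```python
# Toy memristor: linear dopant-drift model (illustrative parameters only).
R_ON, R_OFF = 100.0, 16000.0    # fully doped / fully undoped resistances (ohms)
MU_RON_D = 1e-4                 # lumped constant: mobility * R_ON / thickness
D = 1e-8                        # film thickness (m)

def step(w, i, dt):
    """Drift the doped-region width w under current i for time dt."""
    w += MU_RON_D * i * dt
    return min(max(w, 0.0), D)  # the doped region cannot leave the film

def resistance(w):
    x = w / D                   # fraction of the film that is doped
    return R_ON * x + R_OFF * (1.0 - x)

w = 0.5 * D
r0 = resistance(w)
w = step(w, +1e-4, 1e-3)        # current one way: resistance drops
r1 = resistance(w)
r2 = resistance(w)              # current stopped: last resistance is retained
w = step(w, -1e-4, 1e-3)        # current reversed: resistance climbs back
r3 = resistance(w)
assert r1 < r0 and r2 == r1 and r3 > r1
```

The key point the sketch captures is that the resistance depends only on the internal state `w`, which in turn depends only on the history of charge that has flowed, so switching the current off changes nothing until current flows again.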

This Wikipedia essay also offers an historical timeline, which starts in 1960 with Bernard Widrow and his memistor, adding very nicely to the discussion in the Nature Materials article, which focuses on such 19th-century luminaries as Sir Michael Faraday, Hertha Ayrton, Alessandro Volta, and Humphry Davy, amongst others. Here’s a helpful description of hysteresis and how it relates to the memristor from the article (note: I have removed footnotes),

The functional properties of memristors were first documented by Chua and later on by Chua and Kang, with their main fingerprint being a pinched-hysteresis loop when subjected to a bipolar periodic signal. This particular signature has been explicitly observed in a number of devices for more than one century, while it can be extrapolated for devices that appeared as early as the dawn of the nineteenth century.

Hysteresis is typically noticed in systems and devices that possess certain inertia, causing the value of a physical property to lag behind changes in the mechanism causing it, manifesting memory.
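The "pinched hysteresis loop" fingerprint mentioned in the excerpt can be made concrete in a few lines of Python (the memristance function below is my own toy choice, not any published device model): because the voltage is always the instantaneous resistance times the current, the voltage–current loop must pass through the origin, yet the two sweep directions trace different branches.

```python
import math

# Sketch of the pinched-hysteresis "fingerprint" using an illustrative
# charge-controlled memristance (assumed form, not a fitted device model).
def memristance(q):
    """Charge-dependent resistance in ohms."""
    return 1000.0 + 500.0 * math.tanh(q / 1e-4)

def vi_trace(cycles=2, steps=1000, amp=1e-3, freq=1.0):
    """Sample (current, voltage) pairs under a sinusoidal current drive."""
    dt = 1.0 / (freq * steps)
    q, pts = 0.0, []
    for n in range(cycles * steps):
        i = amp * math.sin(2.0 * math.pi * freq * n * dt)
        q += i * dt                       # state is the time-integral of current
        pts.append((i, memristance(q) * i))
    return pts

pts = vi_trace()
# Pinched at the origin: since v = M(q) * i, v vanishes whenever i does.
assert all(abs(v) < 2e-6 for i, v in pts if abs(i) < 1e-9)
# Hysteretic: the same current gives different voltages on the up and down sweep.
(i_up, v_up), (i_dn, v_dn) = pts[125], pts[375]
assert abs(i_up - i_dn) < 1e-9 and abs(v_up - v_dn) > 1e-3
```

This is exactly the "memory" the excerpt describes: the device's state lags behind the drive, so the same input current meets a different resistance depending on what came before.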

The authors go on to discuss the various scientists who have grappled with the ‘memristive effect’ over the past two centuries. They finish their essay with this (note: I’ve removed footnotes),

The memristor is not an invention. Rather it is a description of a basic phenomenon of nature that manifests itself in various dissipative devices, made from different materials, internal structures and architectures. We end this historical narrative by noting that even though the memristor has seen its light of joy only recently in 2008, and has been recognized as the fourth circuit element along with the resistor, capacitor and inductor, it actually predates the resistor, which was formally published by Ohm in 1827, and the inductor, which was formally published by Faraday in 1831.

If you are at all interested in memristors and can get past the paywall, I strongly recommend reading this paper, not only for the historical context but for how the authors support their contention that the memristor is a fourth circuit element.
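For readers who want the shape of that contention in symbols: circuit theory works with four variables (voltage v, current i, charge q = ∫i dt, and flux linkage φ = ∫v dt), and each classical two-terminal element relates one pair of them. Chua's 1971 framing, as I understand it, is that the memristor supplies the one pairing the other three elements leave uncovered:

```latex
\mathrm{d}v       = R\,\mathrm{d}i \quad \text{(resistor, Ohm 1827)}
\mathrm{d}q       = C\,\mathrm{d}v \quad \text{(capacitor)}
\mathrm{d}\varphi = L\,\mathrm{d}i \quad \text{(inductor, Faraday 1831)}
\mathrm{d}\varphi = M\,\mathrm{d}q \quad \text{(memristor, Chua 1971)}
```

Whether that completeness argument makes the memristor a genuine fourth element is, of course, precisely what is contested below.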

A contrasting perspective is offered by Blaise Mouttet (discussed in my Jan. 27, 2012 posting), who contends that what we are now calling a ‘memristor’ is part of a larger class of variable resistance systems.

To be or not to be the memristor?

The memristor (aka, memresistor), for anyone not familiar with it, is a contested ‘new’ circuit element. In my April 5, 2010 posting I gave a brief overview of the history as I understood it (the memristor was a new addition to the traditional circuit elements [the capacitor, the resistor, and the inductor]) and in my April 7, 2010 posting I conducted an interview with Forrest H Bennett III, who presented an alternative view in the ‘memristor as new circuit element’ discussion.

Discussion has continued on and off since then but in the last few weeks it has become more topical with the publication of a paper (Memresistors and non-memristive zero-crossing hysteresis curves) by Blaise Mouttet at arXiv.org on Jan. 12, 2012.

I don’t feel competent to summarize the gist of Blaise’s paper so I’m excerpting a passage *from* Peter Clarke’s Jan. 18, 2012 article for EE (Electronic Engineering) Times,

Blaise Mouttet argues that the interpretation of the memristor as a fourth fundamental circuit element – after the resistor, capacitor and inductor – was incorrect and that the memory device under development at HP Labs is not actually a memristor but part of a broader class of variable resistance systems.

Since publishing his arXiv paper Mouttet has also been in discussion with an e-mailing list of researchers into non-volatile memory device physics.

Some e-mail correspondents have come out in favor of Mouttet’s position stating that trying to define any two-terminal device in which the resistance can be altered by the current passed through the device as a memristor, adds nothing to the understanding of a complex field in which there are many types of device.

The article and the comments that follow (quite interesting and technical) are worth reviewing if this area of nanoelectronics interests you.

HP Labs has responded to Blaise’s paper and the subsequent debate; before including an excerpt from that response, I want to include a few passages from Blaise’s paper,

The “memristor” was originally proposed in 1971 by Leon Chua as a missing fourth fundamental circuit element linking magnetic flux and electric charge. In 2008 a group of scientists from HP led by Stan Williams claimed to have discovered this missing memristor. It is my position that HP’s “memristor” claim lacks any scientific merit. My position is not that the HP researchers have presented an incorrect model of a memristor or even an incorrect model of resistance memory. If this were the case it would not be so bad because an incorrect model could at least be proven incorrect and possibly corrected to produce a better model. My position is that the HP researchers have avoided presenting any scientifically testable model at all by hiding behind the reputation of Leon Chua and the mythology of the memristor. They have thus attempted to bypass the principle of the scientific method.

If the HP researchers had developed a realistic model for resistive memory (whether it is called “memristor” or by some other name) it could be vetted by other researchers, compared to experimental data, and determined to be true or false. If necessary the model could be modified or corrected and an improved version of the model could be produced.

This is not what has happened. (p. 1 PDF)

Here’s my excerpt of HP’s response (from Peter Clarke’s Jan. 20, 2012 article for EE Times),

The spokesperson said in email: “HP is proud of the research it has undertaken into memristor technology and the recognition this has received in the scientific community. In a little over three years, our papers, which were subject to rigorous peer review before being published in leading scientific journals, have been cited more than 1,000 times by other researchers in the field. We continue this research and collaboration with the electronics industry to bring this important technology to market.”

Deciding what something is and how it fits into our understanding of how the world operates (in this case, whether or not we have a new circuit element) has consequences beyond the immediate discussion. If science is the process of posing questions, we need to test the assumptions we make (in this case, whether the memristor is a fourth circuit element or part of a larger class of variable resistance systems) as they can define the questions we’ll ask in the future.

As I noted earlier, I’m not competent to draw any conclusions as to which party may have the right approach but I am glad to see the discussion taking place.

*’from’ added on Sept. 27, 2016.

Memristor update

HP Labs is making memristor news again. From a news item on physorg.com,

HP is partnering with Korean memory chip maker Hynix Semiconductor Inc. to make chips that contain memristors. Memristors are a newly discovered building block of electrical circuits.

HP built one in 2008 that confirmed what scientists had suspected for nearly 40 years but hadn’t been able to prove: that circuits have a weird, natural ability to remember things even when they’re turned off.

I don’t remember the story quite that way, i.e., “confirmed what scientists had suspected for nearly 40 years,” as I recall the theory that R. Stanley Williams (the HP Labs team leader) cites is from Dr. Leon Chua circa 1971 and was almost forgotten. (Unbeknownst to Dr. Chua, there was a previous theorist in the 1960s, Bernard Widrow, who posited a similar notion which he called a memistor. See Memistors, Memristors, and the Rise of Strong Artificial Intelligence, an article by Blaise Mouttet, for a more complete history. ETA: There’s additional material from Blaise at http://www.neurdon.com/)

There’s more about HP Labs and its new partner at BBC News in an article by Jason Palmer,

Electronics giant HP has joined the world’s second-largest memory chip maker Hynix to manufacture a novel member of the electronics family.

The deal will see “memristors” – first demonstrated by HP in 2006 [I believe it was 2008] – mass produced for the first time.

Memristors promise significantly greater memory storage requiring less energy and space, and may eventually also be employed in processors.

HP says the first memristors should be widely available in about three years.

If you follow the link to the story, there’s also a brief BBC video interview with Stanley Williams.

My first 2010 story on the memristor is here and, later, there’s an interview I had with Forrest H Bennett III, who argues that the memristor is not a fourth element (in addition to the capacitor, resistor, and inductor) but is in fact part of an infinite table of circuit elements.

ETA: I have some additional information from the news release on the HP Labs website,

HP today announced that it has entered into a joint development agreement with Hynix Semiconductor Inc., a world leader in the manufacture of computer memory, to bring memristor technology to market.

Memristors represent a fourth basic passive circuit element. They existed only in theory until 2006 – when researchers in HP Labs’ Information and Quantum Systems Laboratory (IQSL) first intentionally demonstrated their existence.

Memory chips created with memristor technology have the potential to run considerably faster and use much less energy than Flash memory technologies, says Dr. Stanley Williams, HP Senior Fellow and IQSL founding Director.

“We believe that the memristor is a universal memory that over time could replace Flash, DRAM, and even hard drives,” he says.

Uniting HP’s world-class research and IP with a first-rate memory manufacturer will allow high-quality, memristor-based memory to be developed quickly and on a mass scale, Williams adds.

Also, the video interview with Dr. Williams is on YouTube and is not a BBC video as I believed. So here’s the interview,

Science in the British election and CASE; memristor and artificial intelligence; The Secret in Their Eyes, an allegory for post-Junta Argentina?

I’ve been meaning to mention the upcoming (May 6, 2010) British election for the last while as I’ve seen notices of party manifestos that mention science (!) but it was one of Dave Bruggeman’s postings on Pasco Fhronesis that tipped the balance for me. From his posting,

CaSE [Campaign for Science and Engineering] sent each party leader a letter asking for their positions with respect to science and technology issues. The Conservatives and the Liberal Democrats have responded so far (while the Conservative leader kept mum on science before the campaign, now it’s the Prime Minister who has yet to speak on it). Of the two letters, the Liberal Democrats have offered more detailed proposals than the Conservatives, and the Liberal Democrats have also addressed issues of specific interest to the U.K. scientific community to a much greater degree.

(These letters are in addition to the party manifestos which each mention science.) I strongly recommend the post as Bruggeman goes on to give a more detailed analysis and offer a few speculations.

The Liberal Democrats offer a more comprehensive statement, but they are a third party that gained an unexpected burst of support after the first national debate. As anyone knows, the second debate (to be held around noon [PT] today), or something else for that matter, could change all that.

I did look at the CaSE site which provides an impressive portfolio of materials related to this election on its home page. As for the organization’s mission, before getting to that you might find its history instructive,

CaSE was launched in March 2005, evolving out of its predecessor Save British Science [SBS]. …

SBS was founded in 1986, following the placement of an advertisement in The Times newspaper. The idea came from a small group of university scientists brought together by a common concern about the difficulties they were facing in obtaining the funds for first class research.

The original plan was simply to buy a half-page advertisement in The Times to make the point, and the request for funds was spread via friends and colleagues in other universities. The response was overwhelming. Within a few weeks about 1500 contributors, including over 100 Fellows of the Royal Society and most of the British Nobel prize winners, had sent more than twice the sum needed. The advertisement appeared on 13th January 1986, and the balance of the money raised was used to found the Society, taking as its name the title of the advertisement.

Now for their mission statement,

CaSE is now an established feature of the science and technology policy scene, supported among universities and the learned societies, and able to attract media attention. We are accepted by Government as an organisation able to speak for a wide section of the science and engineering community in a constructive but also critical and forceful manner. We are free to speak without the restraints felt by learned societies and similar bodies, and it is good for Government to know someone is watching closely.

I especially like the bit where they feel it’s “good for Government” to know someone is watching.

The folks at the Canadian Science Policy Centre (CSPC) are also providing information about the British election and science. As you’d expect it’s not nearly as comprehensive but, if you’re interested, you can check out the CSPC home page.

I haven’t had a chance to read the manifestos and other materials closely enough to be able to offer much comment. It is refreshing to see the issue mentioned by all the parties during the election, as opposed to having science dismissed as a ‘boutique issue’, as an assistant to my local (Canadian) Member of Parliament described it to me.

Memristors and artificial intelligence

The memristor story has ‘legs’, as they say. This morning I found an in-depth story by Michael Berger on Nanowerk titled, Nanotechnology’s Road to Artificial Brains, where he interviews Dr. Wei Lu about his work with memristors and neural synapses (mentioned previously on this blog here). Coincidentally, I received a comment yesterday from Blaise Mouttet about an article he’d posted on Google in September 2009 titled, Memistors, Memristors, and the Rise of Strong Artificial Intelligence.

Berger’s story focuses on a specific piece of research and possible future applications. From the Nanowerk story,

If you think that building an artificial human brain is science fiction, you are probably right – for now. But don’t think for a moment that researchers are not working hard on laying the foundations for what is called neuromorphic engineering – a new interdisciplinary discipline that includes nanotechnologies and whose goal is to design artificial neural systems with physical architectures similar to biological nervous systems.

One of the key components of any neuromorphic effort is the design of artificial synapses. The human brain contains vastly more synapses than neurons – by a factor of about 10,000 – and therefore it is necessary to develop a nanoscale, low power, synapse-like device if scientists want to scale neuromorphic circuits towards the human brain level.

Berger goes on to explain how Lu’s work with memristors relates to this larger enterprise which is being pursued by many scientists around the world.

By contrast, Mouttet offers an historical context for the work on memristors, along with a precise technical explanation of why it is applicable to work in artificial intelligence. From Mouttet’s essay,

… memristive systems integrate data storage and data processing capabilities in a single device which offers the potential to more closely emulate the capabilities of biological intelligence.

If you are interested in exploring further, I suggest starting with Mouttet’s article, as it lays the groundwork for better understanding both memristors and Berger’s story about artificial neural synapses.

The secret in their eyes (movie review)

I woke up at 6 am the other morning thinking about a movie I saw this last Sunday (April 18, 2010). That doesn’t often happen to me, especially as I get more jaded with time, but something about ‘The Secret in Their Eyes’, the Argentinean movie that won this year’s Oscar for Best Foreign Language Film, woke me up.

Before going further, a précis of the story: a retired man (in his late 50s?) is trying to write a novel based on a rape/homicide case that he investigated in the mid-1970s. He’s haunted by it and spends much of the movie calling back memories of both the case and a love he tried to bury. Writing his ‘novel’ compels him to reinvestigate the case (he was an investigator for the judge) and to reestablish contact with the victim’s grief-stricken husband and with the woman he loved, who was his boss (the judge) and who came from a more prestigious social class.

The movie offers some comedy, although it can mostly be described as a thriller, a procedural, and a love story. It can also be seen as an allegory. The victim represents Argentina as a country. The criminal’s treatment (he is rewarded, initially) represents how the military junta controlled Argentina after Juan Peron’s death in 1974. It seemed to me that much of this movie was an investigation of how people cope with and recover (or don’t) from a hugely traumatic experience.

I don’t know much about Argentina and I have no Spanish language skills (other than recognizing an occasional word when it sounds like a French one). Consequently, this history is fairly sketchy and derived from secondary and tertiary sources. In the 1950s, Juan Peron (a former member of the military) led a very repressive regime, which was eventually pushed out of office. By the 1970s he had been asked to return, which he did. He died in 1974 and, sometime after, a military junta took control of the government. Amongst other measures, the junta kidnapped thousands of people (usually young and often students, teachers [the victim in the movie is a teacher], political activists/enemies, and countless others) and ‘disappeared’ them.

Much of the population tried to ignore or hide from what was going on. A documentary released in the US in 1985, Las Madres de la Plaza de Mayo, details the story of a group of middle-class women who are moved to protest, after years of trying to endure, when their own children are ‘disappeared’.

In the movie we see what happens when bullies take control. The criminal is rewarded, the investigator/writer is sent away for his own protection after a colleague becomes collateral damage, the judge’s family name protects her, and the grieving husband has to find his own way of dealing with the situation.

The movie offers both a gothic twist towards the end and a very moving perspective on how one deals with the guilt for one’s complicity and for one’s survival.

ETA: (April 27, 2010) One final insight, the movie suggests that art/creative endeavours such as writing a novel (or making a movie?) can be a means for confession, redemption, and/or healing past wounds.

I think what makes the movie so good is the number of readings that are possible. You can take a look at some of what other reviewers had to say: Katherine Monk at the Vancouver Sun, Curtis Woloschuk at the Westender, and Ken Eisner at the Georgia Straight.

Kudos to the director and screenwriter, Juan José Campanella, and to the leads: Ricardo Darín (investigator/writer), Soledad Villamil (judge), Pablo Rago (husband), Javier Godino (criminal), Guillermo Francella (colleague who becomes collateral damage), and all of the other actors in the company. Even the smallest role was beautifully realized.

One final thing, whoever translated and wrote the subtitles should get an award. I don’t know how the person did it but the use of language is brilliant. I’ve never before seen subtitles that managed to convey the flavour of the verbal exchanges taking place on screen.

I liked the movie, eh?

Measuring professional and national scientific achievements; Canadian science policy conferences

I’m going to start with an excellent study about publication bias in science papers and careerism that I stumbled across this morning on physorg.com (from the news item),

Dr [Daniele] Fanelli [University of Edinburgh] analysed over 1300 papers that declared to have tested a hypothesis in all disciplines, from physics to sociology, the principal author of which was based in a U.S. state. Using data from the National Science Foundation, he then verified whether the papers’ conclusions were linked to the states’ productivity, measured by the number of papers published on average by each academic.

Findings show that papers whose authors were based in more “productive” states were more likely to support the tested hypothesis, independent of discipline and funding availability. This suggests that scientists working in more competitive and productive environments are more likely to make their results look “positive”. It remains to be established whether they do this by simply writing the papers differently or by tweaking and selecting their data.

I was happy to find out that Fanelli’s paper has been published in PLoS [Public Library of Science] ONE, an open access journal. From the paper [numbers in square brackets are citations found at the end of the published paper],

Quantitative studies have repeatedly shown that financial interests can influence the outcome of biomedical research [27], [28] but they appear to have neglected the much more widespread conflict of interest created by scientists’ need to publish. Yet, fears that the professionalization of research might compromise its objectivity and integrity had been expressed already in the 19th century [29]. Since then, the competitiveness and precariousness of scientific careers have increased [30], and evidence that this might encourage misconduct has accumulated. Scientists in focus groups suggested that the need to compete in academia is a threat to scientific integrity [1], and those guilty of scientific misconduct often invoke excessive pressures to produce as a partial justification for their actions [31]. Surveys suggest that competitive research environments decrease the likelihood to follow scientific ideals [32] and increase the likelihood to witness scientific misconduct [33] (but see [34]). However, no direct, quantitative study has verified the connection between pressures to publish and bias in the scientific literature, so the existence and gravity of the problem are still a matter of speculation and debate [35].

Fanelli goes on to describe his research methods and how he came to his conclusion that the pressure to publish may have a significant impact on ‘scientific objectivity’.

This paper provides an interesting counterpoint to a discussion about science metrics or bibliometrics taking place on (the journal) Nature’s website here. It was stimulated by Julia Lane’s recent article titled, Let’s Make Science Metrics More Scientific. The article is open access and comments are invited. From the article [numbers in square brackets refer to citations found at the end of the article],

Measuring and assessing academic performance is now a fact of scientific life. Decisions ranging from tenure to the ranking and funding of universities depend on metrics. Yet current systems of measurement are inadequate. Widely used metrics, from the newly-fashionable Hirsch index to the 50-year-old citation index, are of limited use [1]. Their well-known flaws include favouring older researchers, capturing few aspects of scientists’ jobs and lumping together verified and discredited science. Many funding agencies use these metrics to evaluate institutional performance, compounding the problems [2]. Existing metrics do not capture the full range of activities that support and transmit scientific ideas, which can be as varied as mentoring, blogging or creating industrial prototypes.
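As an aside for anyone unfamiliar with the "newly-fashionable Hirsch index" mentioned in the excerpt: it is simple to state (a researcher has index h if h of their papers have at least h citations each), which is part of its appeal and, critics argue, part of its limitation. A quick sketch:

```python
def h_index(citations):
    """h = largest n such that n papers have at least n citations each."""
    h = 0
    for rank, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= rank:
            h = rank     # the rank-th most-cited paper still has >= rank citations
        else:
            break
    return h

assert h_index([10, 8, 5, 4, 3]) == 4   # four papers with at least 4 citations
assert h_index([25, 8, 5, 3, 3]) == 3   # one blockbuster paper barely moves h
assert h_index([]) == 0
```

The second assertion hints at one of the flaws Lane's article alludes to: a single highly cited paper, or any contribution that isn't a cited paper at all, barely registers.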

The range of comments is quite interesting, I was particularly taken by something Martin Fenner said,

Science metrics are not only important for evaluating scientific output, they are also great discovery tools, and this may indeed be their more important use. Traditional ways of discovering science (e.g. keyword searches in bibliographic databases) are increasingly superseded by non-traditional approaches that use social networking tools for awareness, evaluations and popularity measurements of research findings.

(Fenner’s blog along with more of his comments about science metrics can be found here. If this link doesn’t work, you can get to Fenner’s blog by going to Lane’s Nature article and finding him in the comments section.)

There are a number of issues here: how do we measure scientific work (citations in other papers?), how do we define the impact of that work (do we use social networks?), and, in turn, how do we measure impact when we’re talking about a social network?

Now, I’m going to add timeline as an issue. Over what period of time are we measuring the impact? I ask the question because of the memristor story. Dr. Leon Chua wrote a paper in 1971 that, apparently, didn’t receive all that much attention at the time but was cited in a 2008 paper which received widespread attention. Meanwhile, Chua had continued to theorize about memristors in a 2003 paper that received so little attention that he abandoned plans to write part 2. Since the recent burst of renewed interest in the memristor and his 2003 paper, Chua has decided to follow up with part 2, hopefully some time in 2011 (as per this April 13, 2010 posting). There’s one more piece to the puzzle: an earlier paper by F. Argall. From Blaise Mouttet’s April 5, 2010 comment here on this blog,

In addition HP’s papers have ignored some basic research in TiO2 multi-state resistance switching from the 1960’s which disclose identical results. See F. Argall, “Switching Phenomena in Titanium Oxide thin Films,” Solid State Electronics, 1968.
http://pdf.com.ru/a/ky1300.pdf

[ETA: April 22, 2010 Blaise Mouttet has provided a link to an article which provides more historical insight into the memristor story: http://knol.google.com/k/memistors-memristors-and-the-rise-of-strong-artificial-intelligence#]

How do you measure or even track all of that, short of some science writer taking the time to pursue the story and write a nonfiction book about it?

I’m not counselling that the process be abandoned but, since it seems that people are revisiting these issues, it’s an opportune time to get all the questions on the table.

As for its importance, this process of trying to establish better and new science metrics may seem irrelevant to most people, but it has a much larger impact than even the participants appear to realize. Governments measure their scientific progress by touting the number of papers their scientists have produced, amongst other measures such as patents. Measuring the number of published papers has an impact on how governments want to be perceived internationally and within their own borders. Take, for example, something which has both international and national impact: the recent US National Nanotechnology Initiative (NNI) report to the President’s Council of Advisors on Science and Technology (PCAST). The NNI used the number of papers published as a way of measuring the US’s possibly eroding leadership in the field. (China published about 5,000 while the US published about 3,000.)

I don’t have much more to say other than I hope to see some new metrics.

Canadian science policy conferences

We have two such conferences and both are two years old in 2010. The first one is being held in Gatineau, Québec, May 12 – 14, 2010. Called Public Science in Canada: Strengthening Science and Policy to Protect Canadians [ed. note: protecting us from what?], the target audience for the conference seems to be government employees. David Suzuki (tv host, scientist, environmentalist, author, etc.) and Preston Manning (ex-politico) will be co-presenting a keynote address titled: Speaking Science to Power.

The second conference takes place in Montréal, Québec, Oct. 20-22, 2010. It’s being produced by the Canadian Science Policy Centre. Other than a notice on the home page, there’s not much information about their upcoming conference yet.

I did note that Adam Holbrook (aka J. Adam Holbrook) is both speaking at the May conference and is an advisory committee member for the folks who are organizing the October conference. At the May conference, he will be participating in a session titled: Fostering innovation: the role of public S&T. Holbrook is a local (to me) professor as he works at Simon Fraser University, Vancouver, Canada.

That’s all for today.