Tag Archives: John Timmer

Neuristors and brainlike computing

As you might suspect, a neuristor is based on a memristor. (For a description of a memristor, there’s this Wikipedia entry, and you can search this blog with the tags ‘memristor’ and ‘neuromorphic engineering’ for more.)

Being new to neuristors, I needed a little more information before reading the latest research and found this Dec. 24, 2012 article by John Timmer for Ars Technica (Note: Links have been removed),

Computing hardware is composed of a series of binary switches; they’re either on or off. The other piece of computational hardware we’re familiar with, the brain, doesn’t work anything like that. Rather than being on or off, individual neurons exhibit brief spikes of activity, and encode information in the pattern and timing of these spikes. The differences between the two have made it difficult to model neurons using computer hardware. In fact, the recent, successful generation of a flexible neural system required that each neuron be modeled separately in software in order to get the sort of spiking behavior real neurons display.

But researchers may have figured out a way to create a chip that spikes. The people at HP labs who have been working on memristors have figured out a combination of memristors and capacitors that can create a spiking output pattern. Although these spikes appear to be more regular than the ones produced by actual neurons, it might be possible to create versions that are a bit more variable than this one. And, more significantly, it should be possible to fabricate them in large numbers, possibly right on a silicon chip.

The key to making the devices is something called a Mott insulator. These are materials that would normally be able to conduct electricity, but are unable to because of interactions among their electrons. Critically, these interactions weaken with elevated temperatures. So, by heating a Mott insulator, it’s possible to turn it into a conductor. In the case of the material used here, NbO2, the heat is supplied by resistive heating itself. When a voltage is applied to the NbO2 in the device, it acts as a resistor, heats up and, when it reaches a critical temperature, turns into a conductor, allowing current to flow through. But, given the chance to cool off, the device will return to its resistive state. Formally, this behavior is described as a memristor.
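The heat-and-switch cycle Timmer describes (insulate, heat past a critical temperature, conduct, cool, insulate again) is essentially a relaxation oscillator. The following toy sketch is my own illustration, not the published device model: the piecewise resistance, the thermal hysteresis, and every parameter value are assumptions chosen only to make the mechanism visible.

```python
# Toy relaxation-oscillator model of a Mott-memristor "spike".
# All numbers, the piecewise insulating/conducting resistance, and the
# thermal hysteresis are illustrative assumptions, not device equations.

def simulate_mott_spikes(v_in=1.0, dt=1e-4, steps=40_000):
    """A capacitor node fed through a series resistor and shunted by a
    thermally switched (Mott-like) element. Joule heating drives the
    element above T_hi into its conducting state, which dumps the
    capacitor's charge (a spike); it then cools below T_lo, returns to
    its insulating state, and the cycle repeats."""
    R_series = 10.0             # source resistance
    R_ins, R_cond = 100.0, 0.1  # insulating vs. conducting resistance
    C = 0.01                    # node capacitance
    C_th, R_th = 1e-3, 200.0    # thermal mass and thermal resistance
    T_hi, T_lo = 0.5, 0.3       # switching thresholds (hysteresis)

    v, T, conducting = 0.0, 0.0, False
    trace, n_spikes = [], 0
    for _ in range(steps):
        R_dev = R_cond if conducting else R_ins
        i_dev = v / R_dev
        # Node voltage: charging current in, device current out.
        v += dt * ((v_in - v) / R_series - i_dev) / C
        # Temperature: Joule heating in, Newtonian cooling out.
        T += dt * (v * i_dev - T / R_th) / C_th
        if not conducting and T > T_hi:
            conducting, n_spikes = True, n_spikes + 1
        elif conducting and T < T_lo:
            conducting = False
        trace.append(v)
    return trace, n_spikes

trace, n_spikes = simulate_mott_spikes()
# The node voltage ramps up, collapses when the element switches, and repeats.
```

The hysteresis (switch on above T_hi, off only below T_lo) stands in for the sharper insulator-metal transition of real NbO2; without some asymmetry between heating and cooling the toy model would sit at a fixed point instead of oscillating.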

To get the sort of spiking behavior seen in a neuron, the authors turned to a simplified model of neurons based on the proteins that allow them to transmit electrical signals. When a neuron fires, sodium channels open, allowing ions to rush into a nerve cell, and changing the relative charges inside and outside its membrane. In response to these changes, potassium channels then open, allowing different ions out, and restoring the charge balance. That shuts the whole thing down, and allows various pumps to start restoring the initial ion balance.
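The sodium-in, potassium-out cycle described here is the basis of the Hodgkin-Huxley model, and its classic two-variable simplification, the FitzHugh-Nagumo model, captures the same fast-activation/slow-recovery structure in a few lines. The sketch below uses the standard textbook parameters and is meant only to illustrate spiking behavior, not the neuristor circuit itself.

```python
# FitzHugh-Nagumo: a two-variable reduction of neuronal spiking.
# v is the fast (sodium-like) voltage variable; w is the slow
# (potassium-like) recovery variable that shuts each spike down.

def fitzhugh_nagumo(i_ext=0.5, dt=0.01, steps=20_000):
    """Integrate the FitzHugh-Nagumo equations with forward Euler and
    return the voltage trace plus a count of threshold crossings."""
    a, b, eps = 0.7, 0.8, 0.08   # classic textbook parameters
    v, w = -1.0, -0.5
    trace, spikes, above = [], 0, False
    for _ in range(steps):
        dv = v - v**3 / 3 - w + i_ext   # fast activation
        dw = eps * (v + a - b * w)      # slow recovery
        v += dt * dv
        w += dt * dw
        if v > 1.0 and not above:       # count upward threshold crossings
            spikes, above = spikes + 1, True
        elif v < 0.0:
            above = False
        trace.append(v)
    return trace, spikes

trace, spikes = fitzhugh_nagumo()
# With a sustained input current the model fires repeatedly (tonic spiking).
```

This is the same qualitative picture the authors exploit: a fast positive-feedback variable opens, a slower negative-feedback variable closes, and the system relaxes back to rest between spikes.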

Here’s a link to and a citation for the research paper described in Timmer’s article,

A scalable neuristor built with Mott memristors by Matthew D. Pickett, Gilberto Medeiros-Ribeiro, & R. Stanley Williams. Nature Materials 12, 114–117 (2013). doi:10.1038/nmat3510. Published online 16 December 2012.

This paper is behind a paywall.

A July 28, 2017 news item on Nanowerk provides an update on neuristors,

A future android brain like that of Star Trek’s Commander Data might contain neuristors, multi-circuit components that emulate the firings of human neurons.

Neuristors already exist today in labs, in small quantities, and to fuel the quest to boost neuristors’ power and numbers for practical use in brain-like computing, the U.S. Department of Defense has awarded a $7.1 million grant to a research team led by the Georgia Institute of Technology. The researchers will mainly work on new metal oxide materials that buzz electronically at the nanoscale to emulate the way human neural networks buzz with electric potential on a cellular level.

A July 28, 2017 Georgia Tech news release, which originated the news item, delves further into neuristors and the proposed work leading to an artificial retina that can learn (!). This was not where I was expecting things to go,

But let’s walk expectations back from the distant sci-fi future into the scientific present: The research team is developing its neuristor materials to build an intelligent light sensor, and not some artificial version of the human brain, which would require hundreds of trillions of circuits.

“We’re not going to reach circuit complexities of that magnitude, not even a tenth,” said Alan Doolittle, a professor at Georgia Tech’s School of Electrical and Computer Engineering. “Also, currently science doesn’t really know yet very well how the human brain works, so we can’t duplicate it.”

Intelligent retina

But an artificial retina that can learn autonomously appears well within reach of the research team from Georgia Tech and Binghamton University. Despite the term “retina,” the development is not intended as a medical implant, but it could be used in advanced image recognition cameras for national defense and police work.

At the same time, it would significantly advance brain-mimicking, or neuromorphic, computing, the research field that takes its cues from what science already does know about how the brain computes in order to develop exponentially more powerful computing.

The retina would be composed of an array of ultra-compact circuits called neuristors (a word combining “neuron” and “transistor”) that sense light, compute an image out of it and store the image. All three of the functions would occur simultaneously and nearly instantaneously.

“The same device senses, computes and stores the image,” Doolittle said. “The device is the sensor, and it’s the processor, and it’s the memory all at the same time.” A neuristor itself is composed in part of devices called memristors, which are inspired by the way human neurons work.

Brain vs. PC

That cuts out loads of processing and memory lag time that are inherent in traditional computing.

Take the device you’re reading this article on: Its microprocessor has to tap a separate memory component to get data, then do some processing, tap memory again for more data, process some more, etc. “That back-and-forth from memory to microprocessor has created a bottleneck,” Doolittle said.

A neuristor array breaks the bottleneck by emulating the extreme flexibility of biological nervous systems: When a brain computes, it uses a broad set of neural pathways that flash with enormous data. Then, later, to compute the same thing again, it will use quite different neural paths.

Traditional computer pathways, by contrast, are hardwired. For example, look at a present-day processor and you’ll see lines etched into it. Those are pathways that computational signals are limited to.

The new memristor materials at the heart of the neuristor are not etched, and signals flow through the surface very freely, more like they do through the brain, exponentially increasing the number of possible pathways computation can take. That helps the new intelligent retina compute powerfully and swiftly.

Terrorists, missing children

The retina’s memory could also store thousands of photos, allowing it to immediately match up what it sees with the saved images. The retina could pinpoint known terror suspects in a crowd, find missing children, or identify enemy aircraft virtually instantaneously, without having to trawl databases to correctly identify what is in the images.
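The matching described here is content-addressable lookup: compare the incoming image against every stored pattern and return the closest. A neuristor array would perform that comparison in place, in analog hardware; the sketch below only emulates the operation in ordinary software, with tiny made-up binary “images” standing in for stored photos.

```python
# Software emulation of content-addressable matching. A neuristor array
# would do this comparison in place; here we just show the operation
# itself on hypothetical 9-pixel binary "images".

def best_match(query, stored):
    """Return (index, distance) of the stored pattern closest to `query`
    by Hamming distance (number of differing pixels)."""
    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))
    distances = [hamming(query, s) for s in stored]
    i = min(range(len(stored)), key=distances.__getitem__)
    return i, distances[i]

# Three tiny stored patterns and a noisy observation (all invented).
stored = [
    [1, 0, 1, 0, 1, 0, 1, 0, 1],   # pattern A
    [1, 1, 1, 1, 0, 1, 1, 1, 1],   # pattern B
    [0, 0, 0, 1, 1, 1, 0, 0, 0],   # pattern C
]
query = [1, 1, 1, 1, 0, 1, 1, 1, 0]  # pattern B with one pixel flipped
idx, dist = best_match(query, stored)
# idx == 1 (pattern B), dist == 1
```

The point of the hardware is that this linear scan, which software repeats pattern by pattern, could instead happen across the whole array at once.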

Even if you take away the optics, the new neuristor arrays still advance artificial intelligence. Instead of light, a surface of neuristors could absorb massive data streams at once, compute them, store them, and compare them to patterns of other data, immediately. It could even autonomously learn to extrapolate further information, like calculating the third dimension out of data from two dimensions.

“It will work with anything that has a repetitive pattern like radar signatures, for example,” Doolittle said. “Right now, that’s too challenging to compute, because radar information is flying out at such a high data rate that no computer can even think about keeping up.”

Smart materials

The research project’s title acronym CEREBRAL may hint at distant dreams of an artificial brain, but what it stands for spells out the present goal in neuromorphic computing: Cross-disciplinary Electronic-ionic Research Enabling Biologically Realistic Autonomous Learning.

The intelligent retina’s neuristors are based on novel metal oxide nanotechnology materials, unique to Georgia Tech. They allow computing signals to flow flexibly across pathways that are electronic, which is customary in computing, and at the same time make use of ion motion, which is more commonly known from the way batteries and biological systems work.

The new materials have already been created, and they work, but the researchers don’t yet fully understand why.

Much of the project is dedicated to examining quantum states in the materials and how those states help create useful electronic-ionic properties. Researchers will view them by bombarding the metal oxides with extremely bright x-ray photons at the recently constructed National Synchrotron Light Source II.

Grant sub-awardee Binghamton University is located close by, and Binghamton physicists will run experiments and hone them via theoretical modeling.

‘Sea of lithium’

The neuristors are created mainly by the way the metal oxide materials are grown in the lab, which has advantages over building neuristors in a more wired way.

This materials-growing approach is conducive to mass production. Also, though neuristors in general free signals to take multiple pathways, Georgia Tech’s neuristors do it much more flexibly thanks to chemical properties.

“We also have a sea of lithium, and it’s like an infinite reservoir of computational ionic fluid,” Doolittle said. The lithium niobite imitates the way ionic fluid bathes biological neurons and allows them to flash with electric potential while signaling. In a neuristor array, the lithium niobite helps computational signaling move in myriad directions.

“It’s not like the typical semiconductor material, where you etch a line, and only that line has the computational material,” Doolittle said.

Commander Data’s brain?

“Unlike any other previous neuristors, our neuristors will adapt themselves in their computational-electronic pulsing on the fly, which makes them more like a neurological system,” Doolittle said. “They mimic biology in that we have ion drift across the material to create the memristors (the memory part of neuristors).”

Brains are far superior to computers at most things, but not all. Brains recognize objects and do motor tasks much better. But computers are much better at arithmetic and data processing.

Neuristor arrays can meld both types of computing, making them biological and algorithmic at once, a bit like Commander Data’s brain.

The research is being funded through the U.S. Department of Defense’s Multidisciplinary University Research Initiatives (MURI) Program under grant number FOA: N00014-16-R-FO05. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of those agencies.

Fascinating, non?

Free the nano—stop patenting publicly funded research

Joshua Pearce, a professor at Michigan Technological University, has written a commentary on patents and nanotechnology for Nature magazine, which claims the current patent regimes strangle rather than encourage innovation. From the free article, Physics: Make nanotechnology research open-source by Joshua Pearce in Nature 491, 519–521 (22 November 2012) doi:10.1038/491519a (Note: I have removed footnotes),

Any innovator wishing to work on or sell products based on single-walled carbon nanotubes in the United States must wade through more than 1,600 US patents that mention them. He or she must obtain a fistful of licences just to use this tubular form of naturally occurring graphite rolled from a one-atom-thick sheet. This is because many patents lay broad claims: one nanotube example covers “a composition of matter comprising at least about 99% by weight of single-wall carbon molecules”. Tens of others make overlapping claims.

Patent thickets occur in other high-tech fields, but the consequences for nanotechnology are dire because of the potential power and immaturity of the field. Advances are being stifled at birth because downstream innovation almost always infringes some early broad patents. By contrast, computing, lasers and software grew up without overzealous patenting at the outset.

Nanotechnology is big business. According to a 2011 report by technology consultants Cientifica, governments around the world have invested more than US$65 billion in nanotechnology in the past 11 years [my July 15, 2011 posting features an interview with Tim Harper, Cientifica CEO and founder, about the then newly released report]. The sector contributed more than $250 billion to the global economy in 2009 and is expected to reach $2.4 trillion a year by 2015, according to business analysts Lux Research. Since 2001, the United States has invested $18 billion in the National Nanotechnology Initiative; the 2013 US federal budget will add $1.8 billion more.

This investment is spurring intense patent filing by industry and academia. The number of nanotechnology patent applications to the US Patent and Trademark Office (USPTO) is rising each year and is projected to exceed 4,000 in 2012. Anyone who discovers a new and useful process, machine, manufacture or composition of matter, or any new and useful improvement thereof, may obtain a patent that prevents others from using that development unless they have the patent owner’s permission.

Pearce makes some convincing points (Note: I have removed a footnote),

Examples of patents that cover basic components include one owned by the multinational chip manufacturer Intel, which covers a method for making almost any nanostructure with a diameter less than 50 nm; another, held by nanotechnology company NanoSys of Palo Alto, California, covers composites consisting of a matrix and any form of nanostructure. And Rice University in Houston, Texas, has a patent covering “composition of matter comprising at least about 99% by weight of fullerene nanotubes”.

The vast majority of publicly announced IP licence agreements are now exclusive, meaning that only a single person or entity may use the technology or any other technology dependent on it. This cripples competition and technological development, because all other would-be innovators are shut out of the market. Exclusive licence agreements for building-block patents can restrict entire swathes of future innovation.

Pearce’s argument for open source,

This IP rush assumes that a financial incentive is necessary to innovate, and that without the market exclusivity (monopoly) offered by a patent, development of commercially viable products will be hampered. But there is another way, as decades of innovation for free and open-source software show. Large Internet-based companies such as Google and Facebook use this type of software. Others, such as Red Hat, make more than $1 billion a year from selling services for products that they give away for free, like Red Hat’s version of the computer operating system Linux.

An open-source model would leave nanotechnology companies free to use the best tools, materials and devices available. Costs would be cut because most licence fees would no longer be necessary. Without the shelter of an IP monopoly, innovation would be a necessity for a company to survive. Openness reduces the barrier for small, nimble entities entering the market.

John Timmer in his Nov. 23, 2012 article for Wired.co.uk expresses both support and criticism,

Some of Pearce’s solutions are perfectly reasonable. He argues that the National Science Foundation adopt the NIH model of making all research it funds open access after a one-year time limit. But he also calls for an end of patents derived from any publicly funded research: “Congress should alter the Bayh-Dole Act to exclude private IP lockdown of publicly funded innovations.” There are certainly some indications that Bayh-Dole hasn’t fostered as much innovation as it might (Pearce notes that his own institution brings in 100 times more money as grants than it does from licensing patents derived from past grants), but what he’s calling for is not so much a reform of Bayh-Dole as its elimination.

Pearce wants changes in patenting to extend well beyond the academic world, too. He argues that the USPTO should put a moratorium on patents for “nanotechnology-related fundamental science, materials, and concepts.” As we described above, the difference between a process innovation and the fundamental properties resulting in nanomaterial is a very difficult thing to define. The USPTO has struggled to manage far simpler distinctions; it’s unrealistic to expect it to manage a moratorium effectively.

While Pearce points to the 3-D printing sector admiringly, there are some issues even there, as per Mike Masnick’s Nov. 21, 2012 posting on Techdirt.com (Note: I have removed links),

We’ve been pointing out for a while that one of the reasons why advancements in 3D printing have been relatively slow is because of patents holding back the market. However, a bunch of key patents have started expiring, leading to new opportunities. One, in particular, that has received a fair bit of attention was the Formlabs 3D printer, which raised nearly $3 million on Kickstarter earlier this year. It got a ton of well-deserved attention for being one of the first “low end” (sub ~$3,000) 3D printers with very impressive quality levels.

Part of the reason the company said it could offer such a high quality printer at such a low price, relative to competitors, was because some of the key patents had expired, allowing it to build key components without having to pay astronomical licensing fees. A company called 3D Systems, however, claims that Formlabs missed one patent. It holds US Patent 5,597,520 on a “Simultaneous multiple layer curing in stereolithography.” While I find it ridiculous that 3D Systems is going legal, rather than competing in the marketplace, it’s entirely possible that the patent is valid. It just highlights how the system holds back competition that drives important innovation, though.

3D Systems claims that Formlabs “took deliberate acts to avoid learning” about 3D Systems’ live patents. The lawsuit claims that Formlabs looked only for expired patents — which seems like a very odd claim. Why would they only seek expired patents? …

I strongly suggest reading both Pearce’s and Timmer’s articles as they both provide some very interesting perspectives about nanotechnology IP (intellectual property) open access issues. I also recommend Mike Masnick’s piece for exposure to a rather odd but unfortunately not uncommon legal suit designed to limit competition in a relatively new technology (3-D printers).