Tag Archives: Texas Instruments

Graphene Malaysia 2016 gathering and Malaysia’s National Graphene Action Plan 2020

Malaysia is getting ready to host a graphene conference according to an Oct. 10, 2016 news item on Nanotechnology Now,

The Graphene Malaysia 2016 [Nov. 8 – 9, 2016] (www.graphenemalaysiaconf.com) is jointly organized by NanoMalaysia Berhad and Phantoms Foundation. The conference will be centered on graphene industry interaction and collaborative innovation. The event will be launched under the National Graphene Action Plan 2020 (NGAP 2020), which will generate about 9,000 jobs and RM20 billion (US$4.86 billion) GNI impact by the year 2020.

First speakers announced:
Murni Ali (Nanomalaysia, Malaysia) | Francesco Bonaccorso (Istituto Italiano di Tecnologia, Italy) | Antonio Castro Neto (NUS, Singapore) | Antonio Correia (Phantoms Foundation, Spain) | Pedro Gomez-Romero (ICN2 (CSIC-BIST), Spain) | Shu-Jen Han (Nanoscale Science & Technology IBM T.J. Watson Research Center, USA) | Kuan-Tsae Huang (AzTrong, USA/Taiwan) | Krzysztof Koziol (FGV Cambridge Nanosystems, UK) | Taavi Madiberk (Skeleton Technologies, Estonia) | Richard Mckie (BAE Systems, UK) | Pontus Nordin (Saab AB, Saab Aeronautics, Sweden) | Elena Polyakova (Graphene Laboratories Inc., USA) | Ahmad Khairuddin Abdul Rahim (Malaysian Investment Development Authority (MIDA), Malaysia) | Adisorn Tuantranont (Thailand Organic and Printed Electronics Innovation Center, Thailand) | Archana Venugopal (Texas Instruments, USA) | Won Jong Yoo (Samsung-SKKU Graphene-2D Center (SSGC), South Korea) | Hongwei Zhu (Tsinghua University, China)

You can check for more information and deadlines in the Nanotechnology Now Oct. 10, 2016 news item.

The Graphene Malaysia 2016 conference website can be found here and Malaysia’s National Graphene Action Plan 2020, which is well written, can be found here (PDF). This portion from the executive summary offers some insight into Malaysia’s plans to launch itself into the world of high-income nations,

Malaysia’s aspiration to become a high-income nation by 2020 with improved jobs and better outputs is driving the country’s shift away from “business as usual,” and towards more innovative and high value add products. Within this context, and in accordance with National policies and guidelines, Graphene, an emerging, highly versatile carbon-based nanomaterial, presents a unique opportunity for Malaysia to develop a high value economic ecosystem within its industries.  Isolated only in 2004, Graphene’s superior physical properties such as electrical/ thermal conductivity, high strength and high optical transparency, combined with its manufacturability have raised tremendous possibilities for its application across several functions and make it highly interesting for several applications and industries.  Currently, Graphene is still early in its development cycle, affording Malaysian companies time to develop their own applications instead of relying on international intellectual property and licenses.

Considering the potential, several leading countries are investing heavily in associated R&D. Approaches to Graphene research range from an expansive R&D focus (e.g., U.S. and the EU) to more focused approaches aimed at enhancing specific downstream applications with Graphene (e.g., South Korea). Faced with the need to push forward a multitude of development priorities, Malaysia must be targeted in its efforts to capture Graphene’s potential, both in terms of “how to compete” and “where to compete”. This National Graphene Action Plan 2020 lays out a set of priority applications that will be beneficial to the country as a whole and what the government will do to support these efforts.

Globally, much of the Graphene-related commercial innovation to date has been upstream, with producers developing techniques to manufacture Graphene at scale. There has also been some development in downstream sectors, as companies like Samsung, Bayer MaterialScience, BASF and Siemens explore product enhancement with Graphene in lithium-ion battery anodes and flexible displays, and specialty plastic and rubber composites. However the speed of development has been uneven, offering Malaysian industries willing to invest in innovation an opportunity to capture the value at stake. Since any innovation action plan has to be tailored to the needs and ambitions of local industry, Malaysia will focus its Graphene action plan initially on larger domestic industries (e.g., rubber) and areas already being targeted by the government for innovation such as energy storage for electric vehicles and conductive inks.

In addition to benefiting from the physical properties of Graphene, Malaysian downstream application providers may also capture the benefits of a modest input cost advantage for the domestic production of Graphene.  One commonly used Graphene manufacturing technique, the chemical vapour deposition (CVD) production method, requires methane as an input, which can be sourced economically from local biomass. While Graphene is available commercially from various producers around the world, downstream players may be able to enjoy some cost advantage from local Graphene supply. In addition, co-locating with a local producer for joint product development has the added benefit of speeding up the R&D lifecycle.

That business about finding downstream applications could also apply to the Canadian situation, where we typically offer our resources (upstream) but don’t have an active downstream business focus. For example, we have graphite mines in Ontario and Québec which supply graphite flakes for graphene production, all of which is upstream. Less well developed are any plans for Canadian downstream applications.

Finally, it was interesting to note that the Phantoms Foundation is organizing this Malaysian conference since the same organization is organizing the ‘2nd edition of Graphene & 2D Materials Canada 2016 International Conference & Exhibition’ (you can find out more about the Oct. 18 – 20, 2016 event in my Sept. 23, 2016 posting). I think the Malaysians have a better title for their conference, far less unwieldy.

Graphene Canada and its second annual conference

An Aug. 31, 2016 news item on Nanotechnology Now announces Canada’s second graphene-themed conference,

The 2nd edition of Graphene & 2D Materials Canada 2016 International Conference & Exhibition (www.graphenecanadaconf.com) will take place in Montreal (Canada): 18-20 October, 2016.

– An industrial forum with focus on Graphene Commercialization (Abalonyx, Alcereco Inc, AMO GmbH, Avanzare, AzTrong Inc, Bosch GmbH, China Innovation Alliance of the Graphene Industry (CGIA), Durham University & Applied Graphene Materials, Fujitsu Laboratories Ltd., Hanwha Techwin, Haydale, IDTechEx, North Carolina Central University & Chaowei Power Ltd, NTNU&CrayoNano, Phantoms Foundation, Southeast University, The Graphene Council, University of Siegen, University of Sunderland and University of Waterloo)
– Extensive thematic workshops in parallel (Materials & Devices Characterization, Chemistry, Biosensors & Energy and Electronic Devices)
– A significant exhibition (Abalonyx, Go Foundation, Grafoid, Group NanoXplore Inc., Raymor | Nanointegris and Suragus GmbH)

As I noted in my 2015 post about Graphene Canada and its conference, the group is organized in a rather interesting fashion and I see the tradition continues, i.e., the lead organizers seem to be situated in countries other than Canada. From the Aug. 31, 2016 news item on Nanotechnology Now,

Organisers: Phantoms Foundation [located in Spain] www.phantomsnet.net
Catalan Institute of Nanoscience and Nanotechnology – ICN2 (Spain) | CEMES/CNRS (France) | GO Foundation (Canada) | Grafoid Inc (Canada) | Graphene Labs – IIT (Italy) | McGill University (Canada) | Texas Instruments (USA) | Université Catholique de Louvain (Belgium) | Université de Montreal (Canada)

You can find the conference website here.

Canada and some graphene scene tidbits

For a long time it seemed as if every country in the world, except Canada, had some sort of graphene event. According to a July 16, 2015 news item on Nanotechnology Now, Canada has now stepped up, albeit in a peculiarly Canadian fashion. First the news,

Mid October [Oct. 14 -16, 2015], the Graphene & 2D Materials Canada 2015 International Conference & Exhibition (www.graphenecanada2015.com) will take place in Montreal (Canada).

I found a July 16, 2015 news release (PDF) announcing the Canadian event on the lead organizer’s (Phantoms Foundation located in Spain) website,

On the second day of the event (15th October, 2015), an Industrial Forum will bring together top industry leaders to discuss recent advances in technology developments and business opportunities in graphene commercialization.
At this stage, the event unveils 38 keynote & invited speakers. On the Industrial Forum 19 of them will present the latest in terms of Energy, Applications, Production and Worldwide Initiatives & Priorities.

Plenary:
Gary Economo (Grafoid Inc., Canada)
Khasha Ghaffarzadeh (IDTechEx, UK)
Shu-Jen Han (IBM T.J. Watson Research Center, USA)
Bor Z. Jang (Angstron Materials, USA)
Seongjun Park (Samsung Advanced Institute of Technology (SAIT), Korea)
Chun-Yun Sung (Lockheed Martin, USA)

Parallel Sessions:
Gordon Chiu (Grafoid Inc., Canada)
Jesus de la Fuente (Graphenea, Spain)
Mark Gallerneault (ALCERECO Inc., Canada)
Ray Gibbs (Haydale Graphene Industries, UK)
Masataka Hasegawa (AIST, Japan)
Byung Hee Hong (SNU & Graphene Square, Korea)
Tony Ling (Jestico + Whiles, UK)
Carla Miner (SDTC, Canada)
Gregory Pognon (THALES Research & Technology, France)
Elena Polyakova (Graphene Laboratories Inc, USA)
Federico Rosei (INRS–EMT, Université du Québec, Canada)
Aiping Yu (University of Waterloo, Canada)
Hua Zhang (MSE-NTU, Singapore)

Apart from the industrial forum, several industry-related activities will be organized:
– Extensive thematic workshops in parallel (Standardization, Materials & Devices Characterization, Bio & Health and Electronic Devices)
– An exhibition carried out with the latest graphene trends (Grafoid, RAYMOR NanoIntegris, Nanomagnetics Instruments, ICEX and Xerox Research Centre of Canada (XRCC) already confirmed)
– B2B meetings to foster technical cooperation in the field of Graphene

It’s still possible to contribute to the event with an oral presentation. The call for abstracts is open until July, 20 [2015]. [emphasis mine]

Graphene Canada 2015 is already supported by Canada’s leading graphene applications developer, Grafoid Inc., Tourisme Montréal and Université de Montréal.

This is what makes the event peculiarly Canadian: multiculturalism, anyone? From the news release,

Organisers: Phantoms Foundation www.phantomsnet.net & Grafoid Foundation (lead organizers)

CEMES/CNRS (France) | Grafoid (Canada) | Catalan Institute of Nanoscience and Nanotechnology – ICN2 (Spain) | IIT (Italy) | McGill University, Canada | Texas Instruments (USA) | Université Catholique de Louvain (Belgium) | Université de Montreal, Canada

It’s billed as a ‘Canada Graphene 2015’ and, as I recall, these types of events don’t usually have so many other countries listed as organizers. For example, UK Graphene 2015 would have mostly or all of its organizers (especially the leads) located in the UK.

Getting to the Canadian content, I wrote about Grafoid at length in a Feb. 23, 2015 posting, tracking some of its relationships: to companies it owns, a business deal with Hydro Québec, a partnership with the University of Waterloo, and a nonrepayable grant from the Canadian federal government (Sustainable Development Technology Canada [SDTC]). Do take a look at the post if you’re curious about the heavily interlinked nature of the Canadian graphene scene, and take another look at the list of speakers and their agencies (Mark Gallerneault of ALCERECO [partially owned by Grafoid], Carla Miner of SDTC [the agency through which Grafoid received federal monies], Federico Rosei of INRS–EMT, Université du Québec [another Québec link], Aiping Yu of the University of Waterloo [an academic partner to Grafoid]). The Canadian graphene community is a small one, so it’s not surprising there are links between the Canadian speakers, but it does seem odd that Lomiko Metals is not represented here. Still, new speakers have been announced since the news release (e.g., Frank Koppens of ICFO, Spain, and Vladimir Falko of Lancaster University, UK), so time remains.

Meanwhile, Lomiko Metals has announced in a July 17, 2015 news item on Azonano that Graphene 3D Lab has changed the percentage of its outstanding shares, affecting the percentage that Lomiko owns, amid some production and distribution announcements. The bit about launching commercial sales of its graphene filament seems more interesting to me,

On March 16, 2015 Graphene 3D Lab (TSXV:GGG) (OTCQB:GPHBF) announced that it launched commercial sales of its Conductive Graphene Filament for 3D printing. The filament incorporates highly conductive proprietary nano-carbon materials to enhance the properties of PLA, a widely used thermoplastic material for 3D printing; therefore, the filament is compatible with most commercially available 3D printers. The conductive filament can be used to print conductive traces (similar to as used in circuit boards) within 3D printed parts for electronics.

So, that’s all I’ve got for Canada’s graphene scene.

Roadmap to neuromorphic engineering (digital and analog) for the creation of artificial brains *from the Georgia (US) Institute of Technology

While I didn’t mention neuromorphic engineering in my April 16, 2014 posting, which focused on the more general aspects of nanotechnology in Transcendence, a movie starring Johnny Depp and opening on April 18, that specialty (neuromorphic engineering) is what makes the events in the movie ‘possible’ (assuming very large stretches of imagination bringing us into the realm of implausibility and beyond). From the IMDB.com plot synopsis for Transcendence,

Dr. Will Caster (Johnny Depp) is the foremost researcher in the field of Artificial Intelligence, working to create a sentient machine that combines the collective intelligence of everything ever known with the full range of human emotions. His highly controversial experiments have made him famous, but they have also made him the prime target of anti-technology extremists who will do whatever it takes to stop him. However, in their attempt to destroy Will, they inadvertently become the catalyst for him to succeed to be a participant in his own transcendence. For his wife Evelyn (Rebecca Hall) and best friend Max Waters (Paul Bettany), both fellow researchers, the question is not if they canbut [sic] if they should. Their worst fears are realized as Will’s thirst for knowledge evolves into a seemingly omnipresent quest for power, to what end is unknown. The only thing that is becoming terrifyingly clear is there may be no way to stop him.

In the film, Caster’s intelligence/consciousness is uploaded to a computer, which suggests the computer has human brainlike qualities and abilities. The effort to make computers or artificial intelligence more humanlike is called neuromorphic engineering and, according to an April 17, 2014 news item on phys.org, researchers at the Georgia Institute of Technology (Georgia Tech) have published a roadmap for this pursuit,

In the field of neuromorphic engineering, researchers study computing techniques that could someday mimic human cognition. Electrical engineers at the Georgia Institute of Technology recently published a “roadmap” that details innovative analog-based techniques that could make it possible to build a practical neuromorphic computer.

A core technological hurdle in this field involves the electrical power requirements of computing hardware. Although a human brain functions on a mere 20 watts of electrical energy, a digital computer that could approximate human cognitive abilities would require tens of thousands of integrated circuits (chips) and a hundred thousand watts of electricity or more – levels that exceed practical limits.

The Georgia Tech roadmap proposes a solution based on analog computing techniques, which require far less electrical power than traditional digital computing. The more efficient analog approach would help solve the daunting cooling and cost problems that presently make digital neuromorphic hardware systems impractical.

“To simulate the human brain, the eventual goal would be large-scale neuromorphic systems that could offer a great deal of computational power, robustness and performance,” said Jennifer Hasler, a professor in the Georgia Tech School of Electrical and Computer Engineering (ECE), who is a pioneer in using analog techniques for neuromorphic computing. “A configurable analog-digital system can be expected to have a power efficiency improvement of up to 10,000 times compared to an all-digital system.”

An April 16, 2014 Georgia Tech news release by Rick Robinson, which originated the news item, describes why Hasler wants to combine analog (based on biological principles) and digital computing approaches to the creation of artificial brains,

Unlike digital computing, in which computers can address many different applications by processing different software programs, analog circuits have traditionally been hard-wired to address a single application. For example, cell phones use energy-efficient analog circuits for a number of specific functions, including capturing the user’s voice, amplifying incoming voice signals, and controlling battery power.

Because analog devices do not have to process binary codes as digital computers do, their performance can be both faster and much less power hungry. Yet traditional analog circuits are limited because they’re built for a specific application, such as processing signals or controlling power. They don’t have the flexibility of digital devices that can process software, and they’re vulnerable to signal disturbance issues, or noise.

In recent years, Hasler has developed a new approach to analog computing, in which silicon-based analog integrated circuits take over many of the functions now performed by familiar digital integrated circuits. These analog chips can be quickly reconfigured to provide a range of processing capabilities, in a manner that resembles conventional digital techniques in some ways.

Over the last several years, Hasler and her research group have developed devices called field programmable analog arrays (FPAA). Like field programmable gate arrays (FPGA), which are digital integrated circuits that are ubiquitous in modern computing, the FPAA can be reconfigured after it’s manufactured – hence the phrase “field-programmable.”
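An aside from me (not from the news release): the phrase “field-programmable” is easier to grasp with a concrete toy. The Python sketch below is purely my own illustration; no real FPAA or FPGA is programmed this way. It shows only the core idea of a fixed fabric of hardware blocks being rewired by loading a new configuration rather than by refabricating the chip.

```python
# Toy illustration of "field-programmability": the same fixed fabric of
# blocks is rewired by loading a new configuration, with no refabrication.
# Illustrative sketch only -- not a model of any real FPAA or FPGA.

FABRIC = {
    "amp":    lambda x: 10.0 * x,   # fixed gain stage
    "atten":  lambda x: 0.5 * x,    # fixed attenuator
    "invert": lambda x: -x,         # inverting stage
}

def run(config, signal):
    """Route the signal through fabric blocks in the order the config names them."""
    for block in config:
        signal = FABRIC[block](signal)
    return signal

mic_preamp = ["amp", "amp"]        # one "program": two gain stages
line_driver = ["atten", "invert"]  # reconfigured: attenuate, then invert

print(run(mic_preamp, 0.01))   # 1.0
print(run(line_driver, 2.0))   # -1.0
```

The point of the toy is that `mic_preamp` and `line_driver` reuse exactly the same fixed blocks; only the loaded configuration changes, which is the sense in which an FPAA can be repurposed after manufacture.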

Hasler and Marr’s 29-page paper traces a development process that could lead to the goal of reproducing human-brain complexity. The researchers investigate in detail a number of intermediate steps that would build on one another, helping researchers advance the technology sequentially.

For example, the researchers discuss ways to scale energy efficiency, performance and size in order to eventually achieve large-scale neuromorphic systems. The authors also address how the implementation and the application space of neuromorphic systems can be expected to evolve over time.

“A major concept here is that we have to first build smaller systems capable of a simple representation of one layer of human brain cortex,” Hasler said. “When that system has been successfully demonstrated, we can then replicate it in ways that increase its complexity and performance.”

Among neuromorphic computing’s major hurdles are the communication issues involved in networking integrated circuits in ways that could replicate human cognition. In their paper, Hasler and Marr emphasize local interconnectivity to reduce complexity. Moreover, they argue it’s possible to achieve these capabilities via purely silicon-based techniques, without relying on novel devices that are based on other approaches.

Commenting on the recent publication, Alice C. Parker, a professor of electrical engineering at the University of Southern California, said, “Professor Hasler’s technology roadmap is the first deep analysis of the prospects for large scale neuromorphic intelligent systems, clearly providing practical guidance for such systems, with a nearer-term perspective than our whole-brain emulation predictions. Her expertise in analog circuits, technology and device models positions her to provide this unique perspective on neuromorphic circuits.”

Eugenio Culurciello, an associate professor of biomedical engineering at Purdue University, commented, “I find this paper to be a very accurate description of the field of neuromorphic data processing systems. Hasler’s devices provide some of the best performance per unit power I have ever seen and are surely on the roadmap for one of the major technologies of the future.”

Said Hasler: “In this study, we conclude that useful neural computation machines based on biological principles – and potentially at the size of the human brain — seems technically within our grasp. We think that it’s more a question of gathering the right research teams and finding the funding for research and development than of any insurmountable technical barriers.”

Here’s a link to and a citation for the roadmap,

Finding a roadmap to achieve large neuromorphic hardware systems by Jennifer Hasler and Bo Marr.  Front. Neurosci. (Frontiers in Neuroscience), 10 September 2013 | doi: 10.3389/fnins.2013.00118

This is an open access article (at least, the HTML version is).

I have looked at Hasler’s roadmap and it provides a good and readable overview (even for an amateur like me; note: you do need some tolerance for ‘not knowing’) of the state of neuromorphic engineering’s problems, and suggestions for overcoming them. Here’s a description of a human brain and its power requirements as compared to a computer’s (from the roadmap),

One of the amazing things about the human brain is its ability to perform tasks beyond current supercomputers using roughly 20 W of average power, a level smaller than most individual computer microprocessor chips. A single neuron emulation can tax a high performance processor; given there are 10¹² neurons operating at 20 W, each neuron consumes 20 pW average power. Assuming a neuron is conservatively performing the wordspotting computation (1000 synapses), 100,000 PMAC (PMAC = “Peta” MAC = 10¹⁵ MAC/s) would be required to duplicate the neural structure. A higher computational efficiency due to active dendritic line channels is expected as well as additional computation due to learning. The efficiency of a single neuron would be 5000 PMAC/W (or 5 TMAC/μW). A similar efficiency for 10¹¹ neurons and 10,000 synapses is expected.

Building neuromorphic hardware requires that technology must scale from current levels given constraints of power, area, and cost: all issues typical in industrial and defense applications; if hardware technology does not scale as other available technologies, as well as takes advantage of the capabilities of IC technology that are currently visible, it will not be successful.
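Out of curiosity, I checked the arithmetic in that excerpt with a few lines of Python (my own back-of-the-envelope script, not part of the roadmap). The numbers do hang together:

```python
# Back-of-the-envelope check of the roadmap excerpt's brain-power figures.
# All input figures come from the quoted text; the script only confirms
# the arithmetic is self-consistent.

brain_power_w = 20.0      # average power of the human brain (watts)
neurons = 1e12            # neuron count used in the excerpt
macs_per_neuron = 1e8     # 100,000 PMAC total / 1e12 neurons = 1e8 MAC/s each

power_per_neuron = brain_power_w / neurons   # watts per neuron
total_macs = macs_per_neuron * neurons       # MAC/s for the whole brain
efficiency_pmac_per_w = (total_macs / 1e15) / brain_power_w

print(f"{power_per_neuron * 1e12:.0f} pW per neuron")        # 20 pW
print(f"{total_macs / 1e15:,.0f} PMAC total")                # 100,000 PMAC
print(f"{efficiency_pmac_per_w:.0f} PMAC/W")                 # 5000 PMAC/W
```

And 5000 PMAC/W is indeed 5 × 10¹⁸ MAC/s per watt, i.e., 5 × 10¹² MAC/s per microwatt, matching the excerpt’s “5 TMAC/μW.”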

One of my main areas of interest is the memristor (a nanoscale ‘device/circuit element’ which emulates synaptic plasticity), which was mentioned in a way that allows me to understand how the device fits (or doesn’t fit) into the overall conceptual framework (from the roadmap),

The density for a 10 nm EEPROM device acting as a synapse begs the question of whether other nanotechnologies can improve on the resulting Si [silicon] synapse density. One transistor per synapse is hard to beat by any approach, particularly in scaled down Si (like 10 nm), when the synapse memory, computation, and update is contained within the EEPROM device. Most nano device technologies [i.e., memristors (Snider et al., 2011)] show considerable difficulties to get to two-dimensional arrays at a similar density level. Recently, a team from U. of Michigan announced the first functioning memristor two-dimensional (30 × 30) array built on a CMOS chip in 2012 (Kim et al., 2012), claiming applications in neuromorphic engineering, the same group has published innovative devices for digital (Jo and Lu, 2009) and analog applications (Jo et al., 2011).

I notice that the reference to the University of Michigan is relatively neutral in tone and that the memristor does not figure substantively in Hasler’s roadmap.

Intriguingly, there is a section on commercialization; I didn’t think the research was at that stage yet (from the roadmap),

Although one can discuss how to build a cortical computer on the size of mammals and humans, the question is how will the technology developed for these large systems impact commercial development. The cost for ICs [integrated circuits or chips] alone for cortex would be approximately $20 M in current prices, which although possible for large users, would not be common to be found in individual households. Throughout the digital processor approach, commercial market opportunities have driven the progress in the field. Getting neuromorphic technology integrated into commercial environment allows us to ride this powerful economic “engine” rather than pull.

In most applications, the important commercial issues include minimization of cost, time to market, just sufficient performance for the application, power consumed, size and weight. The cost of a system built from ICs is, at a macro-level, a function of the area of those ICs, which then affects the number of ICs needed system wide, the number of components used, and the board space used. Efficiency of design tools, testing time and programming time also considerably affect system costs. Time to get an application to market is affected by the ability to reuse or quickly modify existing designs, and is reduced for a new application if existing hardware can be reconfigured, adapting to changing specifications, and a designer can utilize tools that allow rapid modifications to the design. Performance is key for any algorithm, but for a particular product, one only needs a solution to that particular problem; spending time to make the solution elegant is often a losing strategy.

The neuromorphic community has seen some early entries into commercial spaces, but we are just at the very beginning of the process. As the knowledge of neuromorphic engineering has progressed, which have included knowledge of sensor interfaces and analog signal processing, there have been those who have risen to the opportunities to commercialize these technologies. Neuromorphic research led to better understanding of sensory processing, particularly sensory systems interacting with other humans, enabling companies like Synaptics (touch pads), Foveon (CMOS color imagers), and Sonic Innovation (analog–digital hearing aids); Gilder provides a useful history of these two companies elsewhere (Gilder, 2005). From the early progress in analog signal processing we see companies like GTronix (acquired by National Semiconductor, then acquired by Texas Instruments) applying the impact of custom analog signal processing techniques and programmability toward auditory signal processing that improved sound quality requiring ultra-low power levels. Further, we see in companies like Audience there is some success from mapping the computational flow of the early stage auditory system, and implementing part of the event based auditory front-end to achieve useful results for improved voice quality. But the opportunities for the neuromorphic community are just beginning, and directly related to understanding the computational capabilities of these items. The availability of ICs that have these capabilities, whether or not one mentions they have any neuromorphic material, will further drive applications.

One expects that part of a cortex processing system would have significant computational possibilities, as well as cortex structures from smaller animals, and still be able to reach price points for commercial applications. In the following discussion, we will consider the potential of cortical structures at different levels of commercial applications. Figure 24 shows one typical block diagram, algorithms at each stage, resulting power efficiency (say based on current technology), as well as potential applications of the approach. In all cases, we will be considering a single die solution, typical for a commercial product, and will minimize the resulting communication power to I/O off the chip (no power consumed due to external memories or digital processing devices). We will assume a net computational efficiency of 10 TMAC/mW, corresponding to a lower power supply (i.e., mostly 500 mV, but not 180 mV) and slightly larger load capacitances; we make these assumptions as conservative pull back from possible applications, although we expect the more aggressive targets would be reachable. We assume the external power consumed is set by 1 event/second/neuron average event-rate off chip to a nearby IC. Given the input event rate is hard to predict, we don’t include that power requirement but assume it is handled by the input system. In all of these cases, getting the required computation using only digital techniques in a competitive size, weight, and especially power is hard to foresee.

We expect progress in these neuromorphic systems and that should find applications in traditional signal processing and graphics handling approaches. We will continue to have needs in computing that outpace our available computing resources, particularly at a power consumption required for a particular application. For example, the recent emphasis on cloud computing for academic/research problems shows the incredible need for larger computing resources than those directly available, or even projected to be available, for a portable computing platform (i.e., robotics). Of course a server per computing device is not a computing model that scales well. Given scaling limits on computing, both in power, area, and communication, one can expect to see more and more of these issues going forward.

We expect that a range of different ICs and systems will be built, all at different targets in the market. There are options for even larger networks, or integrating these systems with other processing elements on a chip/board. When moving to larger systems, particularly ones with 10–300 chips (3 × 10⁷ to 10⁹ neurons) or more, one can see utilization of stacking of dies, both decreasing the communication capacitance as well as board complexity. Stacking dies should roughly increase the final chip cost by the number of dies stacked.

In the following subsections, we overview general guidelines to consider when considering using neuromorphic ICs in the commercial market, first for low-cost consumer electronics, and second for a larger neuromorphic processor IC.
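One more bit of homemade arithmetic (mine, not the authors’): combining the 10–300 chip and 3 × 10⁷ to 10⁹ neuron figures above with the roughly 10⁸ MAC/s per neuron implied by the earlier excerpt, and the 10 TMAC/mW efficiency the authors assume, gives a feel for the per-chip numbers:

```python
# Rough consistency check on the multi-chip figures in the excerpt
# (my own arithmetic, combining numbers from the quoted roadmap text).

chips_low, chips_high = 10, 300
neurons_low, neurons_high = 3e7, 1e9

# Implied neurons per chip at both ends of the quoted range
per_chip_low = neurons_low / chips_low     # 3e6 neurons/chip
per_chip_high = neurons_high / chips_high  # ~3.3e6 neurons/chip

# Assuming ~1e8 MAC/s per neuron (from the earlier excerpt) and the
# authors' 10 TMAC/mW target (1e13 MAC/s per mW):
macs_per_neuron = 1e8
chip_macs = per_chip_low * macs_per_neuron  # MAC/s for one ~3e6-neuron die
chip_power_mw = chip_macs / 1e13            # compute power per die

print(f"{per_chip_low:.1e} to {per_chip_high:.1e} neurons per chip")
print(f"{chip_power_mw:.0f} mW compute power per chip")   # ~30 mW
```

The two ends of the quoted range imply a consistent die size of about three million neurons per chip, and a compute budget of tens of milliwatts per die, which is at least in the ballpark of the low-power commercial applications the excerpt describes.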

I have a casual observation to make. While the authors of the roadmap came to this conclusion, “This study concludes that useful neural computation machines based on biological principles at the size of the human brain seems technically within our grasp,” they’re also leaving themselves some wiggle room, because the truth is no one knows if copying a human brain with circuits and various devices will lead to ‘thinking’ as we understand the concept.

For anyone who’s interested, you can search this blog for neuromorphic engineering, artificial brains, and/or memristors as I have many postings on these topics. One of my most recent on the topic of artificial brains is an April 7, 2014 piece titled: Brain-on-a-chip 2014 survey/overview.

One last observation about the movie ‘Transcendence’, has no one else noticed that it’s the ‘Easter’ story with a resurrected and digitized ‘Jesus’?

* Space inserted between ‘brains’ and ‘from’ in head on April 21, 2014.