Category Archives: nanotechnology

Reducing animal testing for nanotoxicity—PETA (People for the Ethical Treatment of Animals) presentation at NanoTox 2014

Writing about nanotechnology can lead you in many different directions, such as this news about PETA (People for the Ethical Treatment of Animals) and its poster presentation at the NanoTox 2014 conference being held in Antalya, Turkey, from April 23 to 26, 2014. From the April 22, 2014 PETA news release on EurekAlert,

PETA International Science Consortium Ltd.’s nanotechnology expert will present a poster titled “A tiered-testing strategy for nanomaterial hazard assessment” at the 7th International Nanotoxicology Congress [NanoTox 2014] to be held April 23-26, 2014, in Antalya, Turkey.

Dr. Monita Sharma will outline a strategy consistent with the 2007 report from the US National Academy of Sciences, “Toxicity Testing in the 21st Century: A Vision and a Strategy,” which recommends use of non-animal methods involving human cells and cell lines for mechanistic pathway–based toxicity studies.

Based on the current literature, the proposed strategy includes thorough characterization of nanomaterials as manufactured, as intended for use, and as present in the final biological system; assessment using multiple in silico and in vitro model systems, including high-throughput screening (HTS) assays and 3D systems; and data sharing among researchers from government, academia, and industry through web-based tools, such as the Nanomaterial Registry and NanoHUB.

Implementation of the proposed strategy will generate meaningful information on nanomaterial properties and their interaction with biological systems. It is cost-effective, reduces animal use, and can be applied for assessing risk and making intelligent regulatory decisions regarding the use and disposal of nanomaterials.

PETA’s International Science Consortium has recently launched a nanotechnology webpage which provides a good overview of the basics and, as one would expect from PETA, a discussion of relevant strategies that eliminate the use of animals in nanotoxicity assessment,

What is nano?

The concept of fabricating materials at an atomic scale was introduced in 1959 by physicist Richard Feynman in his talk entitled “There’s Plenty of Room at the Bottom.” The term “nano” originates from the Greek word for “dwarf,” which represents the very essence of nanomaterials. In the International System of Units, the prefix “nano” means one-billionth, or 10⁻⁹; therefore, one nanometer is one-billionth of a meter, which is smaller than the thickness of a sheet of paper or a strand of hair.  …

Are there different kinds of nano?

The possibility of controlling biological processes using custom-synthesized materials at the nanoscale has intrigued researchers from different scientific fields. With the ever-increasing sophistication of nanomaterial synthesis, there has been an exponential increase in the number and type of nanomaterials available or that can be custom synthesized. Table 1 lists some of the nanomaterials that are currently available.

….

Oddly, given the question ‘Are there different kinds of nano?’, there’s no mention of nanobots.  Still it’s understandable that they’d focus on nanomaterials which are, as far as I know, the only ‘nano’ anything tested for toxicity. On that note, PETA’s Nanotechnology page offers this revelatory listing (scroll down about 3/4 of the way),

The following are some of the web-based tools being used by nanotoxicologists and material scientists:

Getting back to the NanoTox conference being held now in Antalya, I noticed a couple of familiar names on the list of keynote speakers (scroll down about 15% of the way): Kostas Kostarelos (last mentioned in a Feb. 28, 2014 posting about scientific publishing and impact factors; scroll down about 1/2 way) and Mark Wiesner (last mentioned in a Nov. 13, 2013 posting about a major grant for one of his projects).

Canon-Molecular Imprints deal and its impact on shrinking chips (integrated circuits)

There’s quite an interesting April 20, 2014 essay on Nanotechnology Now which provides some insight into the nanoimprinting market. I recommend reading it but for anyone who is not intimately familiar with the scene, here are a few excerpts along with my attempts to decode this insider’s (from Martini Tech) view,

About two months ago, important news shook the small but lively Japanese nanoimprint community: Canon has decided to acquire, making it a wholly-owned subsidiary, Texas-based Molecular Imprints, a strong player in the nanotechnology industry and one of the main makers of nanoimprint devices such as the Imprio 450 and other models.

So, Canon, a Japanese company, has made a move into the nanoimprinting sector by purchasing Molecular Imprints, a US company based in Texas, outright.

This next part concerns the anticipated end of Moore’s Law (the observation that roughly every 18 months computer chips get smaller and faster) and explains why the major chip makers are searching for new solutions, as per the fifth paragraph in this excerpt,

Molecular Imprints` devices are aimed at the IC [integrated circuits, aka chips, I think] patterning market and not just at the relatively smaller applications market to which nanoimprint is usually confined: patterning of bio culture substrates, thin film applications for the solar industry, anti-reflection films for smartphone and LED TV screens, patterning of surfaces for microfluidics among others.

While each one of the markets listed above has the potential of explosive growth in the medium-long term future, at the moment none of them is worth more than a few percentage points, at best, of the IC patterning market.

The mainstream technology behind IC patterning is still optical stepper lithography and the situation is not likely to change in the near term future.

However, optical lithography has its limitations, the main challenge to its 40-year dominance not coming only from technological and engineering issues, but mostly from economical ones.

While from a strictly technological point of view it may still be possible for the major players in the chip industry (Intel, GF, TSMC, Nvidia among others) to go ahead with optical steppers and reach the 5nm node using multi-patterning and immersion, the cost increases associated with each die shrink are becoming staggeringly high.

A top-of-the-line stepper in the early ’90s could be bought for a few million dollars; now the price has increased to some tens of millions for the top machines.
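As a rough illustration of that escalation, here is a minimal compound-growth sketch in Python. The starting price, growth rate, and time span are my own assumptions chosen to land in the essay’s “few millions” to “tens of millions” range; they are not figures from the essay:

```python
# Back-of-envelope: how stepper prices compound over two decades.
# All inputs are illustrative assumptions, not figures from the essay.

start_price_musd = 5.0  # assumed early-1990s price, in millions of USD
annual_growth = 0.12    # assumed average annual price increase (~12%)
years = 22              # roughly early 1990s to 2014

price_musd = start_price_musd * (1 + annual_growth) ** years
print(f"Projected top-end stepper price: ${price_musd:.0f}M")
# -> about $61M, consistent with "some tens of millions"
```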

The essay describes the market impact this acquisition may have for Canon,

Molecular Imprints has been a company on the forefront of commercialization of nanoimprint-based solutions for IC manufacturing, but so far their solutions have yet to become a viable alternative [in the] HVM [high-volume manufacturing] IC manufacturing market.

The main stumbling blocks for IC patterning using nanoimprint technology are: the occurrence of defects on the mask that inevitably replicates them on each substrate and the lack of alignment precision between the mold and the substrate needed to pattern multi-layered structures.

Therefore, applications for nanoimprint have been limited to markets where no non-periodical structure patterning is needed and where one-layered patterning is sufficient.

But the big market where everyone is aiming for is, of course, IC patterning and this is where much of the R&D effort goes.

While logic patterning with nanoimprint may still be years away, simple patterning of NAND structures may be feasible in the near future, and the purchase of Molecular Imprints by Canon is a step in this direction.

Patterning of NAND structures may still require multi-layered structures, but the alignment precision needed is considerably lower than logic.

Moreover, NAND requirements for defectivity are more relaxed than for logic due to the inherent redundancy of the design, therefore, NAND manufacturing is the natural first step for nanoimprint in the IC manufacturing market and, if successful, it may open a whole new range of opportunities for the whole sector.

Assuming I’ve read the rest of this essay correctly, here’s my summary: a number of techniques are being employed to make chips smaller and more efficient. Canon has purchased a company versed in a technique that creates NAND (you can find definitions here) structures, in the hope that this technique can be commercialized and that Canon becomes dominant in the sector because (1) it got there first and/or (2) NAND manufacturing becomes a clear leader, crushing competition from other technologies. This could cover the short-term goals and, I imagine Canon hopes, the long-term ones too.

It was a real treat coming across this essay as it’s an insider’s view. So, thank you to the folks at Martini Tech who wrote this. You can find Molecular Imprints here.

Geckskin update

It appears that researchers at the University of Massachusetts at Amherst have found a way to make their ‘Geckskin’, an adhesive product modeled on a gecko’s feet (a lizard famously able to stick to an object by a single toe), adhere to the widest range of surfaces yet (from an April 17, 2014 University of Massachusetts news release [also on EurekAlert but dated April 18, 2014]),

The ability to stick objects to a wide range of surfaces such as drywall, wood, metal and glass with a single adhesive has been the elusive goal of many research teams across the world, but now a team of University of Massachusetts Amherst inventors describe a new, more versatile version of their invention, Geckskin, that can adhere strongly to a wider range of surfaces, yet releases easily, like a gecko’s feet.

“Imagine sticking your tablet on a wall to watch your favorite movie and then moving it to a new location when you want, without the need for pesky holes in your painted wall,” says polymer science and engineering professor Al Crosby. Geckskin is a ‘gecko-like,’ reusable adhesive device that they had previously demonstrated can hold heavy loads on smooth surfaces such as glass.

‘Geckskin’, first mentioned here in an April 3, 2012 posting, features a different approach to mimicking the gecko’s adhesiveness; most teams focus on the nanoscopic hairs on the gecko’s feet, while the researchers at the University of Massachusetts have worked on ‘draping’,

The University of Massachusetts team’s innovation (from the Feb. 17, 2012 news item),

The key innovation by Bartlett and colleagues was to create an integrated adhesive with a soft pad woven into a stiff fabric, which allows the pad to “drape” over a surface to maximize contact. Further, as in natural gecko feet, the skin is woven into a synthetic “tendon,” yielding a design that plays a key role in maintaining stiffness and rotational freedom, the researchers explain.

Importantly, the Geckskin’s adhesive pad uses simple everyday materials such as polydimethylsiloxane (PDMS), which holds promise for developing an inexpensive, strong and durable dry adhesive.

The UMass Amherst researchers are continuing to improve their Geckskin design by drawing on lessons from the evolution of gecko feet, which show remarkable variation in anatomy. “Our design for Geckskin shows the true integrative power of evolution for inspiring synthetic design that can ultimately aid humans in many ways,” says Irschick.

Two years later, the researchers have proved their concept across a range of surfaces (from the 2014 news release),

In Geckskin, the researchers created this ability by combining soft elastomers and ultra-stiff fabrics such as glass or carbon fiber fabrics. By “tuning” the relative stiffness of these materials, they can optimize Geckskin for a range of applications, the inventors say.

To substantiate their claims of Geckskin’s properties, the UMass Amherst team compared three versions to the abilities of a living Tokay gecko on several surfaces, as described in their journal article this month. As predicted by their theory, one Geckskin version matches and even exceeds the gecko’s performance on all tested surfaces.

Irschick points out, “The gecko’s ability to stick to a variety of surfaces is critical for its survival, but it’s equally important to be able to release and re-stick whenever it wants. Geckskin displays the same ability on different commonly used surfaces, opening up great possibilities for new technologies in the home, office or outdoors.”

Here’s a link to and a citation for the paper,

Creating Gecko-Like Adhesives for “Real World” Surfaces by Daniel R. King, Michael D. Bartlett, Casey A. Gilman, Duncan J. Irschick, and Alfred J. Crosby. Advanced Materials. Article first published online: 17 APR 2014 DOI: 10.1002/adma.201306259

© 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim

This article is behind a paywall.

The researchers have produced a video (silent) where they demonstrate the Geckskin’s adhesive properties over a number of different surfaces. At seven minutes or so, it runs a bit longer than the videos I embed here but you can find it at http://www.youtube.com/watch?v=SayqhqTZoxI&feature=youtu.be.

The Irish mix up some graphene

There was a lot of excitement (one might almost call it giddiness) earlier this week about a new technique from Irish researchers for producing graphene. From an April 20, 2014 article by Jacob Aron for New Scientist (Note: A link has been removed),

First, pour some graphite powder into a blender. Add water and dishwashing liquid, and mix at high speed. Congratulations, you just made the wonder material graphene.

This surprisingly simple recipe is now the easiest way to mass-produce pure graphene – sheets of carbon just one atom thick. The material has been predicted to revolutionise the electronics industry, based on its unusual electrical and thermal properties. But until now, manufacturing high-quality graphene in large quantities has proved difficult – the best lab techniques manage less than half a gram per hour.

“There are companies producing graphene at much higher rates, but the quality is not exceptional,” says Jonathan Coleman of Trinity College Dublin in Ireland.

Coleman’s team was contracted by Thomas Swan, a chemicals firm based in Consett, UK, to come up with something better. From previous work they knew that it is possible to shear graphene from graphite, the form of carbon found in pencil lead. Graphite is essentially made from sheets of graphene stacked together like a deck of cards, and sliding it in the right way can separate the layers.

Rachel Courtland chimes in with her April 21, 2014 post for the Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website) (Note: A link has been removed),

The first graphene was made by pulling layers off of graphite using Scotch tape. Now, in keeping with the low-tech origins of the material, a team at Trinity College Dublin has found that it should be possible to make large quantities of the stuff by mixing up some graphite and stabilizing detergent with a blender.

The graphene produced in this manner isn’t anything like the wafer-scale sheets of single-layer graphene that are being grown by Samsung, IBM and others for high-performance electronics. Instead, the blender-made variety consists of small flakes that are exfoliated off of bits of graphite and then separated out by centrifuge. But small-scale graphene has its place, the researchers say. …

An April 22, 2014 CRANN (the Centre for Research on Adaptive Nanostructures and Nanodevices) at Trinity College Dublin news release (also on Nanowerk as an April 20, 2014 news item) provides more details about the new technique and about the private/public partnership behind it,

Research team led by Prof Jonathan Coleman discovers new research method to produce large volumes of high quality graphene.

Researchers in AMBER, the Science Foundation Ireland-funded materials science centre headquartered at CRANN, Trinity College Dublin, have, for the first time, developed a new method of producing industrial quantities of high quality graphene. …

The discovery will change the way many consumer and industrial products are manufactured. The materials will have a multitude of potential applications including advanced food packaging; high strength plastics; foldable touch screens for mobile phones and laptops; super-protective coatings for wind turbines and ships; faster broadband and batteries with dramatically higher capacity than anything available today.

Thomas Swan Ltd. has worked with the AMBER research team for two years and has signed a license agreement to scale up production and make the high quality graphene available to industry globally. The company has already announced two new products as a result of the research discovery (Elicarb® Graphene Powder and Elicarb® Graphene Dispersion).

Until now, researchers have been unable to produce high quality graphene in large enough quantities. The subject of ongoing international research, large-scale production of pristine graphene is something the AMBER team is the first to perfect, and the work has been highlighted by the highly prestigious journal Nature Materials as a global breakthrough. Professor Coleman and his team used a simple method for transforming flakes of graphite into defect-free graphene using commercially available tools, such as high-shear mixers. They demonstrated that not only could graphene-containing liquids be produced in standard lab-scale quantities of a few hundred millilitres, but the process could be scaled up to produce hundreds of litres and beyond.
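To get a feel for what scaling from millilitres to hundreds of litres means in terms of material, here is a minimal back-of-envelope sketch in Python. The graphene concentration is a hypothetical figure for illustration only; the measured values are reported in the Nature Materials paper:

```python
# Back-of-envelope: graphene yield per batch at two processing volumes.
# The concentration is a hypothetical figure for illustration only;
# the measured values are reported in the Nature Materials paper.

concentration_g_per_l = 0.1  # assumed graphene concentration after shear mixing

for label, volume_l in [("lab scale, 0.3 L", 0.3),
                        ("scaled up, 300 L", 300.0)]:
    yield_g = concentration_g_per_l * volume_l
    print(f"{label}: ~{yield_g:g} g of graphene per batch")

# Even at this modest assumed concentration, moving from a few hundred
# millilitres to hundreds of litres takes each batch from a few
# hundredths of a gram to tens of grams.
```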

Minister for Research and Innovation Sean Sherlock, TD commented; “Professor Coleman’s discovery shows that Ireland has won the worldwide race on the production of this ‘miracle material’. This is something that USA, China, Australia, UK, Germany and other leading nations have all been striving for and have not yet achieved. This announcement shows how the Irish Government’s strategy of focusing investment in science with impact, as well as encouraging industry and academic collaboration, is working.”

Here’s a link to and a citation for the researchers’ paper,

Scalable production of large quantities of defect-free few-layer graphene by shear exfoliation in liquids by Keith R. Paton, Eswaraiah Varrla, Claudia Backes, Ronan J. Smith, Umar Khan, Arlene O’Neill, Conor Boland, Mustafa Lotya, Oana M. Istrate, Paul King, Tom Higgins, Sebastian Barwich, Peter May, Pawel Puczkarski, Iftikhar Ahmed, Matthias Moebius, Henrik Pettersson, Edmund Long, João Coelho, Sean E. O’Brien, Eva K. McGuire, Beatriz Mendoza Sanchez, Georg S. Duesberg, Niall McEvoy, Timothy J. Pennycook, et al. Nature Materials (2014) doi:10.1038/nmat3944 Published online 20 April 2014

This article is mostly behind a paywall but there is a free preview available through ReadCube Access.

For anyone who’s curious about AMBER, here’s more from the About Us page on the CRANN website (Note: A link has been removed),

In October 2013, a new Science Foundation Ireland funded research centre, AMBER (Advanced Materials and BioEngineering Research), was launched. AMBER is jointly hosted in TCD [Trinity College Dublin] by CRANN and the Trinity Centre for Bioengineering, and works in collaboration with the Royal College of Surgeons in Ireland and UCC. The centre provides a partnership between leading researchers in materials science and industry and will deliver internationally leading research that will be industrially and clinically informed, with outputs including new discoveries and devices in the ICT, medical device, and industrial technology sectors.

Finally, Thomas Swan Ltd. can be found here.

Move over laser—the graphene/carbon nanotube spaser is here, on your t-shirt

This graphene/carbon nanotube research comes from Australia, according to an April 16, 2014 news item on Nanowerk,

A team of researchers from Monash University’s [Australia] Department of Electrical and Computer Systems Engineering (ECSE) has modelled the world’s first spaser …

An April 16, 2014 Monash University news release, which originated the news item, describes the spaser and its relationship to lasers,

A new version of “spaser” technology being investigated could mean that mobile phones become so small, efficient, and flexible they could be printed on clothing.

A spaser is effectively a nanoscale laser or nanolaser. It emits a beam of light through the vibration of free electrons, rather than the space-consuming electromagnetic wave emission process of a traditional laser.

The news release also provides more details about the graphene/carbon nanotube spaser research and the possibility of turning t-shirts into telephones,

PhD student and lead researcher Chanaka Rupasinghe said the modelled spaser design using carbon would offer many advantages.

“Other spasers designed to date are made of gold or silver nanoparticles and semiconductor quantum dots while our device would be comprised of a graphene resonator and a carbon nanotube gain element,” Chanaka said.

“The use of carbon means our spaser would be more robust and flexible, would operate at high temperatures, and be eco-friendly.

“Because of these properties, there is the possibility that in the future an extremely thin mobile phone could be printed on clothing.”

Spaser-based devices can be used as an alternative to current transistor-based devices such as microprocessors, memory, and displays to overcome current miniaturising and bandwidth limitations.

The researchers chose to develop the spaser using graphene and carbon nanotubes. They are more than a hundred times stronger than steel and can conduct heat and electricity much better than copper. They can also withstand high temperatures.

Their research showed for the first time that graphene and carbon nanotubes can interact and transfer energy to each other through light. These optical interactions are very fast and energy-efficient, and so are suitable for applications such as computer chips.

“Graphene and carbon nanotubes can be used in applications where you need strong, lightweight, conducting, and thermally stable materials due to their outstanding mechanical, electrical and optical properties. They have been tested as nanoscale antennas, electric conductors and waveguides,” Chanaka said.

Chanaka said a spaser generated high-intensity electric fields concentrated into a nanoscale space. These are much stronger than those generated by illuminating metal nanoparticles by a laser in applications such as cancer therapy.

“Scientists have already found ways to guide nanoparticles close to cancer cells. We can move graphene and carbon nanotubes following those techniques and use the highly concentrated fields generated through the spasing phenomenon to destroy individual cancer cells without harming the healthy cells in the body,” Chanaka said.

Here’s a link to and a citation for the paper,

Spaser Made of Graphene and Carbon Nanotubes by Chanaka Rupasinghe, Ivan D. Rukhlenko, and Malin Premaratne. ACS Nano, 2014, 8 (3), pp 2431–2438. DOI: 10.1021/nn406015d Publication Date (Web): February 23, 2014
Copyright © 2014 American Chemical Society

This paper is behind a paywall.

Chiral breathing at the Institute of Physical Chemistry of the Polish Academy of Sciences (IPC PAS)

An April 17, 2014 news item on ScienceDaily highlights some research about a polymer that has some special properties,

Electrically controlled glasses with continuously adjustable transparency, new polarisation filters, and even chemosensors capable of detecting single molecules of specific chemicals could be fabricated thanks to a new polymer unprecedentedly combining optical and electrical properties.

An international team of chemists from Italy, Germany, and Poland developed a polymer with unique optical and electric properties. Components of this polymer change their spatial configuration depending on the electric potential applied. In turn, the polarisation of transmitted light is affected. The material can be used, for instance, in polarisation filters and window glasses with continuously adjustable transparency. Due to its mechanical properties, the polymer is also perfectly suitable for fabrication of chemical sensors for selective detection and determination of optically active (chiral) forms of an analyte.

The research findings of the international team headed by Prof. Francesco Sannicolo from the Universita degli Studi di Milano were recently published in Angewandte Chemie International Edition.

“Until now, to give polymers chiral properties, chiral pendants were attached to the polymer backbone. In such designs the polymer was used as a scaffold only. Our polymer is exceptional, with chirality inherent to it, and with no pending groups. The polymer is both a scaffold and an optically active chiral structure. Moreover, the polymer conducts electricity,” comments Prof. Włodzimierz Kutner from the Institute of Physical Chemistry of the Polish Academy of Sciences (IPC PAS) in Warsaw, one of the initiators of the research.

An April 17, 2014 IPC PAS news release (also on EurekAlert), which originated the news item, describes chirality and the breathing metaphor with regard to this new polymer,

Chirality can be best explained by referring to mirror reflection. If two varieties of the same object look like their mutual mirror images, they differ in chirality. Human hands provide perhaps the most universal example of chirality, and the difference between the left and right hand becomes obvious if we try to place a left-handed glove on a right hand. The same difference as between the left and right hand is between two chiral molecules with identical chemical composition. Each of them shows different optical properties, and differently rotates plane-polarised light. In such a case, chemists refer to one chemical compound existing as two optical isomers called enantiomers.

The polymer presented by Prof. Sannicolo’s team was developed on the basis of thiophene, an organic compound composed of a five-member aromatic ring containing a sulphur atom. Thiophene polymerisation gives rise to a chemically stable polymer of high conductivity. The basic component of the new polymer – its monomer – is a dimer with two halves, each made of two thiophene rings and one thianaphthene unit. The halves are connected at a single point and can be partially rotated with respect to each other by applying an electric potential. Depending on the orientation of the halves, the new polymer either assumes or loses chirality. This behaviour is fully reversible and resembles a breathing system, with the “chiral breathing” controlled by an external electric potential.

The development of a new polymer was initiated thanks to the research on molecular imprinting pursued at the Institute of Physical Chemistry of the PAS. The research resulted, for instance, in the development of polymers used as recognising units (receptors) in chemosensors, capable of selective capturing of molecules of various analytes, for instance nicotine, and also melamine, an ill-reputed chemical detrimental to human health, used as an additive to falsify protein content in milk and dairy products produced in China.

Generally, molecular imprinting consists of creating template-shaped cavities in polymer matrices, with molecules of interest used first as cavity templates. Subsequently, these templates are washed out of the polymer. As a result, the polymer contains traps with a shape and size matching those of the molecules of the removed template. To be used as a receptor in a chemosensor to recognize analyte molecules similar to the templates, or the templates themselves, the polymer imprinted with these cavities must show sufficient mechanical strength.

“Three-dimensional networks we attempted to build at the IPC PAS using existing two-dimensional thiophene derivatives just collapsed after the template molecules were removed. That’s why we asked our Italian partners, who specialise in the synthesis of thiophene derivatives, for assistance. The problem was to design and synthesise a three-dimensional thiophene derivative that would allow us to cross-link our polymers in three dimensions. The thiophene derivative synthesised in Milan has a stable three-dimensional structure, and the controllable chiral properties of the new polymer obtained after the derivative was polymerised turned out to be a nice surprise for all of us,” explains Prof. Kutner.

Here’s a link to and a citation for the paper,

Potential-Driven Chirality Manifestations and Impressive Enantioselectivity by Inherently Chiral Electroactive Organic Films by Prof. Francesco Sannicolò, Serena Arnaboldi, Prof. Tiziana Benincori, Dr. Valentina Bonometti, Dr. Roberto Cirilli, Prof. Lothar Dunsch, Prof. Włodzimierz Kutner, Prof. Giovanna Longhi, Prof. Patrizia R. Mussini, Dr. Monica Panigati, Prof. Marco Pierini, and Dr. Simona Rizzo. Angewandte Chemie International Edition Volume 53, Issue 10, pages 2623–2627, March 3, 2014. Article first published online: 5 FEB 2014 DOI: 10.1002/anie.201309585

© 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim

This article is behind a paywall.

State-of-the-art biotech and nanophotonics equipment at Québec’s Institut national de la recherche scientifique (INRS)

The Canada Foundation for Innovation (a federal government funding agency) has awarded two infrastructure grants to Québec’s Institut national de la recherche scientifique (INRS), or more specifically its Énergie Matériaux Télécommunications Research Centre, according to an April 18, 2014 news item on Azonano,

Professor Marc André Gauthier and Professor Luca Razzari of the Énergie Matériaux Télécommunications Research Centre have each been awarded large grants from the John R. Evans Leaders Fund of the Canada Foundation for Innovation (CFI) for the acquisition of state-of-the-art biotech and nanophotonics equipment.

To this funding will be added matching grants from the Ministère de l’Enseignement supérieur, de la Recherche, de la Science et de la Technologie (MESRST). These new laboratories will help us develop new approaches for improving health and information technologies, train the next generation of highly qualified high-tech workers, and transfer technology and expertise to local startups.

An April 17, 2014 INRS news release by Gisèle Bolduc, which originated the news item (for those who prefer their news in French), provides more details,

Bio-hybrid materials

Professor Gauthier’s new Laboratory of Bio-Hybrid Materials (LBM) will enable him to tackle the numerous challenges of designing these functional materials and make it possible for the biomedical and biotech sectors to take full advantage of their new and unique properties. Professor Gauthier and his team will work on developing new bioorganic reactions involving synthetic and natural molecules and improving those that already exist. They will examine the architecture of protein-polymer grafts and develop methods for adjusting the structure and function of bio-hybrid materials in order to evaluate their therapeutic potential.

Plasmonic nanostructures and nonlinear optics

Professor Luca Razzari will use his Laboratory of Nanostructure-Assisted Spectroscopy and Nonlinear Optics (NASNO Lab) to document the properties of plasmonic nanostructures, improve nanospectroscopies and explore new photonic nanodevices. He will also develop new biosensors able to identify very small numbers of biomarkers. This may have an important impact in the early diagnosis of several diseases such as cancer and life-threatening infectious diseases. Besides this, he will investigate a new generation of nanoplasmonic devices for information and communications technology applications.

Congratulations!

Iran’s work on turmeric (curcumin) as an anti-cancer drug

It’s been a while since I’ve mentioned either Iran or curcumin (a constituent of turmeric) but an April 15, 2014 news item on Nanowerk has given me an opportunity to do both,

Nanotechnology researchers from Tarbiat Modarres University [Iran] produced a new drug capable of detecting and removing cancer cells using turmeric …

The compound is made of curcumin found in the extract of turmeric, and has desirable physical and chemical stability and prevents the proliferation of cancer cells.

An April 16, 2014 Iran Nanotechnology Initiative Council (INIC) news release, which despite its date appears to have originated the news item, fills in details about the research,

In this drug, curcumin with high efficiency (approximately 87%) was loaded in the polymeric nanocarrier, and it created a spherical structure with the size of 140 nm. The drug has high physical and chemical stability. The drug was used successfully in laboratory conditions in the treatment of a type of aggressive tumor in the central nervous system, called glioblastoma (GBM).

The interesting point is that the fatal effect of nanocurcumin on mature stem cells derived from marrow and natural cells of skin fibroblast is observed at a concentration higher than a concentration that is effective on cancer cells. In other words, no fatal effect on natural cells is observed at concentrations that are fatal to cancer cells. It shows that curcumin prefers to enter cancer cells.

The size range of the nanocarrier used in this research is 15-100 nm. Physical and chemical stability, non-toxicity, and biodegradability are among the main characteristics of the nanocarriers. Based on the results, the nanocarrier used in this research has no toxic effect on cells. In other words, all the death in the cells is caused by curcumin, and dendrosome only results in bioavailability and transference of the drug into the cells.

“The drug has the potential to affect a number of message delivery paths in the cells, one of which is cell proliferation path. Therefore, the drug prefers to enter cancer cells rather than various types of natural cells,” the researchers said.

Here’s a link to and a citation for the paper,

Dendrosomal curcumin nanoformulation downregulates pluripotency genes via miR-145 activation in U87MG glioblastoma cells by Maryam Tahmasebi Mirgani, Benedetta Isacchi, Majid Sadeghizadeh, Fabio Marra, Anna Rita Bilia, Seyed Javad Mowla, Farhood Najafi, & Esmael Babaei. International Journal of Nanomedicine, vol. 9, issue 1, January 2014, pp. 403-417. DOI: http://dx.doi.org/10.2147/IJN.S48136

This is an open access paper.

I last wrote about turmeric or more specifically curcumin in a December 25, 2011 posting about research at UCLA (University of California at Los Angeles).

Earth Day, Water Day, and every day

I’m blaming my confusion on the American Chemical Society (ACS), which seemed to be celebrating Earth Day on April 15, 2014, as per its news release highlighting their “Chemists Celebrate Earth Day” video series, while in Vancouver, Canada, we’re celebrating it on April 26, 2014, and elsewhere it seems to be on April 20 this year. Regardless, here’s more about how chemists are celebrating, from the ACS news release,

Water is arguably the most important resource on the planet. In celebration of Earth Day, the American Chemical Society (ACS) is showcasing three scientists whose research keeps water safe, clean and available for future generations. Geared toward elementary and middle school students, the “Chemists Celebrate Earth Day” series highlights the important work that chemists and chemical engineers do every day. The videos are available at http://bit.ly/CCED2014.

The series focuses on the following subjects:

  • Transforming Tech Toys – Featuring Aydogan Ozcan, Ph.D., of UCLA: Ozcan takes everyday gadgets and turns them into powerful mobile laboratories. He’s made a cell phone into a blood analyzer and a bacteria detector, and now he’s built a device that turns a cell phone into a water tester. It can detect very harmful mercury even at very low levels.
  • All About Droughts – Featuring Collins Balcombe of the U.S. Bureau of Reclamation: Balcombe’s job is to keep your drinking water safe and to find new ways to re-use the water that we flush away every day so that it doesn’t go to waste, especially in areas that don’t get much rain.
  • Cleaning Up Our Water – Featuring Anne Morrissey, Ph.D., of Dublin City University: We all take medicines, but did you know that sometimes the medicine doesn’t stay in our bodies? It’s up to Anne Morrissey to figure out how to get potentially harmful pharmaceuticals out of the water supply, and she’s doing it using one of the most plentiful things on the planet: sunlight.

Sadly, I missed marking World Water Day which, according to a March 21, 2014 news release I received, was celebrated on Saturday, March 22, 2014, with worldwide events and the release of a new UN report,

World Water Day: UN Stresses Water and Energy Issues 

Tokyo Leads Public Celebrations Around the World

Tokyo — March 21 — The deep-rooted relationships between water and energy were highlighted today during main global celebrations in Tokyo marking the United Nations’ annual World Water Day.

“Water and energy are among the world’s most pre-eminent challenges. This year’s focus of World Water Day brings these issues to the attention of the world,” said Michel Jarraud, Secretary-General of the World Meteorological Organization and Chair of UN-Water, which coordinates World Water Day and freshwater-related efforts UN system-wide.

The UN predicts that by 2030 the global population will need 35% more food, 40% more water and 50% more energy. Already today 768 million people lack access to improved water sources, 2.5 billion people have no improved sanitation and 1.3 billion people cannot access electricity.

“These issues need urgent attention – both now and in the post-2015 development discussions. The situation is unacceptable. It is often the same people who lack access to water and sanitation who also lack access to energy,” said Mr. Jarraud.

The 2014 World Water Development Report (WWDR) – a UN-Water flagship report, produced and coordinated by the World Water Assessment Programme, which is hosted and led by UNESCO – is released on World Water Day as an authoritative status report on global freshwater resources. It highlights the need for policies and regulatory frameworks that recognize and integrate approaches to water and energy priorities.

WWDR, a triennial report from 2003 to 2012, this year becomes an annual edition, responding to the international community’s expression of interest in a concise, evidence-based and yearly publication with a specific thematic focus and recommendations.

WWDR 2014 underlines how water-related issues and choices impact energy and vice versa. For example: drought diminishes energy production, while lack of access to electricity limits irrigation possibilities.

The report notes that roughly 75% of all industrial water withdrawals are used for energy production. Tariffs also illustrate this interdependence: if water is subsidized to sell below cost (as is often the case), energy producers – major water consumers – are less likely to conserve it.  Energy subsidies, in turn, drive up water usage.

The report stresses the imperative of coordinating political governance and ensuring that water and energy prices reflect real costs and environmental impacts.

“Energy and water are at the top of the global development agenda,” said the Rector of United Nations University, David Malone, this year’s coordinator of World Water Day on behalf of UN-Water together with the United Nations Industrial Development Organization (UNIDO).

“Significant policy gaps exist in this nexus at present, and the UN plays an instrumental role in providing evidence and policy-relevant guidance. Through this day, we seek to inform decision-makers, stakeholders and practitioners about the interlinkages, potential synergies and trade-offs, and highlight the need for appropriate responses and regulatory frameworks that account for both water and energy priorities. From UNU’s perspective, it is essential that we stimulate more debate and interactive dialogue around possible solutions to our energy and water challenges.”

UNIDO Director-General LI Yong emphasized the importance of water and energy for inclusive and sustainable industrial development.

“There is a strong call today for integrating the economic dimension, and the role of industry and manufacturing in particular, into the global post-2015 development priorities. Experience shows that environmentally sound interventions in manufacturing industries can be highly effective and can significantly reduce environmental degradation. I am convinced that inclusive and sustainable industrial development will be a key driver for the successful integration of the economic, social and environmental dimensions,” said Mr. LI.

Rather unusually, Michael Berger recently published two Nanowerk Spotlight articles about water (is there a theme, anyone?) within 24 hours of each other. In his March 26, 2014 Spotlight article, Michael Berger focuses on graphene and water remediation (Note: Links have been removed),

The unique properties of nanomaterials are beneficial in applications to remove pollutants from the environment. The extremely small size of nanomaterial particles creates a large surface area in relation to their volume, which makes them highly reactive, compared to non-nano forms of the same materials.

The potential impact areas for nanotechnology in water applications are divided into three categories: treatment and remediation; sensing and detection; and pollution prevention (read more: “Nanotechnology and water treatment”).

Silver, iron, gold, titanium oxides and iron oxides are some of the commonly used nanoscale metals and metal oxides cited by the researchers that can be used in environmental remediation (read more: “Overview of nanomaterials for cleaning up the environment”).

A more recent entrant into this nanomaterial arsenal is graphene. Individual graphene sheets and their functionalized derivatives have been used to remove metal ions and organic pollutants from water. These graphene-based nanomaterials show quite high adsorption performance as adsorbents. However they also cause additional cost because the removal of these adsorbent materials after usage is difficult and there is the risk of secondary environmental pollution unless the nanomaterials are collected completely after usage.

One solution to this problem would be the assembly of individual sheets into three-dimensional (3D) macroscopic structures which would preserve the unique properties of individual graphene sheets, and offer easy collecting and recycling after water remediation.

The March 27, 2014 Nanowerk Spotlight article was written by someone at Alberta’s (Canada) Ingenuity Lab and focuses on their ‘nanobiological’ approach to water remediation (Note: Links have been removed),

At Ingenuity Lab in Edmonton, Alberta, Dr. Carlo Montemagno and a team of world-class researchers have been investigating plausible solutions to existing water purification challenges. They are building on Dr. Montemagno’s earlier patented discoveries by using a naturally-existing water channel protein as the functional unit in water purification membranes [4].

Aquaporins are water-transport proteins that play an important osmoregulation role in living organisms [5]. These proteins boast exceptionally high water permeability (~10¹⁰ water molecules/s), high selectivity for pure water molecules, and a low energy cost, which make aquaporin-embedded membranes well suited as an alternative to conventional RO membranes.

Unlike synthetic polymeric membranes, which are driven by the high pressure-induced diffusion of water through size selective pores, this technology utilizes the biological osmosis mechanism to control the flow of water in cellular systems at low energy. In nature, the direction of osmotic water flow is determined by the osmotic pressure difference between compartments, i.e. water flows toward higher osmotic pressure compartment (salty solution or contaminated water). This direction can however be reversed by applying a pressure to the salty solution (i.e., RO).

The principle of RO is based on the semipermeable characteristics of the separating membrane, which allows the transport of only water molecules depending on the direction of osmotic gradient. Therefore, as envisioned in the recent publication (“Recent Progress in Advanced Nanobiological Materials for Energy and Environmental Applications”), the core of Ingenuity Lab’s approach is to control the direction of water flow through aquaporin channels with a minimum level of pressure and to use aquaporin-embedded biomimetic membranes as an alternative to conventional RO membranes.
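As a rough check on why that per-channel permeability is so attractive, here is a minimal back-of-envelope sketch in Python. The per-channel rate comes from the excerpt above; the target flux is an assumed, typical figure for commercial RO membranes, not one from Ingenuity Lab:

```python
# Back-of-envelope: aquaporin channels needed to match a typical RO membrane.
# Per-channel permeability is from the article; the target flux is an
# assumed, typical value for commercial RO membranes (not from Ingenuity Lab).

molecules_per_channel_per_s = 1e10   # ~10^10 water molecules/s per channel
water_molecule_volume_m3 = 3.0e-29   # 18 g/mol / Avogadro / 1000 kg/m^3

channel_flux_m3_per_s = molecules_per_channel_per_s * water_molecule_volume_m3

target_flux_l_per_m2_h = 40.0        # assumed typical RO membrane flux
target_flux_m3_per_m2_s = target_flux_l_per_m2_h / 1000 / 3600

channels_per_m2 = target_flux_m3_per_m2_s / channel_flux_m3_per_s
print(f"Per-channel throughput: {channel_flux_m3_per_s:.1e} m^3/s")
print(f"Channels needed per m^2: {channels_per_m2:.1e}")
# -> ~4e13 channels/m^2, i.e. a few tens of channels per square micrometre,
#    a packing density cell membranes achieve routinely.
```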

Here’s a link to and a citation for Montemagno’s and his colleague’s paper,

Recent Progress in Advanced Nanobiological Materials for Energy and Environmental Applications by Hyo-Jick Choi and Carlo D. Montemagno. Materials 2013, 6(12), 5821-5856; doi:10.3390/ma6125821

This paper is open access.

Returning to where I started, here’s a water video featuring graphene from the ACS celebration of Earth Day 2014,

Happy Earth Day!

Roadmap to neuromorphic engineering (digital and analog) for the creation of artificial brains *from the Georgia (US) Institute of Technology

While I didn’t mention neuromorphic engineering in my April 16, 2014 posting, which focused on the more general aspect of nanotechnology in Transcendence, a movie starring Johnny Depp and opening on April 18, that specialty (neuromorphic engineering) is what makes the events in the movie ‘possible’ (assuming very large stretches of imagination bringing us into the realm of implausibility and beyond). From the IMDB.com plot synopsis for Transcendence,

Dr. Will Caster (Johnny Depp) is the foremost researcher in the field of Artificial Intelligence, working to create a sentient machine that combines the collective intelligence of everything ever known with the full range of human emotions. His highly controversial experiments have made him famous, but they have also made him the prime target of anti-technology extremists who will do whatever it takes to stop him. However, in their attempt to destroy Will, they inadvertently become the catalyst for him to succeed to be a participant in his own transcendence. For his wife Evelyn (Rebecca Hall) and best friend Max Waters (Paul Bettany), both fellow researchers, the question is not if they canbut [sic] if they should. Their worst fears are realized as Will’s thirst for knowledge evolves into a seemingly omnipresent quest for power, to what end is unknown. The only thing that is becoming terrifyingly clear is there may be no way to stop him.

In the film, Caster’s intelligence/consciousness is uploaded to the computer, which suggests the computer has human brainlike qualities and abilities. The effort to make computer or artificial intelligence more humanlike is called neuromorphic engineering and, according to an April 17, 2014 news item on phys.org, researchers at the Georgia Institute of Technology (Georgia Tech) have published a roadmap for this pursuit,

In the field of neuromorphic engineering, researchers study computing techniques that could someday mimic human cognition. Electrical engineers at the Georgia Institute of Technology recently published a “roadmap” that details innovative analog-based techniques that could make it possible to build a practical neuromorphic computer.

A core technological hurdle in this field involves the electrical power requirements of computing hardware. Although a human brain functions on a mere 20 watts of electrical energy, a digital computer that could approximate human cognitive abilities would require tens of thousands of integrated circuits (chips) and a hundred thousand watts of electricity or more – levels that exceed practical limits.

The Georgia Tech roadmap proposes a solution based on analog computing techniques, which require far less electrical power than traditional digital computing. The more efficient analog approach would help solve the daunting cooling and cost problems that presently make digital neuromorphic hardware systems impractical.

“To simulate the human brain, the eventual goal would be large-scale neuromorphic systems that could offer a great deal of computational power, robustness and performance,” said Jennifer Hasler, a professor in the Georgia Tech School of Electrical and Computer Engineering (ECE), who is a pioneer in using analog techniques for neuromorphic computing. “A configurable analog-digital system can be expected to have a power efficiency improvement of up to 10,000 times compared to an all-digital system.”

An April 16, 2014 Georgia Tech news release by Rick Robinson, which originated the news item, describes why Hasler wants to combine analog (based on biological principles) and digital computing approaches to the creation of artificial brains,

Unlike digital computing, in which computers can address many different applications by processing different software programs, analog circuits have traditionally been hard-wired to address a single application. For example, cell phones use energy-efficient analog circuits for a number of specific functions, including capturing the user’s voice, amplifying incoming voice signals, and controlling battery power.

Because analog devices do not have to process binary codes as digital computers do, their performance can be both faster and much less power hungry. Yet traditional analog circuits are limited because they’re built for a specific application, such as processing signals or controlling power. They don’t have the flexibility of digital devices that can process software, and they’re vulnerable to signal disturbance issues, or noise.

In recent years, Hasler has developed a new approach to analog computing, in which silicon-based analog integrated circuits take over many of the functions now performed by familiar digital integrated circuits. These analog chips can be quickly reconfigured to provide a range of processing capabilities, in a manner that resembles conventional digital techniques in some ways.

Over the last several years, Hasler and her research group have developed devices called field programmable analog arrays (FPAA). Like field programmable gate arrays (FPGA), which are digital integrated circuits that are ubiquitous in modern computing, the FPAA can be reconfigured after it’s manufactured – hence the phrase “field-programmable.”

Hasler and Marr’s 29-page paper traces a development process that could lead to the goal of reproducing human-brain complexity. The researchers investigate in detail a number of intermediate steps that would build on one another, helping researchers advance the technology sequentially.

For example, the researchers discuss ways to scale energy efficiency, performance and size in order to eventually achieve large-scale neuromorphic systems. The authors also address how the implementation and the application space of neuromorphic systems can be expected to evolve over time.

“A major concept here is that we have to first build smaller systems capable of a simple representation of one layer of human brain cortex,” Hasler said. “When that system has been successfully demonstrated, we can then replicate it in ways that increase its complexity and performance.”

Among neuromorphic computing’s major hurdles are the communication issues involved in networking integrated circuits in ways that could replicate human cognition. In their paper, Hasler and Marr emphasize local interconnectivity to reduce complexity. Moreover, they argue it’s possible to achieve these capabilities via purely silicon-based techniques, without relying on novel devices that are based on other approaches.

Commenting on the recent publication, Alice C. Parker, a professor of electrical engineering at the University of Southern California, said, “Professor Hasler’s technology roadmap is the first deep analysis of the prospects for large scale neuromorphic intelligent systems, clearly providing practical guidance for such systems, with a nearer-term perspective than our whole-brain emulation predictions. Her expertise in analog circuits, technology and device models positions her to provide this unique perspective on neuromorphic circuits.”

Eugenio Culurciello, an associate professor of biomedical engineering at Purdue University, commented, “I find this paper to be a very accurate description of the field of neuromorphic data processing systems. Hasler’s devices provide some of the best performance per unit power I have ever seen and are surely on the roadmap for one of the major technologies of the future.”

Said Hasler: “In this study, we conclude that useful neural computation machines based on biological principles – and potentially at the size of the human brain — seems technically within our grasp. We think that it’s more a question of gathering the right research teams and finding the funding for research and development than of any insurmountable technical barriers.”

Here’s a link to and a citation for the roadmap,

Finding a roadmap to achieve large neuromorphic hardware systems by Jennifer Hasler and Bo Marr.  Front. Neurosci. (Frontiers in Neuroscience), 10 September 2013 | doi: 10.3389/fnins.2013.00118

This is an open access article (at least, the HTML version is).

I have looked at Hasler’s roadmap and it provides a good and readable overview (even for an amateur like me; note: you do need some tolerance for ‘not knowing’) of the state of neuromorphic engineering’s problems, and suggestions for overcoming them. Here’s a description of a human brain and its power requirements as compared to a computer’s (from the roadmap),

One of the amazing things about the human brain is its ability to perform tasks beyond current supercomputers using roughly 20 W of average power, a level smaller than most individual computer microprocessor chips. A single neuron emulation can tax a high performance processor; given there are 10¹² neurons operating at 20 W, each neuron consumes 20 pW average power. Assuming a neuron is conservatively performing the wordspotting computation (1000 synapses), 100,000 PMAC (PMAC = “Peta” MAC = 10¹⁵ MAC/s) would be required to duplicate the neural structure. A higher computational efficiency due to active dendritic line channels is expected as well as additional computation due to learning. The efficiency of a single neuron would be 5000 PMAC/W (or 5 TMAC/μW). A similar efficiency for 10¹¹ neurons and 10,000 synapses is expected.

Building neuromorphic hardware requires that technology must scale from current levels given constraints of power, area, and cost: all issues typical in industrial and defense applications; if hardware technology does not scale as other available technologies, as well as takes advantage of the capabilities of IC technology that are currently visible, it will not be successful.
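The arithmetic in that passage is easy to verify; here is a minimal sketch in Python reproducing the roadmap’s own numbers (all inputs are the figures quoted above):

```python
# Reproducing the roadmap's brain-efficiency arithmetic.
# All inputs are the figures quoted in the passage above.

brain_power_w = 20.0           # average power of a human brain
neurons = 1e12                 # neuron count used in the roadmap
total_mac_per_s = 1e5 * 1e15   # 100,000 PMAC (1 PMAC = 1e15 MAC/s)

power_per_neuron_pw = brain_power_w / neurons * 1e12
print(f"Per-neuron power: {power_per_neuron_pw:.0f} pW")               # -> 20 pW

efficiency_mac_per_s_per_w = total_mac_per_s / brain_power_w
print(f"Efficiency: {efficiency_mac_per_s_per_w / 1e15:.0f} PMAC/W")   # -> 5000
print(f"          = {efficiency_mac_per_s_per_w * 1e-6 / 1e12:.0f} TMAC/uW")  # -> 5
```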

One of my main areas of interest is the memristor (a nanoscale ‘device/circuit element’ which emulates synaptic plasticity), which was mentioned in a way that allows me to understand how the device fits (or doesn’t fit) into the overall conceptual framework (from the roadmap),

The density for a 10 nm EEPROM device acting as a synapse begs the question of whether other nanotechnologies can improve on the resulting Si [silicon] synapse density. One transistor per synapse is hard to beat by any approach, particularly in scaled down Si (like 10 nm), when the synapse memory, computation, and update is contained within the EEPROM device. Most nano device technologies [i.e., memristors (Snider et al., 2011)] show considerable difficulties to get to two-dimensional arrays at a similar density level. Recently, a team from U. of Michigan announced the first functioning memristor two-dimensional (30 × 30) array built on a CMOS chip in 2012 (Kim et al., 2012), claiming applications in neuromorphic engineering, the same group has published innovative devices for digital (Jo and Lu, 2009) and analog applications (Jo et al., 2011).

I notice that the reference to the University of Michigan’s work is relatively neutral in tone, and the memristor does not figure substantively in Hasler’s roadmap.

Intriguingly, there is a section on commercialization; I didn’t think the research was at that stage yet (from the roadmap),

Although one can discuss how to build a cortical computer on the size of mammals and humans, the question is how the technology developed for these large systems will impact commercial development. The cost for ICs [integrated circuits or chips] alone for cortex would be approximately $20M in current prices, which, although possible for large users, would not commonly be found in individual households. Throughout the digital processor approach, commercial market opportunities have driven the progress in the field. Getting neuromorphic technology integrated into the commercial environment allows us to ride this powerful economic “engine” rather than pull.

In most applications, the important commercial issues include minimization of cost, time to market, just sufficient performance for the application, power consumed, size and weight. The cost of a system built from ICs is, at a macro-level, a function of the area of those ICs, which then affects the number of ICs needed system wide, the number of components used, and the board space used. Efficiency of design tools, testing time and programming time also considerably affect system costs. Time to get an application to market is affected by the ability to reuse or quickly modify existing designs, and is reduced for a new application if existing hardware can be reconfigured, adapting to changing specifications, and a designer can utilize tools that allow rapid modifications to the design. Performance is key for any algorithm, but for a particular product, one only needs a solution to that particular problem; spending time to make the solution elegant is often a losing strategy.

The neuromorphic community has seen some early entries into commercial spaces, but we are just at the very beginning of the process. As the knowledge of neuromorphic engineering has progressed, which have included knowledge of sensor interfaces and analog signal processing, there have been those who have risen to the opportunities to commercialize these technologies. Neuromorphic research led to better understanding of sensory processing, particularly sensory systems interacting with other humans, enabling companies like Synaptics (touch pads), Foveon (CMOS color imagers), and Sonic Innovation (analog–digital hearing aids); Gilder provides a useful history of these two companies elsewhere (Gilder, 2005). From the early progress in analog signal processing we see companies like GTronix (acquired by National Semiconductor, then acquired by Texas Instruments) applying the impact of custom analog signal processing techniques and programmability toward auditory signal processing that improved sound quality requiring ultra-low power levels. Further, we see in companies like Audience there is some success from mapping the computational flow of the early stage auditory system, and implementing part of the event based auditory front-end to achieve useful results for improved voice quality. But the opportunities for the neuromorphic community are just beginning, and directly related to understanding the computational capabilities of these items. The availability of ICs that have these capabilities, whether or not one mentions they have any neuromorphic material, will further drive applications.

One expects that part of a cortex processing system would have significant computational possibilities, as well as cortex structures from smaller animals, and still be able to reach price points for commercial applications. In the following discussion, we will consider the potential of cortical structures at different levels of commercial applications. Figure 24 shows one typical block diagram, algorithms at each stage, resulting power efficiency (say based on current technology), as well as potential applications of the approach. In all cases, we will be considering a single die solution, typical for a commercial product, and will minimize the resulting communication power to I/O off the chip (no power consumed due to external memories or digital processing devices). We will assume a net computational efficiency of 10 TMAC/mW, corresponding to a lower power supply (i.e., mostly 500 mV, but not 180 mV) and slightly larger load capacitances; we make these assumptions as conservative pull back from possible applications, although we expect the more aggressive targets would be reachable. We assume the external power consumed is set by 1 event/second/neuron average event-rate off chip to a nearby IC. Given the input event rate is hard to predict, we don’t include that power requirement but assume it is handled by the input system. In all of these cases, getting the required computation using only digital techniques in a competitive size, weight, and especially power is hard to foresee.

We expect progress in these neuromorphic systems and that should find applications in traditional signal processing and graphics handling approaches. We will continue to have needs in computing that outpace our available computing resources, particularly at a power consumption required for a particular application. For example, the recent emphasis on cloud computing for academic/research problems shows the incredible need for larger computing resources than those directly available, or even projected to be available, for a portable computing platform (i.e., robotics). Of course a server per computing device is not a computing model that scales well. Given scaling limits on computing, both in power, area, and communication, one can expect to see more and more of these issues going forward.

We expect that a range of different ICs and systems will be built, all at different targets in the market. There are options for even larger networks, or integrating these systems with other processing elements on a chip/board. When moving to larger systems, particularly ones with 10–300 chips (3 × 10⁷ to 10⁹ neurons) or more, one can see utilization of stacking of dies, both decreasing the communication capacitance as well as board complexity. Stacking dies should roughly increase the final chip cost by the number of dies stacked.

In the following subsections, we overview general guidelines to consider when considering using neuromorphic ICs in the commercial market, first for low-cost consumer electronics, and second for a larger neuromorphic processor IC.
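Taking the quoted figures at face value, here is a minimal sketch in Python of the chip count and per-chip cost implied for a cortex-scale system. The human-cortex neuron count is my own assumption; the other inputs come from the roadmap passages above:

```python
# Back-of-envelope: chips and cost for a cortex-scale neuromorphic system.
# Chips-per-neuron and the $20M estimate come from the roadmap passages
# above; the human-cortex neuron count is my own assumption.

neurons_per_chip = 1e9 / 300   # ~300 chips for ~1e9 neurons (roadmap figure)
cortex_neurons = 2e10          # assumed human-cortex neuron count
total_ic_cost_usd = 20e6       # roadmap's ~$20M estimate for cortex ICs

chips_needed = cortex_neurons / neurons_per_chip
cost_per_chip = total_ic_cost_usd / chips_needed

print(f"Neurons per chip: ~{neurons_per_chip:.1e}")     # ~3.3e6
print(f"Chips for cortex: ~{chips_needed:.0f}")         # ~6000
print(f"Implied cost per chip: ~${cost_per_chip:.0f}")  # ~$3300
# A per-chip cost in the low thousands of dollars is plausible for large
# dies, which is why the $20M system figure reads as credible for large users.
```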

I have a casual observation to make: while the authors of the roadmap came to this conclusion, “This study concludes that useful neural computation machines based on biological principles at the size of the human brain seems technically within our grasp,” they’re also leaving themselves some wiggle room because the truth is no one knows if copying a human brain with circuits and various devices will lead to ‘thinking’ as we understand the concept.

For anyone who’s interested, you can search this blog for neuromorphic engineering, artificial brains, and/or memristors as I have many postings on these topics. One of my most recent on the topic of artificial brains is an April 7, 2014 piece titled: Brain-on-a-chip 2014 survey/overview.

One last observation about the movie ‘Transcendence’, has no one else noticed that it’s the ‘Easter’ story with a resurrected and digitized ‘Jesus’?

* Space inserted between ‘brains’ and ‘from’ in head on April 21, 2014.