Tag Archives: Stanford University

3-D integration of nanotechnologies on a single computer chip

By integrating nanomaterials, researchers have developed a new technique for building a 3D computer chip capable of handling today’s huge amounts of data. Weirdly, the first two paragraphs of a July 5, 2017 news item on Nanowerk do not convey the main point (Note: A link has been removed),

As embedded intelligence is finding its way into ever more areas of our lives, fields ranging from autonomous driving to personalized medicine are generating huge amounts of data. But just as the flood of data is reaching massive proportions, the ability of computer chips to process it into useful information is stalling.

Now, researchers at Stanford University and MIT have built a new chip to overcome this hurdle. The results are published today in the journal Nature (“Three-dimensional integration of nanotechnologies for computing and data storage on a single chip”), by lead author Max Shulaker, an assistant professor of electrical engineering and computer science at MIT. Shulaker began the work as a PhD student alongside H.-S. Philip Wong and his advisor Subhasish Mitra, professors of electrical engineering and computer science at Stanford. The team also included professors Roger Howe and Krishna Saraswat, also from Stanford.

This image helps to convey the main points,

Instead of relying on silicon-based devices, a new chip uses carbon nanotubes and resistive random-access memory (RRAM) cells. The two are built vertically over one another, making a new, dense 3-D computer architecture with interleaving layers of logic and memory. Courtesy MIT

As I have been quite impressed with their science writing, it was a bit surprising to find that the Massachusetts Institute of Technology (MIT) had issued this news release (news item) without following the ‘rules’, i.e., covering as many of the journalistic questions (Who, What, Where, When, Why, and, sometimes, How) as possible in the first sentence/paragraph. It is written more in the style of a magazine article, so the details take a while to emerge, from a July 5, 2017 MIT news release, which originated the news item,

Computers today comprise different chips cobbled together. There is a chip for computing and a separate chip for data storage, and the connections between the two are limited. As applications analyze increasingly massive volumes of data, the limited rate at which data can be moved between different chips is creating a critical communication “bottleneck.” And with limited real estate on the chip, there is not enough room to place them side-by-side, even as they have been miniaturized (a phenomenon known as Moore’s Law).

To make matters worse, the underlying devices, transistors made from silicon, are no longer improving at the historic rate that they have for decades.

The new prototype chip is a radical change from today’s chips. It uses multiple nanotechnologies, together with a new computer architecture, to reverse both of these trends.

Instead of relying on silicon-based devices, the chip uses carbon nanotubes, which are sheets of 2-D graphene formed into nanocylinders, and resistive random-access memory (RRAM) cells, a type of nonvolatile memory that operates by changing the resistance of a solid dielectric material. The researchers integrated over 1 million RRAM cells and 2 million carbon nanotube field-effect transistors, making the most complex nanoelectronic system ever made with emerging nanotechnologies.

The RRAM and carbon nanotubes are built vertically over one another, making a new, dense 3-D computer architecture with interleaving layers of logic and memory. By inserting ultradense wires between these layers, this 3-D architecture promises to address the communication bottleneck.

However, such an architecture is not possible with existing silicon-based technology, according to the paper’s lead author, Max Shulaker, who is a core member of MIT’s Microsystems Technology Laboratories. “Circuits today are 2-D, since building conventional silicon transistors involves extremely high temperatures of over 1,000 degrees Celsius,” says Shulaker. “If you then build a second layer of silicon circuits on top, that high temperature will damage the bottom layer of circuits.”

The key in this work is that carbon nanotube circuits and RRAM memory can be fabricated at much lower temperatures, below 200 °C. “This means they can be built up in layers without harming the circuits beneath,” Shulaker says.

This provides several simultaneous benefits for future computing systems. “The devices are better: Logic made from carbon nanotubes can be an order of magnitude more energy-efficient compared to today’s logic made from silicon, and similarly, RRAM can be denser, faster, and more energy-efficient compared to DRAM,” Wong says, referring to a conventional memory known as dynamic random-access memory.

“In addition to improved devices, 3-D integration can address another key consideration in systems: the interconnects within and between chips,” Saraswat adds.

“The new 3-D computer architecture provides dense and fine-grained integration of computing and data storage, drastically overcoming the bottleneck from moving data between chips,” Mitra says. “As a result, the chip is able to store massive amounts of data and perform on-chip processing to transform a data deluge into useful information.”

To demonstrate the potential of the technology, the researchers took advantage of the ability of carbon nanotubes to also act as sensors. On the top layer of the chip they placed over 1 million carbon nanotube-based sensors, which they used to detect and classify ambient gases.

Due to the layering of sensing, data storage, and computing, the chip was able to measure each of the sensors in parallel, and then write directly into its memory, generating huge bandwidth, Shulaker says.

Three-dimensional integration is the most promising approach to continue the technology scaling path set forth by Moore’s Law, allowing an increasing number of devices to be integrated per unit volume, according to Jan Rabaey, a professor of electrical engineering and computer science at the University of California at Berkeley, who was not involved in the research.

“It leads to a fundamentally different perspective on computing architectures, enabling an intimate interweaving of memory and logic,” Rabaey says. “These structures may be particularly suited for alternative learning-based computational paradigms such as brain-inspired systems and deep neural nets, and the approach presented by the authors is definitely a great first step in that direction.”

“One big advantage of our demonstration is that it is compatible with today’s silicon infrastructure, both in terms of fabrication and design,” says Howe.

“The fact that this strategy is both CMOS [complementary metal-oxide-semiconductor] compatible and viable for a variety of applications suggests that it is a significant step in the continued advancement of Moore’s Law,” says Ken Hansen, president and CEO of the Semiconductor Research Corporation, which supported the research. “To sustain the promise of Moore’s Law economics, innovative heterogeneous approaches are required as dimensional scaling is no longer sufficient. This pioneering work embodies that philosophy.”

The team is working to improve the underlying nanotechnologies, while exploring the new 3-D computer architecture. For Shulaker, the next step is working with Massachusetts-based semiconductor company Analog Devices to develop new versions of the system that take advantage of its ability to carry out sensing and data processing on the same chip.

So, for example, the devices could be used to detect signs of disease by sensing particular compounds in a patient’s breath, says Shulaker.

“The technology could not only improve traditional computing, but it also opens up a whole new range of applications that we can target,” he says. “My students are now investigating how we can produce chips that do more than just computing.”

“This demonstration of the 3-D integration of sensors, memory, and logic is an exceptionally innovative development that leverages current CMOS technology with the new capabilities of carbon nanotube field–effect transistors,” says Sam Fuller, CTO emeritus of Analog Devices, who was not involved in the research. “This has the potential to be the platform for many revolutionary applications in the future.”

This work was funded by the Defense Advanced Research Projects Agency [DARPA], the National Science Foundation, Semiconductor Research Corporation, STARnet SONIC, and member companies of the Stanford SystemX Alliance.

Here’s a link to and a citation for the paper,

Three-dimensional integration of nanotechnologies for computing and data storage on a single chip by Max M. Shulaker, Gage Hills, Rebecca S. Park, Roger T. Howe, Krishna Saraswat, H.-S. Philip Wong, & Subhasish Mitra. Nature 547, 74–78 (06 July 2017) doi:10.1038/nature22994 Published online 05 July 2017

This paper is behind a paywall.

Machine learning programs learn bias

The notion of bias in artificial intelligence (AI)/algorithms/robots is gaining prominence (links to other posts featuring algorithms and bias are at the end of this post). The latest research concerns machine learning where an artificial intelligence system trains itself with ordinary human language from the internet. From an April 13, 2017 American Association for the Advancement of Science (AAAS) news release on EurekAlert,

As artificial intelligence systems “learn” language from existing texts, they exhibit the same biases that humans do, a new study reveals. The results not only provide a tool for studying prejudicial attitudes and behavior in humans, but also emphasize how language is intimately intertwined with historical biases and cultural stereotypes. A common way to measure biases in humans is the Implicit Association Test (IAT), where subjects are asked to pair two concepts they find similar, in contrast to two concepts they find different; their response times can vary greatly, indicating how well they associated one word with another (for example, people are more likely to associate “flowers” with “pleasant,” and “insects” with “unpleasant”). Here, Aylin Caliskan and colleagues developed a similar way to measure biases in AI systems that acquire language from human texts; rather than measuring lag time, however, they used the statistical number of associations between words, analyzing roughly 2.2 million words in total. Their results demonstrate that AI systems retain biases seen in humans. For example, studies of human behavior show that the exact same resume is 50% more likely to result in an opportunity for an interview if the candidate’s name is European American rather than African-American. Indeed, the AI system was more likely to associate European American names with “pleasant” stimuli (e.g. “gift,” or “happy”). In terms of gender, the AI system also reflected human biases, where female words (e.g., “woman” and “girl”) were more associated than male words with the arts, compared to mathematics. In a related Perspective, Anthony G. Greenwald discusses these findings and how they could be used to further analyze biases in the real world.

There are more details about the research in this April 13, 2017 Princeton University news release on EurekAlert (also on ScienceDaily),

In debates over the future of artificial intelligence, many experts think of the new systems as coldly logical and objectively rational. But in a new study, researchers have demonstrated how machines can be reflections of us, their creators, in potentially problematic ways. Common machine learning programs, when trained with ordinary human language available online, can acquire cultural biases embedded in the patterns of wording, the researchers found. These biases range from the morally neutral, like a preference for flowers over insects, to objectionable views of race and gender.

Identifying and addressing possible bias in machine learning will be critically important as we increasingly turn to computers for processing the natural language humans use to communicate, for instance in doing online text searches, image categorization and automated translations.

“Questions about fairness and bias in machine learning are tremendously important for our society,” said researcher Arvind Narayanan, an assistant professor of computer science and an affiliated faculty member at the Center for Information Technology Policy (CITP) at Princeton University, as well as an affiliate scholar at Stanford Law School’s Center for Internet and Society. “We have a situation where these artificial intelligence systems may be perpetuating historical patterns of bias that we might find socially unacceptable and which we might be trying to move away from.”

The paper, “Semantics derived automatically from language corpora contain human-like biases,” was published April 14 [2017] in Science. Its lead author is Aylin Caliskan, a postdoctoral research associate and a CITP fellow at Princeton; Joanna Bryson, a reader at the University of Bath and a CITP affiliate, is a coauthor.

As a touchstone for documented human biases, the study turned to the Implicit Association Test, used in numerous social psychology studies since its development at the University of Washington in the late 1990s. The test measures response times (in milliseconds) by human subjects asked to pair word concepts displayed on a computer screen. Response times are far shorter, the Implicit Association Test has repeatedly shown, when subjects are asked to pair two concepts they find similar, versus two concepts they find dissimilar.

Take flower types, like “rose” and “daisy,” and insects like “ant” and “moth.” These words can be paired with pleasant concepts, like “caress” and “love,” or unpleasant notions, like “filth” and “ugly.” People more quickly associate the flower words with pleasant concepts, and the insect terms with unpleasant ideas.

The Princeton team devised an experiment using a program that essentially functioned as a machine learning version of the Implicit Association Test. Called GloVe, and developed by Stanford University researchers, the popular, open-source program is the sort a startup machine learning company might use at the heart of its product. The GloVe algorithm represents the co-occurrence statistics of words in, say, a 10-word window of text. Words that often appear near one another have a stronger association than words that seldom do.
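GloVe itself goes a step further, fitting word vectors to global co-occurrence counts, but the windowed counting described above is easy to illustrate. Here’s a minimal Python sketch; the toy sentence and the window size are placeholders standing in for a web-scale corpus:

```python
from collections import Counter

def cooccurrences(tokens, window=10):
    """Count how often each pair of words appears within `window` positions."""
    counts = Counter()
    for i, word in enumerate(tokens):
        for neighbour in tokens[i + 1 : i + 1 + window]:
            counts[tuple(sorted((word, neighbour)))] += 1
    return counts

text = "the nurse helped the patient while the engineer tested the new scanner"
for pair, n in cooccurrences(text.split(), window=5).most_common(5):
    print(pair, n)
```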

The Stanford researchers turned GloVe loose on a huge trawl of contents from the World Wide Web, containing 840 billion words. Within this large sample of written human culture, Narayanan and colleagues then examined sets of so-called target words, like “programmer, engineer, scientist” and “nurse, teacher, librarian” alongside two sets of attribute words, such as “man, male” and “woman, female,” looking for evidence of the kinds of biases humans can unwittingly possess.
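The measure the paper builds on top of such vectors, the Word-Embedding Association Test (WEAT), swaps the IAT’s response times for cosine similarities between target and attribute words. Here’s a stripped-down sketch of that comparison; the four-dimensional vectors are invented placeholders, whereas the study used 300-dimensional GloVe vectors derived from the 840-billion-word crawl:

```python
import numpy as np

# Toy word vectors, invented for illustration only.
vectors = {
    "rose":   np.array([0.9, 0.1, 0.0, 0.2]),
    "daisy":  np.array([0.8, 0.2, 0.1, 0.1]),
    "ant":    np.array([0.1, 0.9, 0.1, 0.0]),
    "moth":   np.array([0.2, 0.8, 0.0, 0.1]),
    "caress": np.array([0.9, 0.2, 0.1, 0.1]),
    "love":   np.array([0.8, 0.1, 0.2, 0.2]),
    "filth":  np.array([0.1, 0.8, 0.1, 0.1]),
    "ugly":   np.array([0.2, 0.9, 0.2, 0.0]),
}

def cosine(u, v):
    """Cosine similarity: the embedding-space stand-in for IAT response time."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(word, pleasant, unpleasant):
    """Mean similarity to the pleasant set minus mean similarity to the unpleasant set."""
    mean_sim = lambda attrs: np.mean([cosine(vectors[word], vectors[a]) for a in attrs])
    return mean_sim(pleasant) - mean_sim(unpleasant)

pleasant, unpleasant = ["caress", "love"], ["filth", "ugly"]
for target in ("rose", "daisy", "ant", "moth"):
    # Positive scores mean the word sits closer to the pleasant attributes.
    print(f"{target:6s} {association(target, pleasant, unpleasant):+.3f}")
```

Run on real GloVe vectors, the flower words score positive and the insect words negative, which is the embedding analogue of the IAT result described above.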

In the results, innocent, inoffensive biases, like for flowers over bugs, showed up, but so did examples along lines of gender and race. As it turned out, the Princeton machine learning experiment managed to replicate the broad substantiations of bias found in select Implicit Association Test studies over the years that have relied on live, human subjects.

For instance, the machine learning program associated female names more with familial attribute words, like “parents” and “wedding,” than male names. In turn, male names had stronger associations with career attributes, like “professional” and “salary.” Of course, results such as these are often just objective reflections of the true, unequal distributions of occupation types with respect to gender – like how 77 percent of computer programmers are male, according to the U.S. Bureau of Labor Statistics.

Yet this correctly distinguished bias about occupations can end up having pernicious, sexist effects. An example: machine learning programs that naively process foreign languages can produce gender-stereotyped sentences. The Turkish language uses a gender-neutral, third person pronoun, “o.” Plugged into the well-known, online translation service Google Translate, however, the Turkish sentences “o bir doktor” and “o bir hemşire” with this gender-neutral pronoun are translated into English as “he is a doctor” and “she is a nurse.”

“This paper reiterates the important point that machine learning methods are not ‘objective’ or ‘unbiased’ just because they rely on mathematics and algorithms,” said Hanna Wallach, a senior researcher at Microsoft Research New York City, who was not involved in the study. “Rather, as long as they are trained using data from society and as long as society exhibits biases, these methods will likely reproduce these biases.”

Another objectionable example harkens back to a well-known 2004 paper by Marianne Bertrand of the University of Chicago Booth School of Business and Sendhil Mullainathan of Harvard University. The economists sent out close to 5,000 identical resumes to 1,300 job advertisements, changing only the applicants’ names to be either traditionally European American or African American. The former group was 50 percent more likely to be offered an interview than the latter. In an apparent corroboration of this bias, the new Princeton study demonstrated that a set of African American names had more unpleasantness associations than a European American set.

Computer programmers might hope to prevent cultural stereotype perpetuation through the development of explicit, mathematics-based instructions for the machine learning programs underlying AI systems. Not unlike how parents and mentors try to instill concepts of fairness and equality in children and students, coders could endeavor to make machines reflect the better angels of human nature.

“The biases that we studied in the paper are easy to overlook when designers are creating systems,” said Narayanan. “The biases and stereotypes in our society reflected in our language are complex and longstanding. Rather than trying to sanitize or eliminate them, we should treat biases as part of the language and establish an explicit way in machine learning of determining what we consider acceptable and unacceptable.”

Here’s a link to and a citation for the Princeton paper,

Semantics derived automatically from language corpora contain human-like biases by Aylin Caliskan, Joanna J. Bryson, Arvind Narayanan. Science  14 Apr 2017: Vol. 356, Issue 6334, pp. 183-186 DOI: 10.1126/science.aal4230

This paper appears to be open access.

Links to more cautionary posts about AI,

Aug. 5, 2009: Autonomous algorithms; intelligent windows; pretty nano pictures

June 14, 2016: Accountability for artificial intelligence decision-making

Oct. 25, 2016: Removing gender-based stereotypes from algorithms

March 1, 2017: Algorithms in decision-making: a government inquiry in the UK

There’s also a book that makes some of the current uses of AI programmes and big data quite accessible reading: Cathy O’Neil’s ‘Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy’.

Game-changing electronics: new ultrafast, flexible, and transparent devices

There are two news bits about game-changing electronics, one from the UK and the other from the US.

United Kingdom (UK)

An April 3, 2017 news item on Azonano announces the possibility of a future golden age of electronics courtesy of the University of Exeter,

Engineering experts from the University of Exeter have come up with a breakthrough way to create the smallest, quickest, highest-capacity memories for transparent and flexible applications that could lead to a future golden age of electronics.

A March 31, 2017 University of Exeter press release (also on EurekAlert), which originated the news item, expands on the theme (Note: Links have been removed),

Engineering experts from the University of Exeter have developed innovative new memory using a hybrid of graphene oxide and titanium oxide. Their devices are low cost and eco-friendly to produce, are also perfectly suited for use in flexible electronic devices such as ‘bendable’ mobile phone, computer and television screens, and even ‘intelligent’ clothing.

Crucially, these devices may also have the potential to offer a cheaper and more adaptable alternative to ‘flash memory’, which is currently used in many common devices such as memory cards, graphics cards and USB computer drives.

The research team insist that these innovative new devices have the potential to revolutionise not only how data is stored, but also take flexible electronics to a new age in terms of speed, efficiency and power.

Professor David Wright, an Electronic Engineering expert from the University of Exeter and lead author of the paper said: “Using graphene oxide to produce memory devices has been reported before, but they were typically very large, slow, and aimed at the ‘cheap and cheerful’ end of the electronics goods market.

“Our hybrid graphene oxide-titanium oxide memory is, in contrast, just 50 nanometres long and 8 nanometres thick and can be written to and read from in less than five nanoseconds – with one nanometre being one billionth of a metre and one nanosecond a billionth of a second.”

Professor Craciun, a co-author of the work, added: “Being able to improve data storage is the backbone of tomorrow’s knowledge economy, as well as industry on a global scale. Our work offers the opportunity to completely transform graphene-oxide memory technology, and the potential and possibilities it offers.”

Here’s a link to and a citation for the paper,

Multilevel Ultrafast Flexible Nanoscale Nonvolatile Hybrid Graphene Oxide–Titanium Oxide Memories by V. Karthik Nagareddy, Matthew D. Barnes, Federico Zipoli, Khue T. Lai, Arseny M. Alexeev, Monica Felicia Craciun, and C. David Wright. ACS Nano, 2017, 11 (3), pp 3010–3021 DOI: 10.1021/acsnano.6b08668 Publication Date (Web): February 21, 2017

Copyright © 2017 American Chemical Society

This paper appears to be open access.

United States (US)

Researchers from Stanford University have developed flexible, biodegradable electronics.

A newly developed flexible, biodegradable semiconductor developed by Stanford engineers shown on a human hair. (Image credit: Bao lab)

A human hair? That’s amazing and this May 3, 2017 news item on Nanowerk reveals more,

As electronics become increasingly pervasive in our lives – from smart phones to wearable sensors – so too does the ever-rising amount of electronic waste they create. A United Nations Environment Programme report found that almost 50 million tons of electronic waste are expected to be thrown out in 2017 – more than 20 percent higher than in 2015.

Troubled by this mounting waste, Stanford engineer Zhenan Bao and her team are rethinking electronics. “In my group, we have been trying to mimic the function of human skin to think about how to develop future electronic devices,” Bao said. She described how skin is stretchable, self-healable and also biodegradable – an attractive list of characteristics for electronics. “We have achieved the first two [flexible and self-healing], so the biodegradability was something we wanted to tackle.”

The team created a flexible electronic device that can easily degrade just by adding a weak acid like vinegar. The results were published in the Proceedings of the National Academy of Sciences (“Biocompatible and totally disintegrable semiconducting polymer for ultrathin and ultralightweight transient electronics”).

“This is the first example of a semiconductive polymer that can decompose,” said lead author Ting Lei, a postdoctoral fellow working with Bao.

A May 1, 2017 Stanford University news release by Sarah Derouin, which originated the news item, provides more detail,

In addition to the polymer – essentially a flexible, conductive plastic – the team developed a degradable electronic circuit and a new biodegradable substrate material for mounting the electrical components. This substrate supports the electrical components, flexing and molding to rough and smooth surfaces alike. When the electronic device is no longer needed, the whole thing can biodegrade into nontoxic components.

Biodegradable bits

Bao, a professor of chemical engineering and materials science and engineering, had previously created a stretchable electrode modeled on human skin. That material could bend and twist in a way that could allow it to interface with the skin or brain, but it couldn’t degrade. That limited its application for implantable devices and – important to Bao – contributed to waste.

Flexible, biodegradable semiconductor on an avocado

The flexible semiconductor can adhere to smooth or rough surfaces and biodegrade to nontoxic products. (Image credit: Bao lab)

Bao said that creating a robust material that is both a good electrical conductor and biodegradable was a challenge, considering traditional polymer chemistry. “We have been trying to think how we can achieve both great electronic property but also have the biodegradability,” Bao said.

Eventually, the team found that by tweaking the chemical structure of the flexible material it would break apart under mild stressors. “We came up with an idea of making these molecules using a special type of chemical linkage that can retain the ability for the electron to smoothly transport along the molecule,” Bao said. “But also this chemical bond is sensitive to weak acid – even weaker than pure vinegar.” The result was a material that could carry an electronic signal but break down without requiring extreme measures.

In addition to the biodegradable polymer, the team developed a new type of electrical component and a substrate material that attaches to the entire electronic component. Electronic components are usually made of gold. But for this device, the researchers crafted components from iron. Bao noted that iron is a very environmentally friendly product and is nontoxic to humans.

The researchers created the substrate, which carries the electronic circuit and the polymer, from cellulose. Cellulose is the same substance that makes up paper. But unlike paper, the team altered cellulose fibers so the “paper” is transparent and flexible, while still breaking down easily. The thin film substrate allows the electronics to be worn on the skin or even implanted inside the body.

From implants to plants

The combination of a biodegradable conductive polymer and substrate makes the electronic device useful in a plethora of settings – from wearable electronics to large-scale environmental surveys with sensor dusts.

“We envision these soft patches that are very thin and conformable to the skin that can measure blood pressure, glucose value, sweat content,” Bao said. A person could wear a specifically designed patch for a day or week, then download the data. According to Bao, this short-term use of disposable electronics seems a perfect fit for a degradable, flexible design.

And it’s not just for skin surveys: the biodegradable substrate, polymers and iron electrodes make the entire component compatible with insertion into the human body. The polymer breaks down to product concentrations much lower than the published acceptable levels found in drinking water. Although the polymer was found to be biocompatible, Bao said that more studies would need to be done before implants are a regular occurrence.

Biodegradable electronics have the potential to go far beyond collecting heart disease and glucose data. These components could be used in places where surveys cover large areas in remote locations. Lei described a research scenario where biodegradable electronics are dropped by airplane over a forest to survey the landscape. “It’s a very large area and very hard for people to spread the sensors,” he said. “Also, if you spread the sensors, it’s very hard to gather them back. You don’t want to contaminate the environment so we need something that can be decomposed.” Instead of plastic littering the forest floor, the sensors would biodegrade away.

As the number of electronic devices increases, biodegradability will become more important. Lei is excited by their advancements and wants to keep improving the performance of biodegradable electronics. “We currently have computers and cell phones and we generate millions and billions of cell phones, and it’s hard to decompose,” he said. “We hope we can develop some materials that can be decomposed so there is less waste.”

Other authors on the study include Ming Guan, Jia Liu, Hung-Cheng Lin, Raphael Pfattner, Leo Shaw, Allister McGuire, and Jeffrey Tok of Stanford University; Tsung-Ching Huang of Hewlett Packard Enterprise; and Lei-Lai Shao and Kwang-Ting Cheng of University of California, Santa Barbara.

The research was funded by the Air Force Office for Scientific Research; BASF; Marie Curie Cofund; Beatriu de Pinós fellowship; and the Kodak Graduate Fellowship.

Here’s a link to and a citation for the team’s latest paper,

Biocompatible and totally disintegrable semiconducting polymer for ultrathin and ultralightweight transient electronics by Ting Lei, Ming Guan, Jia Liu, Hung-Cheng Lin, Raphael Pfattner, Leo Shaw, Allister F. McGuire, Tsung-Ching Huang, Leilai Shao, Kwang-Ting Cheng, Jeffrey B.-H. Tok, and Zhenan Bao. PNAS 2017 doi: 10.1073/pnas.1701478114 published ahead of print May 1, 2017

This paper is behind a paywall.

The mention of cellulose in the second item piqued my interest so I checked to see if they’d used nanocellulose. No, they did not. They used microcrystalline cellulose powder to make a cellulose film, and then found a way to render that film ultrathin, at the nanoscale. From the Stanford paper (Note: Links have been removed),

… Moreover, cellulose films have been previously used as biodegradable substrates in electronics (28–30). However, these cellulose films are typically made with thicknesses well over 10 μm and thus cannot be used to fabricate ultrathin electronics with substrate thicknesses below 1–2 μm (7, 18, 19). To the best of our knowledge, there have been no reports on ultrathin (1–2 μm) biodegradable substrates for electronics. Thus, to realize them, we subsequently developed a method described herein to obtain ultrathin (800 nm) cellulose films (Fig. 1B and SI Appendix, Fig. S8). First, microcrystalline cellulose powders were dissolved in LiCl/N,N-dimethylacetamide (DMAc) and reacted with hexamethyldisilazane (HMDS) (31, 32), providing trimethylsilyl-functionalized cellulose (TMSC) (Fig. 1B). To fabricate films or devices, TMSC in chlorobenzene (CB) (70 mg/mL) was spin-coated on a thin dextran sacrificial layer. The TMSC film was measured to be 1.2 μm. After hydrolyzing the film in 95% acetic acid vapor for 2 h, the trimethylsilyl groups were removed, giving a 400-nm-thick cellulose film. The film thickness significantly decreased to one-third of the original film thickness, largely due to the removal of the bulky trimethylsilyl groups. The hydrolyzed cellulose film is insoluble in most organic solvents, for example, toluene, THF, chloroform, CB, and water. Thus, we can sequentially repeat the above steps to obtain an 800-nm-thick film, which is robust enough for further device fabrication and peel-off. By soaking the device in water, the dextran layer is dissolved, starting from the edges of the device to the center. This process ultimately releases the ultrathin substrate and leaves it floating on water surface (Fig. 3A, Inset).

Finally, I don’t have any grand thoughts; it’s just interesting to see different approaches to flexible electronics.

Vector Institute and Canada’s artificial intelligence sector

On the heels of the March 22, 2017 federal budget announcement of $125M for a Pan-Canadian Artificial Intelligence Strategy, the University of Toronto (U of T) has announced the inception of the Vector Institute for Artificial Intelligence in a March 28, 2017 news release by Jennifer Robinson (Note: Links have been removed),

A team of globally renowned researchers at the University of Toronto is driving the planning of a new institute staking Toronto’s and Canada’s claim as the global leader in AI.

Geoffrey Hinton, a University Professor Emeritus in computer science at U of T and vice-president engineering fellow at Google, will serve as the chief scientific adviser of the newly created Vector Institute based in downtown Toronto.

“The University of Toronto has long been considered a global leader in artificial intelligence research,” said U of T President Meric Gertler. “It’s wonderful to see that expertise act as an anchor to bring together researchers, government and private sector actors through the Vector Institute, enabling them to aim even higher in leading advancements in this fast-growing, critical field.”

As part of the Government of Canada’s Pan-Canadian Artificial Intelligence Strategy, Vector will share $125 million in federal funding with fellow institutes in Montreal and Edmonton. All three will conduct research and secure talent to cement Canada’s position as a world leader in AI.

In addition, Vector is expected to receive funding from the Province of Ontario and more than 30 top Canadian and global companies eager to tap this pool of talent to grow their businesses. The institute will also work closely with other Ontario universities with AI talent.

(See my March 24, 2017 posting; scroll down about 25% for the science part, including the Pan-Canadian Artificial Intelligence Strategy of the budget.)

Not obvious in last week’s coverage of the Pan-Canadian Artificial Intelligence Strategy is that the much lauded Hinton has been living in the US and working for Google. These latest announcements (Pan-Canadian AI Strategy and Vector Institute) mean that he’s moving back.

A March 28, 2017 article by Kate Allen for TorontoStar.com provides more details about the Vector Institute, Hinton, and the Canadian ‘brain drain’ as it applies to artificial intelligence (Note: A link has been removed),

Toronto will host a new institute devoted to artificial intelligence, a major gambit to bolster a field of research pioneered in Canada but consistently drained of talent by major U.S. technology companies like Google, Facebook and Microsoft.

The Vector Institute, an independent non-profit affiliated with the University of Toronto, will hire about 25 new faculty and research scientists. It will be backed by more than $150 million in public and corporate funding in an unusual hybridization of pure research and business-minded commercial goals.

The province will spend $50 million over five years, while the federal government, which announced a $125-million Pan-Canadian Artificial Intelligence Strategy in last week’s budget, is providing at least $40 million, backers say. More than two dozen companies have committed millions more over 10 years, including $5 million each from sponsors including Google, Air Canada, Loblaws, and Canada’s five biggest banks [Bank of Montreal (BMO), Canadian Imperial Bank of Commerce (CIBC; President’s Choice Financial), Royal Bank of Canada (RBC), Scotiabank (Tangerine), Toronto-Dominion Bank (TD Canada Trust)].

The mode of artificial intelligence that the Vector Institute will focus on, deep learning, has seen remarkable results in recent years, particularly in image and speech recognition. Geoffrey Hinton, considered the “godfather” of deep learning for the breakthroughs he made while a professor at U of T, has worked for Google since 2013 in California and Toronto.

Hinton will move back to Canada to lead a research team based at the tech giant’s Toronto offices and act as chief scientific adviser of the new institute.

Researchers trained in Canadian artificial intelligence labs fill the ranks of major technology companies, working on tools like instant language translation, facial recognition, and recommendation services. Academic institutions and startups in Toronto, Waterloo, Montreal and Edmonton boast leaders in the field, but other researchers have left for U.S. universities and corporate labs.

The goals of the Vector Institute are to retain, repatriate and attract AI talent, to create more trained experts, and to feed that expertise into existing Canadian companies and startups.

Hospitals are expected to be a major partner, since health care is an intriguing application for AI. Last month, researchers from Stanford University announced they had trained a deep learning algorithm to identify potentially cancerous skin lesions with accuracy comparable to human dermatologists. The Toronto company Deep Genomics is using deep learning to read genomes and identify mutations that may lead to disease, among other things.

Intelligent algorithms can also be applied to tasks that might seem less virtuous, like reading private data to better target advertising. Zemel [Richard Zemel, the institute’s research director and a professor of computer science at U of T] says the centre is creating an ethics working group [emphasis mine] and maintaining ties with organizations that promote fairness and transparency in machine learning. As for privacy concerns, “that’s something we are well aware of. We don’t have a well-formed policy yet but we will fairly soon.”

The institute’s annual funding pales in comparison to the revenues of the American tech giants, which are measured in tens of billions. The risk the institute’s backers are taking is simply creating an even more robust machine learning PhD mill for the U.S.

“They obviously won’t all stay in Canada, but Toronto industry is very keen to get them,” Hinton said. “I think Trump might help there.” Two researchers on Hinton’s new Toronto-based team are Iranian, one of the countries targeted by U.S. President Donald Trump’s travel bans.

Ethics do seem to be a bit of an afterthought. Presumably the Vector Institute’s ‘ethics working group’ won’t include any regular folks. Is there any thought to what the rest of us think about these developments? As there will also be some collaboration with other proposed AI institutes, including ones at the University of Montreal (Université de Montréal) and the University of Alberta (Kate McGillivray’s article coming up shortly mentions them), might the ethics group be centered in either Edmonton or Montreal? Interestingly, two Canadians (Timothy Caulfield at the University of Alberta and Eric Racine at Université de Montréal) testified at the US Commission for the Study of Bioethical Issues’ Feb. 10 – 11, 2014 meeting on brain research, ethics, and nanotechnology. Still speculating here, but I imagine Caulfield and/or Racine could be persuaded to extend their expertise in ethics and the human brain to AI and its neural networks.

Getting back to the topic at hand, the AI scene in Canada, Allen’s article is worth reading in its entirety if you have the time.

Kate McGillivray’s March 29, 2017 article for the Canadian Broadcasting Corporation’s (CBC) news online provides more details about the Canadian AI situation and the new strategies,

With artificial intelligence set to transform our world, a new institute is putting Toronto to the front of the line to lead the charge.

The Vector Institute for Artificial Intelligence, made possible by funding from the federal government revealed in the 2017 budget, will move into new digs in the MaRS Discovery District by the end of the year.

Vector’s funding comes partially from a $125 million investment announced in last Wednesday’s federal budget to launch a pan-Canadian artificial intelligence strategy, with similar institutes being established in Montreal and Edmonton.

“[A.I.] cuts across pretty well every sector of the economy,” said Dr. Alan Bernstein, CEO and president of the Canadian Institute for Advanced Research, the organization tasked with administering the federal program.

“Silicon Valley and England and other places really jumped on it, so we kind of lost the lead a little bit. I think the Canadian federal government has now realized that,” he said.

Stopping up the brain drain

Critical to the strategy’s success is building a homegrown base of A.I. experts and innovators — a problem in the last decade, despite pioneering work on so-called “Deep Learning” by Canadian scholars such as Yoshua Bengio and Geoffrey Hinton, a former University of Toronto professor who will now serve as Vector’s chief scientific advisor.

With few university faculty positions in Canada and with many innovative companies headquartered elsewhere, it has been tough to keep the few graduates specializing in A.I. in town.

“We were paying to educate people and shipping them south,” explained Ed Clark, chair of the Vector Institute and business advisor to Ontario Premier Kathleen Wynne.

The existence of that “fantastic science” will lean heavily on how much buy-in Vector and Canada’s other two A.I. centres get.

Toronto’s portion of the $125 million is a “great start,” said Bernstein, but taken alone, “it’s not enough money.”

“My estimate of the right amount of money to make a difference is a half a billion or so, and I think we will get there,” he said.

Jessica Murphy’s March 29, 2017 article for the British Broadcasting Corporation’s (BBC) news online offers some intriguing detail about the Canadian AI scene,

Canadian researchers have been behind some recent major breakthroughs in artificial intelligence. Now, the country is betting on becoming a big player in one of the hottest fields in technology, with help from the likes of Google and RBC [Royal Bank of Canada].

In an unassuming building on the University of Toronto’s downtown campus, Geoff Hinton laboured for years on the “lunatic fringe” of academia and artificial intelligence, pursuing research in an area of AI called neural networks.

Also known as “deep learning”, neural networks are computer programs that learn in a similar way to human brains. The field showed early promise in the 1980s, but the tech sector turned its attention to other AI methods after that promise seemed slow to develop.

“The approaches that I thought were silly were in the ascendancy and the approach that I thought was the right approach was regarded as silly,” says the British-born [emphasis mine] professor, who splits his time between the university and Google, where he is a vice-president of engineering fellow.

Neural networks are used by the likes of Netflix to recommend what you should binge watch and smartphones with voice assistance tools. Google DeepMind’s AlphaGo AI used them to win against a human in the ancient game of Go in 2016.

Foteini Agrafioti, who heads up the new RBC Research in Machine Learning lab at the University of Toronto, said those recent innovations made AI attractive to researchers and the tech industry.

“Anything that’s powering Google’s engines right now is powered by deep learning,” she says.

Developments in the field helped jumpstart innovation and paved the way for the technology’s commercialisation. They also captured the attention of Google, IBM and Microsoft, and kicked off a hiring race in the field.

The renewed focus on neural networks has boosted the careers of early Canadian AI machine learning pioneers like Hinton, the University of Montreal’s Yoshua Bengio, and University of Alberta’s Richard Sutton.

Money from big tech is coming north, along with investments by domestic corporations like banking multinational RBC and auto parts giant Magna, and millions of dollars in government funding.

Former banking executive Ed Clark will head the institute, and says the goal is to make Toronto, which has the largest concentration of AI-related industries in Canada, one of the top five places in the world for AI innovation and business.

The founders also want it to serve as a magnet and retention tool for top talent aggressively head-hunted by US firms.

Clark says they want to “wake up” Canadian industry to the possibilities of AI, which is expected to have a massive impact on fields like healthcare, banking, manufacturing and transportation.

Google invested C$4.5m (US$3.4m/£2.7m) last November [2016] in the University of Montreal’s Montreal Institute for Learning Algorithms.

Microsoft is funding a Montreal startup, Element AI. The Seattle-based company also announced it would acquire Montreal-based Maluuba and help fund AI research at the University of Montreal and McGill University.

Thomson Reuters and General Motors both recently moved AI labs to Toronto.

RBC is also investing in the future of AI in Canada, including opening a machine learning lab headed by Agrafioti, co-funding a program to bring global AI talent and entrepreneurs to Toronto, and collaborating with Sutton and the University of Alberta’s Machine Intelligence Institute.

Canadian tech also sees the travel uncertainty created by the Trump administration in the US as making Canada more attractive to foreign talent. (One of Clark’s selling points is that Toronto is an “open and diverse” city.)

This may reverse the ‘brain drain’ but it appears Canada’s role as a ‘branch plant economy’ for foreign (usually US) companies could become an important discussion once more. From the ‘Foreign ownership of companies of Canada’ Wikipedia entry (Note: Links have been removed),

Historically, foreign ownership was a political issue in Canada in the late 1960s and early 1970s, when it was believed by some that U.S. investment had reached new heights (though its levels had actually remained stable for decades), and then in the 1980s, during debates over the Free Trade Agreement.

But the situation has changed, since in the interim period Canada itself became a major investor and owner of foreign corporations. Since the 1980s, Canada’s levels of investment and ownership in foreign companies have been larger than foreign investment and ownership in Canada. In some smaller countries, such as Montenegro, Canadian investment is sizable enough to make up a major portion of the economy. In Northern Ireland, for example, Canada is the largest foreign investor. By becoming foreign owners themselves, Canadians have become far less politically concerned about investment within Canada.

Of note is that Canada’s largest companies by value, and largest employers, tend to be foreign-owned in a way that is more typical of a developing nation than a G8 member. The best example is the automotive sector, one of Canada’s most important industries. It is dominated by American, German, and Japanese giants. Although this situation is not unique to Canada in the global context, it is unique among G-8 nations, and many other relatively small nations also have national automotive companies.

It’s interesting to note that sometimes Canadian companies are the big investors but that doesn’t change our basic position. And, as I’ve noted in other postings (including the March 24, 2017 posting), these government investments in science and technology won’t necessarily lead to a move away from our ‘branch plant economy’ towards an innovative Canada.

You can find out more about the Vector Institute for Artificial Intelligence here.

BTW, I noted that reference to Hinton as ‘British-born’ in the BBC article. He was educated in the UK and subsidized by UK taxpayers (from his Wikipedia entry; Note: Links have been removed),

Hinton was educated at King’s College, Cambridge, graduating in 1970 with a Bachelor of Arts in experimental psychology.[1] He continued his study at the University of Edinburgh where he was awarded a PhD in artificial intelligence in 1977 for research supervised by H. Christopher Longuet-Higgins.[3][12]

It seems Canadians are not the only ones to experience ‘brain drains’.

Finally, I wrote at length about a recent initiative taking place between the University of British Columbia (Vancouver, Canada) and the University of Washington (Seattle, Washington), the Cascadia Urban Analytics Cooperative, in a Feb. 28, 2017 posting, noting that the initiative is being funded by Microsoft to the tune of $1M and is part of a larger cooperative effort between the province of British Columbia and the state of Washington. Artificial intelligence is not the only area where US technology companies are hedging their bets (against Trump’s administration, which seems determined to terrify people from crossing US borders) by investing in Canada.

For anyone interested in a little more information about AI in the US and China, there’s today’s (March 31, 2017) earlier posting: China, US, and the race for artificial intelligence research domination.

High-performance, low-energy artificial synapse for neural network computing

This artificial synapse is apparently an improvement on the standard memristor-based artificial synapse, but that doesn’t become clear until you read the abstract for the paper. First, there’s a Feb. 20, 2017 Stanford University news release by Taylor Kubota (dated Feb. 21, 2017 on EurekAlert) (Note: Links have been removed),

For all the improvements in computer technology over the years, we still struggle to recreate the low-energy, elegant processing of the human brain. Now, researchers at Stanford University and Sandia National Laboratories have made an advance that could help computers mimic one piece of the brain’s efficient design – an artificial version of the space over which neurons communicate, called a synapse.

“It works like a real synapse but it’s an organic electronic device that can be engineered,” said Alberto Salleo, associate professor of materials science and engineering at Stanford and senior author of the paper. “It’s an entirely new family of devices because this type of architecture has not been shown before. For many key metrics, it also performs better than anything that’s been done before with inorganics.”

The new artificial synapse, reported in the Feb. 20 issue of Nature Materials, mimics the way synapses in the brain learn through the signals that cross them. This is a significant energy savings over traditional computing, which involves separately processing information and then storing it into memory. Here, the processing creates the memory.

This synapse may one day be part of a more brain-like computer, which could be especially beneficial for computing that works with visual and auditory signals. Examples of this are seen in voice-controlled interfaces and driverless cars. Past efforts in this field have produced high-performance neural networks supported by artificially intelligent algorithms but these are still distant imitators of the brain that depend on energy-consuming traditional computer hardware.

Building a brain

When we learn, electrical signals are sent between neurons in our brain. The most energy is needed the first time a synapse is traversed. Every time afterward, the connection requires less energy. This is how synapses efficiently facilitate both learning something new and remembering what we’ve learned. The artificial synapse, unlike most other versions of brain-like computing, also fulfills these two tasks simultaneously, and does so with substantial energy savings.

“Deep learning algorithms are very powerful but they rely on processors to calculate and simulate the electrical states and store them somewhere else, which is inefficient in terms of energy and time,” said Yoeri van de Burgt, former postdoctoral scholar in the Salleo lab and lead author of the paper. “Instead of simulating a neural network, our work is trying to make a neural network.”

The artificial synapse is based off a battery design. It consists of two thin, flexible films with three terminals, connected by an electrolyte of salty water. The device works as a transistor, with one of the terminals controlling the flow of electricity between the other two.

Like a neural path in a brain being reinforced through learning, the researchers program the artificial synapse by discharging and recharging it repeatedly. Through this training, they have been able to predict within 1 percent of uncertainty what voltage will be required to get the synapse to a specific electrical state and, once there, it remains at that state. In other words, unlike a common computer, where you save your work to the hard drive before you turn it off, the artificial synapse can recall its programming without any additional actions or parts.
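To make the programming idea concrete, here’s a toy control loop in Python. This is not the device’s electrochemistry, just an invented linear model of the behaviour described: each discharge/recharge pulse nudges the state toward a target, pulsing stops once the state is within 1 percent, and the state then persists on its own:

```python
def program_synapse(state, target, pulse_step=0.01, tolerance=0.01):
    """Nudge a non-volatile state toward `target` with repeated pulses.

    Toy linear model: each pulse shifts the state by `pulse_step` in the
    right direction; a real organic synapse follows its own device physics.
    """
    pulses = 0
    while abs(state - target) / target > tolerance:
        state += pulse_step if state < target else -pulse_step
        pulses += 1
    return state, pulses

state, pulses = program_synapse(state=0.20, target=0.75)
print(f"reached {state:.3f} after {pulses} pulses")  # state persists afterward
```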

Testing a network of artificial synapses

Only one artificial synapse has been produced but researchers at Sandia used 15,000 measurements from experiments on that synapse to simulate how an array of them would work in a neural network. They tested the simulated network’s ability to recognize handwriting of digits 0 through 9. Tested on three datasets, the simulated array was able to identify the handwritten digits with an accuracy between 93 and 97 percent.
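The digit-recognition task itself is a standard benchmark. The snippet below is not the Sandia simulation (which modeled an array built from the 15,000 device measurements); it simply reproduces the same classification problem with a small conventional neural network on scikit-learn’s bundled 8×8 digit images, and it typically lands in the same 93–97 percent range:

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# 1,797 8x8 grayscale images of the handwritten digits 0 through 9.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# One small hidden layer; in the simulated hardware, an array of artificial
# synapses would hold weights like these as analog conductance states.
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.3f}")
```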

Although this task would be relatively simple for a person, traditional computers have a difficult time interpreting visual and auditory signals.

“More and more, the kinds of tasks that we expect our computing devices to do require computing that mimics the brain because using traditional computing to perform these tasks is becoming really power hungry,” said A. Alec Talin, distinguished member of technical staff at Sandia National Laboratories in Livermore, California, and senior author of the paper. “We’ve demonstrated a device that’s ideal for running these type of algorithms and that consumes a lot less power.”

This device is extremely well suited for the kind of signal identification and classification that traditional computers struggle to perform. Whereas digital transistors can be in only two states, such as 0 and 1, the researchers successfully programmed 500 states in the artificial synapse, which is useful for neuron-type computation models. In switching from one state to another they used about one-tenth as much energy as a state-of-the-art computing system needs in order to move data from the processing unit to the memory.

This, however, means they are still using about 10,000 times as much energy as the minimum a biological synapse needs in order to fire. The researchers are hopeful that they can attain neuron-level energy efficiency once they test the artificial synapse in smaller devices.
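For a rough sense of that gap, pair the switching energy reported in the paper’s abstract (under 10 pJ for the devices tested) with the ~1 fJ per synaptic event at the low end of the range usually credited to the brain; this back-of-envelope ratio, using only the figures quoted here, gives

\[
\frac{E_{\text{device}}}{E_{\text{synapse}}} \approx \frac{10\ \text{pJ}}{1\ \text{fJ}} = \frac{10 \times 10^{-12}\ \text{J}}{1 \times 10^{-15}\ \text{J}} = 10^{4},
\]

which is where the “about 10,000 times” figure comes from.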

Organic potential

Every part of the device is made of inexpensive organic materials. These aren’t found in nature but they are largely composed of hydrogen and carbon and are compatible with the brain’s chemistry. Cells have been grown on these materials and they have even been used to make artificial pumps for neural transmitters. The voltages applied to train the artificial synapse are also the same as those that move through human neurons.

All this means it’s possible that the artificial synapse could communicate with live neurons, leading to improved brain-machine interfaces. The softness and flexibility of the device also lends itself to being used in biological environments. Before any applications to biology, however, the team plans to build an actual array of artificial synapses for further research and testing.

Additional Stanford co-authors of this work include co-lead author Ewout Lubberman, also of the University of Groningen in the Netherlands, Scott T. Keene and Grégorio C. Faria, also of Universidade de São Paulo, in Brazil. Sandia National Laboratories co-authors include Elliot J. Fuller and Sapan Agarwal in Livermore and Matthew J. Marinella in Albuquerque, New Mexico. Salleo is an affiliate of the Stanford Precourt Institute for Energy and the Stanford Neurosciences Institute. Van de Burgt is now an assistant professor in microsystems and an affiliate of the Institute for Complex Molecular Studies (ICMS) at Eindhoven University of Technology in the Netherlands.

This research was funded by the National Science Foundation, the Keck Faculty Scholar Funds, the Neurofab at Stanford, the Stanford Graduate Fellowship, Sandia’s Laboratory-Directed Research and Development Program, the U.S. Department of Energy, the Holland Scholarship, the University of Groningen Scholarship for Excellent Students, the Hendrik Muller National Fund, the Schuurman Schimmel-van Outeren Foundation, the Foundation of Renswoude (The Hague and Delft), the Marco Polo Fund, the Instituto Nacional de Ciência e Tecnologia/Instituto Nacional de Eletrônica Orgânica in Brazil, the Fundação de Amparo à Pesquisa do Estado de São Paulo and the Brazilian National Council.

Here’s an abstract for the researchers’ paper (link to paper provided after abstract) and it’s where you’ll find the memristor connection explained,

The brain is capable of massively parallel information processing while consuming only ~1–100 fJ per synaptic event [1, 2]. Inspired by the efficiency of the brain, CMOS-based neural architectures [3] and memristors [4, 5] are being developed for pattern recognition and machine learning. However, the volatility, design complexity and high supply voltages for CMOS architectures, and the stochastic and energy-costly switching of memristors complicate the path to achieve the interconnectivity, information density, and energy efficiency of the brain using either approach. Here we describe an electrochemical neuromorphic organic device (ENODe) operating with a fundamentally different mechanism from existing memristors. ENODe switches at low voltage and energy (<10 pJ for 10³ μm² devices), displays >500 distinct, non-volatile conductance states within a ~1 V range, and achieves high classification accuracy when implemented in neural network simulations. Plastic ENODes are also fabricated on flexible substrates enabling the integration of neuromorphic functionality in stretchable electronic systems [6, 7]. Mechanical flexibility makes ENODes compatible with three-dimensional architectures, opening a path towards extreme interconnectivity comparable to the human brain.

Here’s a link to and a citation for the paper,

A non-volatile organic electrochemical device as a low-voltage artificial synapse for neuromorphic computing by Yoeri van de Burgt, Ewout Lubberman, Elliot J. Fuller, Scott T. Keene, Grégorio C. Faria, Sapan Agarwal, Matthew J. Marinella, A. Alec Talin, & Alberto Salleo. Nature Materials (2017) doi:10.1038/nmat4856 Published online 20 February 2017

This paper is behind a paywall.

ETA March 8, 2017 10:28 PST: You may find this piece on ferroelectricity and neuromorphic engineering of interest (March 7, 2017 posting titled: Ferroelectric roadmap to neuromorphic computing).

Going underground to observe atoms in a bid for better batteries

A Jan. 16, 2017 news item on ScienceDaily describes what lengths researchers at Stanford University (US) will go to in pursuit of their goals,

In a lab 18 feet below the Engineering Quad of Stanford University, researchers in the Dionne lab camped out with one of the most advanced microscopes in the world to capture an unimaginably small reaction.

The lab members conducted arduous experiments — sometimes requiring a continuous 30 hours of work — to capture real-time, dynamic visualizations of atoms that could someday help our phone batteries last longer and our electric vehicles go farther on a single charge.

Toiling underground in the tunneled labs, they recorded atoms moving in and out of nanoparticles less than 100 nanometers in size, with a resolution approaching 1 nanometer.

A Jan. 16, 2017 Stanford University news release (also on EurekAlert) by Taylor Kubota, which originated the news item, provides more detail,

“The ability to directly visualize reactions in real time with such high resolution will allow us to explore many unanswered questions in the chemical and physical sciences,” said Jen Dionne, associate professor of materials science and engineering at Stanford and senior author of the paper detailing this work, published Jan. 16 [2017] in Nature Communications. “While the experiments are not easy, they would not be possible without the remarkable advances in electron microscopy from the past decade.”

Their experiments focused on hydrogen moving into palladium, an example of a class of reactions known as intercalation-driven phase transitions. This reaction is physically analogous to how ions flow through a battery or fuel cell during charging and discharging. Observing this process in real time provides insight into why nanoparticles make better electrodes than bulk materials and fits into Dionne’s larger interest in energy storage devices that can charge faster, hold more energy and stave off permanent failure.

Technical complexity and ghosts

For these experiments, the Dionne lab created palladium nanocubes, a form of nanoparticle, ranging in size from about 15 to 80 nanometers, and then placed them in a hydrogen gas environment within an electron microscope. The researchers knew that hydrogen would change both the dimensions of the lattice and the electronic properties of the nanoparticle. They thought that, with the appropriate microscope lens and aperture configuration, techniques called scanning transmission electron microscopy and electron energy loss spectroscopy might show hydrogen uptake in real time.
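To get a rough sense of the scale being imaged, it’s worth estimating how many atoms one of these reactions involves. The sketch below uses textbook values, the fcc lattice constant of palladium (~0.389 nm, four atoms per conventional cell) and an approximate hydride-phase H/Pd ratio of 0.7; these are my assumptions, not numbers from the paper.

```python
# Back-of-envelope: atoms involved when a single palladium nanocube
# hydrogenates. Lattice constant and the hydride-phase H/Pd ratio are
# textbook values (my assumptions), not numbers from the paper.
A_PD = 0.389        # nm, fcc lattice constant of palladium
ATOMS_PER_CELL = 4  # atoms per fcc conventional cell
H_PER_PD = 0.7      # approximate H/Pd ratio in the hydride phase

def nanocube_inventory(edge_nm):
    cells = (edge_nm / A_PD) ** 3
    pd_atoms = ATOMS_PER_CELL * cells
    return pd_atoms, H_PER_PD * pd_atoms

for edge in (15, 80):  # the size range studied, in nanometers
    pd, h = nanocube_inventory(edge)
    print(f"{edge} nm cube: ~{pd:.1e} Pd atoms, ~{h:.1e} H atoms absorbed")
```

Even the smallest cubes involve hundreds of thousands of atoms, which is why resolving the reaction at the ~1 nanometer level is such a feat.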

After months of trial and error, the results were extremely detailed, real-time videos of the changes in the particle as hydrogen was introduced. The entire process was so complicated and novel that the first time it worked, the lab didn’t even have the video software running, leading them to capture their first movie success on a smartphone.

Following these videos, they examined the nanocubes during intermediate stages of hydrogenation using a second technique in the microscope, called dark-field imaging, which relies on scattered electrons. In order to pause the hydrogenation process, the researchers plunged the nanocubes into a liquid nitrogen bath mid-reaction, dropping their temperature to 100 kelvin (about -280 °F). These dark-field images served as a way to check that the application of the electron beam hadn’t influenced the previous observations and allowed the researchers to see detailed structural changes during the reaction.

“With the average experiment spanning about 24 hours at this low temperature, we faced many instrument problems and called Ai Leen Koh [co-author and research scientist at Stanford’s Nano Shared Facilities] at the weirdest hours of the night,” recalled Fariah Hayee, co-lead author of the study and graduate student in the Dionne lab. “We even encountered a ‘ghost-of-the-joystick problem,’ where the joystick seemed to move the sample uncontrollably for some time.”

While most electron microscopes operate with the specimen held in a vacuum, the microscope used for this research has the advanced ability to allow the researchers to introduce liquids or gases to their specimen.

“We benefit tremendously from having access to one of the best microscope facilities in the world,” said Tarun Narayan, co-lead author of this study and recent doctoral graduate from the Dionne lab. “Without these specific tools, we wouldn’t be able to introduce hydrogen gas or cool down our samples enough to see these processes take place.”

Pushing out imperfections

Aside from being a widely applicable proof of concept for this suite of visualization techniques, watching the atoms move provides greater validation for the high hopes many scientists have for nanoparticle energy storage technologies.

The researchers saw the atoms move in through the corners of the nanocube and observed the formation of various imperfections within the particle as hydrogen moved within it. This may sound like an argument against the promise of nanoparticles, but it’s not the whole story.

“The nanoparticle has the ability to self-heal,” said Dionne. “When you first introduce hydrogen, the particle deforms and loses its perfect crystallinity. But once the particle has absorbed as much hydrogen as it can, it transforms itself back to a perfect crystal again.”

The researchers describe this as imperfections being “pushed out” of the nanoparticle. This ability of the nanocube to self-heal makes it more durable, a key property needed for energy storage materials that can sustain many charge and discharge cycles.

Looking toward the future

As the efficiency of renewable energy generation increases, the need for higher-quality energy storage is more pressing than ever. It’s likely that the future of storage will rely on new chemistries, and the findings of this research, including the microscopy techniques the researchers refined along the way, will apply to nearly any solution in those categories.

For its part, the Dionne lab has many directions it can go from here. The team could look at a variety of material compositions, compare how the sizes and shapes of nanoparticles affect the way they work and, soon, take advantage of new upgrades to their microscope to study light-driven reactions. At present, Hayee has moved on to experimenting with nanorods, which have more surface area for the ions to move through, promising potentially even faster kinetics.

Here’s a link to and a citation for the paper,

Direct visualization of hydrogen absorption dynamics in individual palladium nanoparticles by Tarun C. Narayan, Fariah Hayee, Andrea Baldi, Ai Leen Koh, Robert Sinclair, & Jennifer A. Dionne. Nature Communications 8, Article number: 14020 (2017) doi:10.1038/ncomms14020 Published online: 16 January 2017

This paper is open access.

Novel self-assembly at 102 atoms

A Jan. 13, 2017 news item on ScienceDaily announces a discovery about self-assembly of 102-atom gold nanoclusters,

Self-assembly of matter is one of the fundamental principles of nature, directing the growth of larger ordered and functional systems from smaller building blocks. Self-assembly can be observed in all length scales from molecules to galaxies. Now, researchers at the Nanoscience Centre of the University of Jyväskylä and the HYBER Centre of Excellence of Aalto University in Finland report a novel discovery of self-assembling two- and three-dimensional materials that are formed by tiny gold nanoclusters of just a couple of nanometres in size, each having 102 gold atoms and a surface layer of 44 thiol molecules. The study, conducted with funding from the Academy of Finland and the European Research Council, has been published in Angewandte Chemie.

A Jan. 13, 2017 Academy of Finland press release, which originated the news item, provides more technical information about the work,

The atomic structure of the 102-atom gold nanocluster was first resolved by the group of Roger D Kornberg at Stanford University in 2007 (2). Since then, several further studies of its properties have been conducted in the Jyväskylä Nanoscience Centre, where it has also been used for electron microscopy imaging of virus structures (3). The thiol surface of the nanocluster has a large number of acidic groups that can form directed hydrogen bonds to neighbouring nanoclusters and initiate directed self-assembly.

The self-assembly of gold nanoclusters took place in a water-methanol mixture and produced two distinctly different superstructures that were imaged in a high-resolution electron microscope at Aalto University. In one of the structures, two-dimensional hexagonally ordered layers of gold nanoclusters were stacked together, each layer being just one nanocluster thick. When the synthesis conditions were modified, three-dimensional spherical, hollow capsid structures were also observed, where the thickness of the capsid wall again corresponds to just one nanocluster (see figure).
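Because both superstructures are just one nanocluster thick, their geometry is easy to reason about. Here’s a back-of-envelope sketch assuming ideal hexagonal packing of the ~2 nm clusters; the 200 nm capsid diameter is a hypothetical figure of mine, chosen only for illustration.

```python
import math

# Rough geometry of the observed superstructures: a hexagonally packed
# monolayer sheet and a hollow spherical capsid one cluster thick.
# The ~2 nm cluster size is from the news release; the capsid diameter
# below is a hypothetical figure chosen for illustration.
D_CLUSTER = 2.0                               # nm, nanocluster diameter
HEX_PACKING = math.pi / (2 * math.sqrt(3))    # ~0.907, densest 2D packing

def clusters_covering(area_nm2):
    """Clusters in a hexagonally packed monolayer covering a given area."""
    return HEX_PACKING * area_nm2 / (math.pi * (D_CLUSTER / 2) ** 2)

print(f"1 um^2 of sheet: ~{clusters_covering(1e6):,.0f} clusters")

d_capsid = 200.0                              # nm, hypothetical
shell_area = math.pi * d_capsid ** 2          # surface area of a sphere
print(f"{d_capsid:.0f} nm capsid: ~{clusters_covering(shell_area):,.0f} clusters")
```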

While the details of the formation mechanisms of these superstructures warrant further systematic investigation, the initial observations open several new perspectives on synthetically made self-assembling nanomaterials.

“Today, we know of several tens of different types of atomistically precise gold nanoclusters, and I believe they can exhibit a wide variety of self-assembling growth patterns that could produce a range of new meta-materials,” said Academy Professor Hannu Häkkinen, who coordinated the research at the Nanoscience Centre. “In biology, typical examples of self-assembling functional systems are viruses and vesicles. Biological self-assembled structures can also be de-assembled by gentle changes in the surrounding biochemical conditions. It’ll be of great interest to see whether these gold-based materials can be de-assembled and then re-assembled to different structures by changing something in the chemistry of the surrounding solvent.”

“The free-standing two-dimensional nanosheets will bring opportunities towards new-generation functional materials, and the hollow capsids will pave the way for highly lightweight colloidal framework materials,” Postdoctoral Researcher Nonappa (Aalto University) said.

Professor Olli Ikkala of Aalto University said: “In a broader framework, it has remained a grand challenge to master self-assemblies through all length scales to tune the functional properties of materials in a rational way. So far, it has been commonly considered sufficient to achieve narrow size distributions of the constituent nanoscale structural units to obtain well-defined structures. The present findings suggest a paradigm change to pursue strictly defined nanoscale units for self-assemblies.”


(1) Nonappa, T. Lahtinen, J.S. Haataja, T.-R. Tero, H. Häkkinen and O. Ikkala, “Template-Free Supracolloidal Self-Assembly of Atomically Precise Gold Nanoclusters: From 2D Colloidal Crystals to Spherical Capsids”, Angewandte Chemie International Edition, published online 23 November 2016, DOI: 10.1002/anie.201609036

(2) P. Jadzinsky et al., “Structure of a thiol-monolayer protected gold nanoparticle at 1.1 Å resolution”, Science 318, 430 (2007)

(3) V. Marjomäki et al., “Site-specific targeting of enterovirus capsid by functionalized monodispersed gold nanoclusters”, PNAS 111, 1277 (2014)

Here’s the figure mentioned in the news release,

Figure: 2D hexagonal sheet-like and 3D capsid structures based on atomically precise gold nanoclusters as guided by hydrogen bonding between the ligands. The inset in the top left corner shows the atomic structure of one gold nanocluster.

Here’s a link to and a citation for the paper,

Template-Free Supracolloidal Self-Assembly of Atomically Precise Gold Nanoclusters: From 2D Colloidal Crystals to Spherical Capsids by Dr. Nonappa, Dr. Tanja Lahtinen, M.Sc. Johannes S. Haataja, Dr. Tiia-Riikka Tero, Prof. Hannu Häkkinen, and Prof. Olli Ikkala. Angewandte Chemie International Edition, Volume 55, Issue 52, pages 16035–16038, December 23, 2016. Version of Record online: 23 November 2016. DOI: 10.1002/anie.201609036

© 2016 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim

This paper is behind a paywall.

‘Smart’ fabric that’s bony

Researchers at Australia’s University of New South Wales (UNSW) have devised a means of ‘weaving’ a material that mimics *bone tissue, periosteum, according to a Jan. 11, 2017 news item on ScienceDaily,

For the first time, UNSW [University of New South Wales] biomedical engineers have woven a ‘smart’ fabric that mimics the sophisticated and complex properties of one of nature’s ingenious materials, the bone tissue periosteum.

Having achieved proof of concept, the researchers are now ready to produce fabric prototypes for a range of advanced functional materials that could transform the medical, safety and transport sectors. Patents for the innovation are pending in Australia, the United States and Europe.

Potential future applications range from protective suits that stiffen under high impact for skiers, racing-car drivers and astronauts, through to ‘intelligent’ compression bandages for deep-vein thrombosis that respond to the wearer’s movement, and safer steel-belt radial tyres.

A Jan. 11, 2017 UNSW press release on EurekAlert, which originated the news item, expands on the theme,

Many animal and plant tissues exhibit ‘smart’ and adaptive properties. One such material is the periosteum, a soft tissue sleeve that envelops most bony surfaces in the body. The complex arrangement of collagen, elastin and other structural proteins gives periosteum amazing resilience and provides bones with added strength under high impact loads.

Until now, a lack of scalable ‘bottom-up’ approaches has stymied researchers’ ability to use smart tissues to create advanced functional materials.

UNSW’s Paul Trainor Chair of Biomedical Engineering, Professor Melissa Knothe Tate, said her team had for the first time mapped the complex tissue architectures of the periosteum, visualised them in 3D on a computer, scaled up the key components and produced prototypes using weaving loom technology.

“The result is a series of textile swatch prototypes that mimic periosteum’s smart stress-strain properties. We have also demonstrated the feasibility of using this technique to test other fibres to produce a whole range of new textiles,” Professor Knothe Tate said.

In order to understand the functional capacity of the periosteum, the team used an incredibly high-fidelity imaging system to investigate and map its architecture.

“We then tested the feasibility of rendering periosteum’s natural tissue weaves using computer-aided design software,” Professor Knothe Tate said.

The computer modelling allowed the researchers to scale up nature’s architectural patterns to weave periosteum-inspired, multidimensional fabrics using a state-of-the-art computer-controlled jacquard loom. The loom is known as the original rudimentary computer, first unveiled in 1801.

“The challenge with using collagen and elastin is that their fibres are too small to fit into the loom. So we used elastic material that mimics elastin and silk that mimics collagen,” Professor Knothe Tate said.

In a first test of the scaled-up tissue weaving concept, a series of textile swatch prototypes were woven, using specific combinations of collagen and elastin in a twill pattern designed to mirror periosteum’s weave. Mechanical testing of the swatches showed they exhibited similar properties found in periosteum’s natural collagen and elastin weave.
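To make the twill idea concrete, here is a minimal sketch of a weave as a binary interlacement matrix with a stiffness gradient across the warp threads. The 2/1 twill geometry and the fibre stiffness values are my own illustrative assumptions; the paper’s actual weave parameters may differ.

```python
import numpy as np

# A twill weave as a binary interlacement matrix (1 = warp passes over
# weft, 0 = weft passes over warp), plus a crude stiffness gradient
# across the warp. The 2/1 twill and the fibre stiffness values are
# illustrative assumptions, not the paper's actual parameters.
def twill(n_warp, n_weft, over=2, under=1):
    rows = np.arange(n_weft)[:, None]
    cols = np.arange(n_warp)[None, :]
    return (((cols - rows) % (over + under)) < over).astype(int)

pattern = twill(n_warp=12, n_weft=8)

# Grade the warp from a compliant elastin-mimicking fibre (0.1) to a
# stiff silk, collagen-mimicking fibre (1.0), in arbitrary units.
stiffness = np.linspace(0.1, 1.0, pattern.shape[1])
face_stiffness = pattern * stiffness   # stiffness presented on the fabric face

print(pattern)
print(np.round(face_stiffness, 2))
```

The point of the gradient is the one Knothe Tate makes later in this post: the woven fabric carries a pattern of mechanical properties rather than a pattern of colours.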

First author and biomedical engineering PhD candidate, Joanna Ng, said the technique had significant implications for the development of next-generation advanced materials and mechanically functional textiles.

While the materials produced by the jacquard loom have potential manufacturing applications – one tyremaker believes a titanium weave could spawn a new generation of thinner, stronger and safer steel-belt radials – the UNSW team is ultimately focused on the machine’s human potential.

“Our longer term goal is to weave biological tissues – essentially human body parts – in the lab to replace and repair our failing joints that reflect the biology, architecture and mechanical properties of the periosteum,” Ms Ng said.

An NHMRC development grant received in November [2016] will allow the team to take its research to the next phase. The researchers will work with the Cleveland Clinic and the University of Sydney’s Professor Tony Weiss to develop and commercialise prototype bone implants for pre-clinical research, using the ‘smart’ technology, within three years.

In searching for more information about this work, I found a Winter 2015 article (PDF; pp. 8-11) by Amy Coopes and Steve Offner for UNSW Magazine about Knothe Tate and her work (Note: In Australia, winter would be what we in the Northern Hemisphere consider summer),

Tucked away in a small room in UNSW’s Graduate School of Biomedical Engineering sits a 19th century–era weaver’s wooden loom. Operated by punch cards and hooks, the machine was the first rudimentary computer when it was unveiled in 1801. While on the surface it looks like a standard Jacquard loom, it has been enhanced with motherboards integrated into each of the loom’s five hook modules and connected to a computer. This state-of-the-art technology means complex algorithms control each of the 5,000 feed-in fibres with incredible precision.

That capacity means the loom can weave with an extraordinary variety of substances, from glass and titanium to rayon and silk, a development that has attracted industry attention around the world.

The interest lies in the natural advantage woven materials have over other manufactured substances. Instead of manipulating material to create new shades or hues as in traditional weaving, the fabrics’ mechanical properties can be modulated, to be stiff at one end, for example, and more flexible at the other.

“Instead of a pattern of colours we get a pattern of mechanical properties,” says Melissa Knothe Tate, UNSW’s Paul Trainor Chair of Biomedical Engineering. “Think of a rope; it’s uniquely good in tension and in bending. Weaving is naturally strong in that way.”

The interface of mechanics and physiology is the focus of Knothe Tate’s work. In March [2015], she travelled to the United States to present another aspect of her work at a meeting of the international Orthopedic Research Society in Las Vegas. That project – which has been dubbed “Google Maps for the body” – explores the interaction between cells and their environment in osteoporosis and other degenerative musculoskeletal conditions such as osteoarthritis.

Using previously top-secret semiconductor technology developed by optics giant Zeiss, and the same approach used by Google Maps to locate users with pinpoint accuracy, Knothe Tate and her team have created “zoomable” anatomical maps from the scale of a human joint down to a single cell.

She has also spearheaded a groundbreaking partnership that includes the Cleveland Clinic, and Brown and Stanford universities to help crunch terabytes of data gathered from human hip studies – all processed with the Google technology. Analysis that once took 25 years can now be done in a matter of weeks, bringing researchers ever closer to a set of laws that govern biological behaviour. [p. 9]
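As an aside, the tile-pyramid arithmetic behind that “Google Maps” analogy is easy to sketch. The length scales below (a ~10 cm hip joint down to ~10 μm cells) and the 256-pixel tiles are assumptions of mine for illustration, not figures from the project.

```python
import math

# How many factor-of-two zoom levels would a "Google Maps for the body"
# need to span a hip joint down to a single cell? The length scales and
# tile size are illustrative assumptions, not figures from the project.
FIELD_OF_VIEW_M = 0.1   # ~10 cm, a whole hip joint
CELL_SIZE_M = 10e-6     # ~10 micrometres, roughly one cell
TILE_PIXELS = 256       # standard web-map tile width

levels = math.ceil(math.log2(FIELD_OF_VIEW_M / CELL_SIZE_M))
pixels_across = FIELD_OF_VIEW_M / CELL_SIZE_M   # if one pixel = one cell
tiles_across = math.ceil(pixels_across / TILE_PIXELS)

print(f"Zoom levels (factor of 2 each): {levels}")
print(f"Deepest level: ~{tiles_across} x {tiles_across} tiles of {TILE_PIXELS} px")
```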

I gather she was recruited from the US to work at the University of New South Wales, and this article was meant to highlight why they recruited her and to promote the university’s biomedical engineering department, which she chairs.

Getting back to 2017, here’s a link to and citation for the paper,

Scale-up of nature’s tissue weaving algorithms to engineer advanced functional materials by Joanna L. Ng, Lillian E. Knothe, Renee M. Whan, Ulf Knothe & Melissa L. Knothe Tate. Scientific Reports 7, Article number: 40396 (2017) doi:10.1038/srep40396 Published online: 11 January 2017

This paper is open access.

One final comment: that’s a lot of people (three out of five) with the last name Knothe in the author list for the paper.

*’the bone tissue’ changed to ‘bone tissue’ on July 17, 2017.

Investigating nanoparticles and their environmental impact for industry?

It seems the Center for the Environmental Implications of Nanotechnology (CEINT) at Duke University (North Carolina, US) is making an adjustment to its focus and opening the door to industry as well as government research. For some years (my first post about the CEINT at Duke University is an Aug. 15, 2011 post about its mesocosms), it has focused on examining the impact of nanoparticles (also called nanomaterials) on plant life and aquatic systems. This Jan. 9, 2017 US National Science Foundation (NSF) news release (h/t Jan. 9, 2017 Nanotechnology Now news item) provides a general description of the work,

We can’t see them, but nanomaterials, both natural and manmade, are literally everywhere, from our personal care products to our building materials–we’re even eating and drinking them.

At the NSF-funded Center for Environmental Implications of Nanotechnology (CEINT), headquartered at Duke University, scientists and engineers are researching how some of these nanoscale materials affect living things. One of CEINT’s main goals is to develop tools that can help assess possible risks to human health and the environment. A key aspect of this research happens in mesocosms, which are outdoor experiments that simulate the natural environment – in this case, wetlands. These simulated wetlands in Duke Forest serve as a testbed for exploring how nanomaterials move through an ecosystem and impact living things.

CEINT is a collaborative effort bringing together researchers from Duke, Carnegie Mellon University, Howard University, Virginia Tech, University of Kentucky, Stanford University, and Baylor University. CEINT academic collaborations include on-going activities coordinated with faculty at Clemson, North Carolina State and North Carolina Central universities, with researchers at the National Institute of Standards and Technology and the Environmental Protection Agency labs, and with key international partners.

The research in this episode was supported by NSF award #1266252, Center for the Environmental Implications of NanoTechnology.

The mention of industry is in this video by O’Brien and Kellan, which describes CEINT’s latest work,

Somewhat similar in approach, although without a direct reference to industry, Canada’s Experimental Lakes Area (ELA) is being used as a test site for silver nanoparticles. Here’s more from the Distilling Science at the Experimental Lakes Area: Nanosilver project page,

Water researchers are interested in nanotechnology and one of its most commonplace applications: nanosilver. Today these tiny particles with anti-microbial properties are being used in a wide range of consumer products. The problem with nanoparticles is that we don’t fully understand what happens when they are released into the environment.

The research at the IISD-ELA [International Institute for Sustainable Development Experimental Lakes Area] will look at the impacts of nanosilver on ecosystems. What happens when it gets into the food chain? And how does it affect plants and animals?

Here’s a video describing the Nanosilver project at the ELA,

You may have noticed a certain tone to the video; it is due to some political shenanigans, which are described in this Aug. 8, 2016 article by Bartley Kives for the Canadian Broadcasting Corporation’s (CBC) online news.

Bionic pancreas tested at home

This news about a bionic pancreas must be exciting for diabetics as it would eliminate the need for constant blood sugar testing throughout the day. From a Dec. 19, 2016 Massachusetts General Hospital news release (also on EurekAlert), Note: Links have been removed,

The bionic pancreas system developed by Boston University (BU) investigators proved better than either conventional or sensor-augmented insulin pump therapy at managing blood sugar levels in patients with type 1 diabetes living at home, with no restrictions, over 11 days. The report of a clinical trial led by a Massachusetts General Hospital (MGH) physician is receiving advance online publication in The Lancet.

“For study participants living at home without limitations on their activity and diet, the bionic pancreas successfully reduced average blood glucose, while at the same time decreasing the risk of hypoglycemia,” says Steven Russell, MD, PhD, of the MGH Diabetes Unit. “This system requires no information other than the patient’s body weight to start, so it will require much less time and effort by health care providers to initiate treatment. And since no carbohydrate counting is required, it significantly reduces the burden on patients associated with diabetes management.”

Developed by Edward Damiano, PhD, and Firas El-Khatib, PhD, of the BU Department of Biomedical Engineering, the bionic pancreas controls patients’ blood sugar with both insulin and glucagon, a hormone that increases glucose levels. After a 2010 clinical trial confirmed that the original version of the device could maintain near-normal blood sugar levels for more than 24 hours in adult patients, two follow-up trials – reported in a 2014 New England Journal of Medicine paper – showed that an updated version of the system successfully controlled blood sugar levels in adults and adolescents for five days. Another follow-up trial published in The Lancet Diabetes and Endocrinology in 2016 showed it could do the same for children as young as 6 years of age.

While minimal restrictions were placed on participants in the 2014 trials, participants in both spent nights in controlled settings: they either were accompanied at all times by a nurse (in the adult trial) or remained in a diabetes camp (in the adolescent and pre-adolescent trials). Participants in the current trial had no such restrictions placed upon them, as they were able to pursue normal activities at home or at work with no imposed limitations on diet or exercise. Patients needed to live within a 30-minute drive of one of the trial sites – MGH, the University of Massachusetts Medical School, Stanford University, and the University of North Carolina at Chapel Hill – and needed to designate a contact person who lived with them and could be contacted by study staff, if necessary.

The bionic pancreas system – the same as that used in the 2014 studies – consisted of a smartphone (iPhone 4S) that could wirelessly communicate with two pumps delivering either insulin or glucagon. Every five minutes the smartphone received a reading from an attached continuous glucose monitor, which was used to calculate and administer a dose of either insulin or glucagon. The algorithms controlling the system were updated for the current trial to better respond to blood sugar variations.

While the device allows participants to enter information about each upcoming meal into a smartphone app, allowing the system to deliver an anticipatory insulin dose, such entries were optional in the current trial. If participants’ blood sugar dropped to dangerous levels or if the monitor or one of the pumps was disconnected for more than 15 minutes, the system would alert study staff, allowing them to check with the participants or their contact persons.
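Putting that description together, here is a deliberately simplified sketch of such a control cycle. The read_cgm, deliver and alert_staff callbacks are hypothetical placeholders, and the glucose target and proportional dosing gains are illustrative inventions of mine; the actual BU algorithms are adaptive and far more sophisticated.

```python
import time

# Deliberately simplified sketch of the control cycle described above:
# every five minutes, read the continuous glucose monitor and dose
# insulin (glucose high) or glucagon (glucose low); alert study staff if
# hardware stays disconnected for more than 15 minutes. The target and
# gains below are illustrative inventions, not the BU algorithm.
TARGET_MG_DL = 110            # illustrative glucose target
CYCLE_S = 5 * 60              # one reading every five minutes
DISCONNECT_LIMIT_S = 15 * 60  # alert threshold from the trial description

def dose(glucose_mg_dl):
    """Return (insulin_units, glucagon_units) for one cycle."""
    error = glucose_mg_dl - TARGET_MG_DL
    if error > 0:
        return 0.01 * error, 0.0      # above target: a little insulin
    return 0.0, 0.005 * -error        # below target: a little glucagon

def control_loop(read_cgm, deliver, alert_staff):
    """read_cgm() returns mg/dl, or None if the sensor/pump is disconnected."""
    last_ok = time.monotonic()
    while True:
        reading = read_cgm()
        if reading is None:
            if time.monotonic() - last_ok > DISCONNECT_LIMIT_S:
                alert_staff("monitor or pump disconnected > 15 minutes")
        else:
            last_ok = time.monotonic()
            deliver(*dose(reading))
        time.sleep(CYCLE_S)
```

The bihormonal design is what makes the low side of the loop possible: an insulin-only pump can merely withhold insulin when glucose falls, whereas this system can actively push glucose back up with glucagon.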

Study participants were adults who had been diagnosed with type 1 diabetes for a year or more and had used an insulin pump to manage their care for at least six months. Each of the 39 participants who finished the study completed two 11-day study periods, one using the bionic pancreas and one using their usual insulin pump and any continuous glucose monitor they had been using. In addition to the automated monitoring of glucose levels and administered doses of insulin or glucagon, participants completed daily surveys regarding any episodes of symptomatic hypoglycemia, carbohydrates consumed to treat those episodes, and any episodes of nausea.

On days when participants were on the bionic pancreas, their average blood glucose levels were significantly lower – 141 mg/dl versus 162 mg/dl – than when on their standard treatment. Blood sugar was in the hypoglycemic range (less than 60 mg/dl) 0.6 percent of the time when participants were on the bionic pancreas, versus 1.9 percent of the time on standard treatment. Participants reported fewer episodes of symptomatic hypoglycemia while on the bionic pancreas, and no episodes of severe hypoglycemia were associated with the system.
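For the record, these summary metrics are simple to compute from a CGM trace sampled every five minutes. The snippet below uses synthetic stand-in data, not trial data:

```python
import numpy as np

# Computing the trial's headline metrics from a CGM trace sampled every
# five minutes. The trace here is synthetic stand-in data, not trial data.
rng = np.random.default_rng(1)
trace = rng.normal(141, 35, size=11 * 24 * 12)   # 11 days x 12 readings/hour

print(f"Mean glucose: {trace.mean():.0f} mg/dl")
print(f"Time below 60 mg/dl: {100 * (trace < 60).mean():.1f}% of readings")
```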

The system performed even better during the overnight period, when the risk of hypoglycemia is particularly concerning. “Patients with type 1 diabetes worry about developing hypoglycemia when they are sleeping and tend to let their blood sugar run high at night to reduce that risk,” explains Russell, an assistant professor of Medicine at Harvard Medical School. “Our study showed that the bionic pancreas reduced the risk of overnight hypoglycemia to almost nothing without raising the average glucose level. In fact the improvement in average overnight glucose was greater than the improvement in average glucose over the full 24-hour period.”

Damiano, whose work on this project is inspired by his own 17-year-old son’s type 1 diabetes, adds, “The availability of the bionic pancreas would dramatically change the life of people with diabetes by reducing average glucose levels – thereby reducing the risk of diabetes complications – reducing the risk of hypoglycemia, which is a constant fear of patients and their families, and reducing the emotional burden of managing type 1 diabetes.” A co-author of the Lancet report, Damiano is a professor of Biomedical Engineering at Boston University.

The BU patents covering the bionic pancreas have been licensed to Beta Bionics, a startup company co-founded by Damiano and El-Khatib. The company’s latest version of the bionic pancreas, called the iLet, integrates all components into a single unit, which will be tested in future clinical trials. People interested in participating in upcoming trials may contact Russell’s team at the MGH Diabetes Research Center in care of Llazar Cuko (LCUKO@mgh.harvard.edu).

Here’s a link to and a citation for the paper,

Home use of a bihormonal bionic pancreas versus insulin pump therapy in adults with type 1 diabetes: a multicentre randomised crossover trial by Firas H El-Khatib, Courtney Balliro, Mallory A Hillard, Kendra L Magyar, Laya Ekhlaspour, Manasi Sinha, Debbie Mondesir, Aryan Esmaeili, Celia Hartigan, Michael J Thompson, Samir Malkani, J Paul Lock, David M Harlan, Paula Clinton, Eliana Frank, Darrell M Wilson, Daniel DeSalvo, Lisa Norlander, Trang Ly, Bruce A Buckingham, Jamie Diner, Milana Dezube, Laura A Young, April Goley, M Sue Kirkman, John B Buse, Hui Zheng, Rajendranath R Selagamsetty, Edward R Damiano, Steven J Russell. Lancet DOI: http://dx.doi.org/10.1016/S0140-6736(16)32567-3 Published: 19 December 2016

This paper is behind a paywall.

You can find out more about Beta Bionics and iLet here.