Tag Archives: Andrew Myers

Are nano electronics as good as gold?

“As good as gold” was a behavioural goal when I was a child. It turns out, the same can be said of gold in electronic devices according to the headline for a March 26, 2020 news item on Nanowerk (Note: Links have been removed),

As electronics shrink to nanoscale, will they still be good as gold?

Deep inside computer chips, tiny wires made of gold and other conductive metals carry the electricity used to process data.

But as these interconnected circuits shrink to nanoscale, engineers worry that pressure, such as that caused by thermal expansion when current flows through these wires, might cause gold to behave more like a liquid than a solid, making nanoelectronics unreliable. That, in turn, could force chip designers to hunt for new materials to make these critical wires.

But according to a new paper in Physical Review Letters (“Nucleation of Dislocations in 3.9 nm Nanocrystals at High Pressure”), chip designers can rest easy. “Gold still behaves like a solid at these small scales,” says Stanford mechanical engineer Wendy Gu, who led a team that figured out how to pressurize gold particles just 4 nanometers in length — the smallest particles ever measured — to assess whether current flows might cause the metal’s atomic structure to collapse.

I have seen the issue about gold as a metal or liquid before but I can’t find it here (search engines, sigh). However, I found this somewhat related story from almost five years ago. In my April 14, 2015 posting (Gold atoms: sometimes they’re a metal and sometimes they’re a molecule), there was news that the number of gold atoms present can mean the difference between being a metal and being a molecule. This could have implications as circuit elements (which include some gold in their fabrication) shrink down past a certain point.

A March 24, 2020 Stanford University news release (also on Eurekalert but published on March 25, 2020) by Andrew Myers, which originated the news item, provides details about research designed to investigate a similar question, i.e., can we continue to use gold as we shrink the scale?*,

To conduct the experiment, Gu’s team first had to devise a way to put tiny gold particles under extreme pressure, while simultaneously measuring how much that pressure damaged gold’s atomic structure.

To solve the first problem, they turned to the field of high-pressure physics to borrow a device known as a diamond anvil cell. As the name implies, both hammer and anvil are diamonds that are used to compress the gold. As Gu explained, a nanoparticle of gold is built like a skyscraper with atoms forming a crystalline lattice of neat rows and columns. She knew that pressure from the anvil would dislodge some atoms from the crystal and create tiny defects in the gold.

The next challenge was to detect these defects in nanoscale gold. The scientists shined X-rays through the diamond onto the gold. Defects in the crystal caused the X-rays to reflect at different angles than they would on uncompressed gold. By measuring variations in the angles at which the X-rays bounced off the particles before and after pressure was applied, the team was able to tell whether the particles retained the deformations or reverted to their original state when pressure was lifted.
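For those who like to see the geometry, the angle trick is Bragg diffraction: X-rays reflect strongly when the wavelength and the spacing between atomic planes satisfy nλ = 2d sin θ, so squeezing the lattice (or disturbing it with defects) shifts the angle at which the reflection appears. Here’s a rough sketch in Python; note that the wavelength and lattice spacing below are textbook values I’ve picked for illustration, not the beamline settings from the paper,

```python
import math

# Bragg's law: n * wavelength = 2 * d * sin(theta). The diffraction angle
# theta depends directly on the spacing d between atomic planes, so
# compressing the crystal (shrinking d) pushes theta to a larger angle.

WAVELENGTH = 0.154   # nm; Cu K-alpha, a common lab X-ray line (illustrative)
D_GOLD_111 = 0.2355  # nm; approximate spacing of gold's (111) planes

def bragg_angle(d, wavelength=WAVELENGTH, n=1):
    """First-order Bragg angle in degrees for lattice spacing d (nm)."""
    return math.degrees(math.asin(n * wavelength / (2 * d)))

print(f"uncompressed:  {bragg_angle(D_GOLD_111):.2f} degrees")          # ~19.09
print(f"1% compressed: {bragg_angle(D_GOLD_111 * 0.99):.2f} degrees")   # ~19.29
```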

In practical terms, her findings mean that chipmakers can know with certainty that they’ll be able to design stable nanodevices using gold — a material they have known and trusted for decades — for years to come.

“For the foreseeable future, gold’s luster will not fade,” Gu says.

*The 2015 research measured the gold nanoclusters by the number of atoms within the cluster, with the changes occurring somewhere between 102 atoms and 144 atoms. This 2020 work measures the amount of gold in nanometers, as in 3.9 nm gold nanocrystals. So, how many gold atoms are in a nanometer-sized particle? Cathy Murphy provides the answer and the way to calculate it for yourself in a July 26, 2016 posting on the Sustainable Nano blog (a blog by the Center for Sustainable Nanotechnology),

Two years ago, I wrote a blog post called Two Ways to Make Nanoparticles, describing the difference between top-down and bottom-up methods for making nanoparticles. In the post I commented, “we can estimate, knowing how gold atoms pack into crystals, that there are about 2000 gold atoms in one 4 nm diameter gold nanoparticle.” Recently, a Sustainable Nano reader wrote in to ask about how this calculation is done. It’s a great question!

So, a 3.9 nm gold nanocrystal contains approximately 2000 gold atoms. (If you have time, do read Murphy’s description of how to determine the number of gold atoms in a gold nanoparticle.) So, this research does not answer the question posed by the 2015 research.
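If you don’t have time for Murphy’s post, the arithmetic is easy to reproduce: gold packs into a face-centred cubic crystal with four atoms per cubic unit cell, so you divide the particle’s volume by the unit-cell volume and multiply by four. Here’s a quick sketch in Python using the handbook value for gold’s lattice constant,

```python
import math

# Estimate the number of atoms in a spherical gold nanoparticle.
# Gold is face-centred cubic (FCC): 4 atoms per cubic unit cell.
LATTICE_CONSTANT = 0.408  # nm; handbook value for gold's FCC lattice
ATOMS_PER_CELL = 4

def gold_atoms(diameter_nm):
    """Approximate atom count for a spherical gold particle."""
    particle_volume = (4 / 3) * math.pi * (diameter_nm / 2) ** 3  # nm^3
    cell_volume = LATTICE_CONSTANT ** 3                           # nm^3
    return particle_volume / cell_volume * ATOMS_PER_CELL

print(round(gold_atoms(3.9)))  # ~1800 atoms
print(round(gold_atoms(4.0)))  # ~2000 atoms, matching Murphy's estimate
```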

It may take years before researchers can devise tests for gold nanoclusters consisting of 102 atoms as opposed to nanoparticles consisting of 2000 atoms. In the meantime, here’s a link to and a citation for the latest on how gold reacts as we shrink the size of our electronics,

Nucleation of Dislocations in 3.9 nm Nanocrystals at High Pressure by Abhinav Parakh, Sangryun Lee, K. Anika Harkins, Mehrdad T. Kiani, David Doan, Martin Kunz, Andrew Doran, Lindsey A. Hanson, Seunghwa Ryu, and X. Wendy Gu. Phys. Rev. Lett. 124, 106104 DOI: https://doi.org/10.1103/PhysRevLett.124.106104 Published 13 March 2020 © 2020 American Physical Society

This paper is behind a paywall.

Tracking artificial intelligence

Researchers at Stanford University have developed an index for measuring (tracking) the progress made by artificial intelligence (AI) according to a January 9, 2018 news item on phys.org (Note: Links have been removed),

Since the term “artificial intelligence” (AI) was first used in print in 1956, the one-time science fiction fantasy has progressed to the very real prospect of driverless cars, smartphones that recognize complex spoken commands and computers that see.

In an effort to track the progress of this emerging field, a Stanford-led group of leading AI thinkers called the AI100 has launched an index that will provide a comprehensive baseline on the state of artificial intelligence and measure technological progress in the same way the gross domestic product and the S&P 500 index track the U.S. economy and the broader stock market.

For anyone curious about the AI100 initiative, I have a description of it in my Sept. 27, 2016 post highlighting the group’s first report, or you can keep on reading.

Getting back to the matter at hand, a December 21, 2017 Stanford University press release by Andrew Myers, which originated the news item, provides more detail about the AI index,

“The AI100 effort realized that in order to supplement its regular review of AI, a more continuous set of collected metrics would be incredibly useful,” said Russ Altman, a professor of bioengineering and the faculty director of AI100. “We were very happy to seed the AI Index, which will inform the AI100 as we move forward.”

The AI100 was set in motion three years ago when Eric Horvitz, a Stanford alumnus and former president of the Association for the Advancement of Artificial Intelligence, worked with his wife, Mary Horvitz, to define and endow the long-term study. Its first report, released in the fall of 2016, sought to anticipate the likely effects of AI in an urban environment in the year 2030.

Among the key findings in the new index are a dramatic increase in AI startups and investment as well as significant improvements in the technology’s ability to mimic human performance.

Baseline metrics

The AI Index tracks and measures at least 18 independent vectors in academia, industry, open-source software and public interest, plus technical assessments of progress toward what the authors call “human-level performance” in areas such as speech recognition, question-answering and computer vision – algorithms that can identify objects and activities in 2D images. Specific metrics in the index include evaluations of academic papers published, course enrollment, AI-related startups, job openings, search-term frequency and media mentions, among others.
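As for what it means to ‘track’ a vector the way the S&P 500 tracks stocks, the usual device is base-year indexing: express each year’s value relative to a chosen starting year so that very different metrics can share one scale. Here’s a minimal sketch in Python; to be clear, this is my illustration with invented numbers (chosen to end at the nine-fold and 14-fold growth figures reported below), not the AI Index’s published methodology,

```python
# Base-year indexing: scale each series so its starting year equals 1.0,
# letting unlike metrics (papers, startups, enrollment) share one chart.
# The numbers below are invented for illustration.

def index_to_base_year(values):
    """Return the series expressed as multiples of its first value."""
    base = values[0]
    return [round(v / base, 2) for v in values]

papers = [100, 150, 320, 900]   # hypothetical annual paper counts
startups = [10, 25, 60, 140]    # hypothetical active-startup counts

print(index_to_base_year(papers))    # [1.0, 1.5, 3.2, 9.0]  -> nine-fold
print(index_to_base_year(startups))  # [1.0, 2.5, 6.0, 14.0] -> 14-fold
```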

“In many ways, we are flying blind in our discussions about artificial intelligence and lack the data we need to credibly evaluate activity,” said Yoav Shoham, professor emeritus of computer science.

“The goal of the AI Index is to provide a fact-based measuring stick against which we can chart progress and fuel a deeper conversation about the future of the field,” Shoham said.

Shoham conceived of the index and assembled a steering committee including Ray Perrault from SRI International, Erik Brynjolfsson of the Massachusetts Institute of Technology and Jack Clark from OpenAI. The committee subsequently hired Calvin LeGassick as project manager.

“The AI Index will succeed only if it becomes a community effort,” Shoham said.

Although the authors say the AI Index is the first index to track either scientific or technological progress, there are many other non-financial indexes that provide valuable insight into equally hard-to-quantify fields. Examples include the Social Progress Index, the Middle East peace index and the Bangladesh empowerment index, which measure factors as wide-ranging as nutrition, sanitation, workload, leisure time, public sentiment and even public speaking opportunities.

Intriguing findings

Among the findings of this inaugural index is that the number of active AI startups has increased 14-fold since 2000. Venture capital investment has increased six times in the same period. In academia, publishing in AI has increased a similarly impressive nine times in the last 20 years while course enrollment has soared. Enrollment in the introductory AI-related machine learning course at Stanford, for instance, has grown 45-fold in the last 30 years.

In technical metrics, image and speech recognition are both approaching, if not surpassing, human-level performance. The authors noted that AI systems have excelled in such real-world applications as object detection, the ability to understand and answer questions, and the classification of photographic images of skin cancer cells.

Shoham noted that the report is still very U.S.-centric and will need a greater international presence as well as a greater diversity of voices. He said he also sees opportunities to fold in government and corporate investment in addition to the venture capital funds that are currently included.

In terms of human-level performance, the AI Index suggests that in some ways AI has already arrived. This is true in game-playing applications including chess, the Jeopardy! game show and, most recently, the game of Go. Nonetheless, the authors note that computers continue to lag considerably in the ability to generalize specific information into deeper meaning.

“AI has made truly amazing strides in the past decade,” Shoham said, “but computers still can’t exhibit the common sense or the general intelligence of even a 5-year-old.”

The AI Index was made possible by funding from AI100, Google, Microsoft and Toutiao. Data supporting the various metrics were provided by Elsevier, TrendKite, Indeed.com, Monster.com, the Google Trends Team, the Google Brain Team, Sand Hill Econometrics, VentureSource, Crunchbase, Electronic Frontier Foundation, EuroMatrix, Geoff Sutcliffe, Kevin Leyton-Brown and Holger Hoos.

You can find the AI Index here. They’re featuring their 2017 report but you can also find data (on the menu bar on the upper right side of your screen), along with a few provisos. I was curious as to whether any AI had been used to analyze the data and/or write the report. A very cursory look at the 2017 report did not answer that question. I’m fascinated by the failure to address what I think is an obvious question. It suggests that even very, very bright people can have blind spots, and I suspect that’s why the group seems quite eager to get others involved. From the 2017 AI Index Report,

As the report’s limitations illustrate, the AI Index will always paint a partial picture. For this reason, we include subjective commentary from a cross-section of AI experts. This Expert Forum helps animate the story behind the data in the report and adds interpretation the report lacks.

Finally, where the experts’ dialogue ends, your opportunity to Get Involved begins [emphasis mine]. We will need the feedback and participation of a larger community to address the issues identified in this report, uncover issues we have omitted, and build a productive process for tracking activity and progress in Artificial Intelligence. (p. 8)

Unfortunately, it’s not clear how one becomes involved. Is there a forum or do you get in touch with one of the team leaders?

I wish them good luck with their project and imagine that these minor hiccups will be dealt with in the near term.

Public domain biotechnology: biological transistors from Stanford University

Andrew Myers’ Mar. 28, 2013 article for the Stanford School of Medicine’s magazine (Inside Stanford Medicine) profiles some research which stands as a bridge between electronics and biology and could lead to biological computing,

… now a team of Stanford University bioengineers has taken computing beyond mechanics and electronics into the living realm of biology. In a paper published March 28 in Science, the team details a biological transistor made from genetic material — DNA and RNA — in place of gears or electrons. The team calls its biological transistor the “transcriptor.”

“Transcriptors are the key component behind amplifying genetic logic — akin to the transistor and electronics,” said Jerome Bonnet, PhD, a postdoctoral scholar in bioengineering and the paper’s lead author.

Here’s a description of the transcriptor (biological transistor) and biological computers (from the article),

In electronics, a transistor controls the flow of electrons along a circuit. Similarly, in biologics, a transcriptor controls the flow of a specific protein, RNA polymerase, as it travels along a strand of DNA.

“We have repurposed a group of natural proteins, called integrases, to realize digital control over the flow of RNA polymerase along DNA, which in turn allowed us to engineer amplifying genetic logic,” said Endy [Drew Endy, PhD, assistant professor of bioengineering and the paper’s senior author].

Using transcriptors, the team has created what are known in electrical engineering as logic gates that can derive true-false answers to virtually any biochemical question that might be posed within a cell.

They refer to their transcriptor-based logic gates as “Boolean Integrase Logic,” or “BIL gates” for short.

Transcriptor-based gates alone do not constitute a computer, but they are the third and final component of a biological computer that could operate within individual living cells.

The article also offers a description of Boolean logic and the workings of standard computers,

Digital logic is often referred to as “Boolean logic,” after George Boole, the mathematician who proposed the system in 1854. Today, Boolean logic typically takes the form of 1s and 0s within a computer. Answer true, gate open; answer false, gate closed. Open. Closed. On. Off. 1. 0. It’s that basic. But it turns out that with just these simple tools and ways of thinking you can accomplish quite a lot.

“AND” and “OR” are just two of the most basic Boolean logic gates. An “AND” gate, for instance, is “true” when both of its inputs are true — when “a” and “b” are true. An “OR” gate, on the other hand, is true when either or both of its inputs are true.
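Those two gates are simple enough to express in a few lines of code, which is part of what makes Boolean logic so durable. Here’s a minimal sketch in Python,

```python
# The two most basic Boolean gates, written as plain functions.
def AND(a, b):
    return a and b  # true only when both inputs are true

def OR(a, b):
    return a or b   # true when either input, or both, is true

# Print the full truth table for both gates.
for a in (False, True):
    for b in (False, True):
        print(f"a={a!s:<5} b={b!s:<5} AND={AND(a, b)!s:<5} OR={OR(a, b)}")
```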

In a biological setting, the possibilities for logic are as limitless as in electronics, Bonnet explained. “You could test whether a given cell had been exposed to any number of external stimuli — the presence of glucose and caffeine, for instance. BIL gates would allow you to make that determination and to store that information so you could easily identify those which had been exposed and which had not,” he said.
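To make Bonnet’s glucose-and-caffeine example concrete, here’s a toy model in Python. The class and its names are mine and purely hypothetical, not the team’s actual genetic design; the point is that an integrase physically flips a stretch of DNA, so the gate’s verdict can be stored rather than fleeting,

```python
# Toy model of Bonnet's exposure test: an AND gate whose answer latches,
# the way an integrase permanently flips a stretch of DNA. The class and
# its names are hypothetical, not the team's actual genetic design.

class ExposureRecorder:
    def __init__(self):
        self.recorded = False  # the "flipped DNA" bit, initially unset

    def sense(self, glucose: bool, caffeine: bool) -> None:
        # AND gate: both stimuli must be present together.
        if glucose and caffeine:
            self.recorded = True  # latch: once set, it stays set

cell = ExposureRecorder()
cell.sense(glucose=True, caffeine=False)
print(cell.recorded)  # False -- only one stimulus so far
cell.sense(glucose=True, caffeine=True)
print(cell.recorded)  # True -- exposure recorded
cell.sense(glucose=False, caffeine=False)
print(cell.recorded)  # still True -- the record persists
```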

Here’s how they created a transcriptor (from the article),

To create transcriptors and logic gates, the team used carefully calibrated combinations of enzymes — the integrases mentioned earlier — that control the flow of RNA polymerase along strands of DNA. If this were electronics, DNA is the wire and RNA polymerase is the electron.

“The choice of enzymes is important,” Bonnet said. “We have been careful to select enzymes that function in bacteria, fungi, plants and animals, so that bio-computers can be engineered within a variety of organisms.”

On the technical side, the transcriptor achieves a key similarity between the biological transistor and its semiconducting cousin: signal amplification.

Refreshingly, the team made this decision (from the article),

To bring the age of the biological computer to a much speedier reality, Endy and his team have contributed all of the BIL gates to the public domain so that others can immediately harness and improve upon the tools.

“Most of biotechnology has not yet been imagined, let alone made true. By freely sharing important basic tools everyone can work better together,” Bonnet said.

Here’s a citation and a link to the researchers’ paper in Science,

Amplifying Genetic Logic Gates by Jerome Bonnet, Peter Yin, Monica E. Ortiz, Pakpoom Subsoontorn, and Drew Endy. Science 1232758, published online 28 March 2013. DOI: 10.1126/science.1232758

This paper is behind a paywall. As for Myers’ article, it’s well worth reading for its clear explanations and forays into computing history.