Observing individual silver nanoparticles in real time

A new technique for better understanding how silver nanoparticles might affect the environment was announced in a July 30, 2018 news item on ScienceDaily,

Chemists at Ruhr-Universität Bochum have developed a new method of observing the chemical reactions of individual silver nanoparticles, which only measure a thousandth of the thickness of a human hair, in real time. The particles are used in medicine, food and sports items because they have an antibacterial and anti-inflammatory effect. However, how they react and degrade in ecological and biological systems is so far barely understood. The team in the Research Group for Electrochemistry and Nanoscale Materials showed that the nanoparticles transform into poorly soluble silver chloride particles under certain conditions. The group led by Prof Dr Kristina Tschulik reports on the results in the Journal of the American Chemical Society from July 11, 2018.

A July 30, 2018 Ruhr-University Bochum (RUB) press release (also on EurekAlert) by Julia Weiler, which originated the news item, provides more information,

Even under well-defined laboratory conditions, current research has yielded different, sometimes contradictory, results on the reaction of silver nanoparticles. “In every batch of nanoparticles, the individual properties of the particles, such as size and shape, vary,” says Kristina Tschulik, a member of the Cluster of Excellence Ruhr Explores Solvation. “With previous procedures, a myriad of particles was generally investigated at the same time, meaning that the effects of these variations could not be recorded. Or the measurements took place in a high vacuum, not under natural conditions in an aqueous solution.”

The team led by Kristina Tschulik thus developed a method that enables individual silver particles to be investigated in a natural environment. “Our aim is to be able to record the reactivity of individual particles,” explains the researcher. This requires a combination of electrochemical and spectroscopic methods. With optical and hyperspectral dark-field microscopy, the group was able to observe individual nanoparticles as visible and coloured pixels. Using the change in the colour of the pixels, or more precisely their spectral information, the researchers were able to follow what was happening in an electrochemical experiment in real time.
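At its core, the hyperspectral approach amounts to following where each particle-pixel's spectrum peaks over time. Here's a toy Python sketch of that idea; all wavelengths and spectral shapes below are invented for illustration and are not the Bochum group's data,

```python
import numpy as np

def peak_wavelength(wavelengths, spectrum):
    """Return the wavelength (nm) at which a single-particle spectrum peaks."""
    return float(wavelengths[int(np.argmax(spectrum))])

# Hypothetical data: one nanoparticle's scattering peak red-shifts over three
# frames, as might happen while silver converts to silver chloride.
wavelengths = np.linspace(400, 700, 301)  # nm, 1 nm steps

def gaussian(center):
    """A made-up single-particle scattering spectrum centred at `center` nm."""
    return np.exp(-((wavelengths - center) ** 2) / (2 * 15.0 ** 2))

frames = [gaussian(c) for c in (450.0, 480.0, 520.0)]
peaks = [peak_wavelength(wavelengths, f) for f in frames]
print(peaks)  # -> [450.0, 480.0, 520.0]
```

Tracking that peak position frame by frame is, in essence, how a colour change in a dark-field pixel becomes a real-time record of a chemical reaction.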

Degradation of the particles slowed down

In the experiment, the team replicated the oxidation of silver in the presence of chloride ions, which often takes place in ecological and biological systems. “Until now, it was generally assumed that the silver particles dissolve in the form of silver ions,” describes Kristina Tschulik. However, poorly soluble silver chloride was formed in the experiment – even if only a few chloride ions were present in the solution.

“This extends the lifespan of the nanoparticles to an extreme extent and their breakdown is slowed down in an unexpectedly drastic manner,” summarises Tschulik. “This is equally important for bodies of water and for living beings because this mechanism could cause the heavy metal silver to accumulate locally, which can be toxic for many organisms.”

Further development planned

The Bochum-based group now wants to further improve its technology for analysing individual nanoparticles in order to better understand the ageing mechanisms of such particles. The researchers thus want to obtain more information about the biocompatibility of the silver particles and the lifespan and ageing of catalytically active nanoparticles in the future.

Here’s a link to and a citation for the paper,

Simultaneous Opto- and Spectro-Electrochemistry: Reactions of Individual Nanoparticles Uncovered by Dark-Field Microscopy by Kevin Wonner, Mathies V. Evers, and Kristina Tschulik. J. Am. Chem. Soc., Article ASAP DOI: 10.1021/jacs.8b02367 Publication Date (Web): July 11, 2018

Copyright © 2018 American Chemical Society

This paper is behind a paywall.

Bringing memristors to the masses and cutting down on energy use

One of my earliest posts featuring memristors (May 9, 2008) focused on their potential for energy savings but since then most of my postings feature research into their application in the field of neuromorphic (brainlike) computing. (For a description and abbreviated history of the memristor go to this page on my Nanotech Mysteries Wiki.)

In a sense this July 30, 2018 news item on Nanowerk is a return to the beginning,

A new way of arranging advanced computer components called memristors on a chip could enable them to be used for general computing, which could cut energy consumption by a factor of 100.

This would improve performance in low power environments such as smartphones or make for more efficient supercomputers, says a University of Michigan researcher.

“Historically, the semiconductor industry has improved performance by making devices faster. But although the processors and memories are very fast, they can’t be efficient because they have to wait for data to come in and out,” said Wei Lu, U-M professor of electrical and computer engineering and co-founder of memristor startup Crossbar Inc.

Memristors might be the answer. Named as a portmanteau of memory and resistor, they can be programmed to have different resistance states–meaning they store information as resistance levels. These circuit elements enable memory and processing in the same device, cutting out the data transfer bottleneck experienced by conventional computers in which the memory is separate from the processor.

A July 30, 2018 University of Michigan news release (also on EurekAlert), which originated the news item, expands on the theme,

… unlike ordinary bits, which are 1 or 0, memristors can have resistances that are on a continuum. Some applications, such as computing that mimics the brain (neuromorphic), take advantage of the analog nature of memristors. But for ordinary computing, trying to differentiate among small variations in the current passing through a memristor device is not precise enough for numerical calculations.

Lu and his colleagues got around this problem by digitizing the current outputs—defining current ranges as specific bit values (i.e., 0 or 1). The team was also able to map large mathematical problems into smaller blocks within the array, improving the efficiency and flexibility of the system.
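The digitization step can be pictured as binning a noisy analog readout current into discrete bit values. A minimal sketch, with a purely hypothetical threshold and current values (the actual current ranges aren't given in the release),

```python
# Hypothetical illustration: map a continuous memristor readout current to a
# bit by defining current ranges, as the news release describes.
def current_to_bit(current_ua, threshold_ua=5.0):
    """Digitize an analog current (microamps) into a 0 or 1."""
    return 1 if current_ua >= threshold_ua else 0

readouts = [0.8, 4.9, 5.1, 9.7]  # noisy analog currents from four cells
bits = [current_to_bit(i) for i in readouts]
print(bits)  # -> [0, 0, 1, 1]
```

Small variations within a range no longer matter once the ranges are defined, which is what makes the analog device usable for exact numerical work.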

Computers with these new blocks, which the researchers call “memory-processing units,” could be particularly useful for implementing machine learning and artificial intelligence algorithms. They are also well suited to tasks that are based on matrix operations, such as simulations used for weather prediction. The simplest mathematical matrices, akin to tables with rows and columns of numbers, can map directly onto the grid of memristors.

The memristor array situated on a circuit board. Credit: Mohammed Zidan, Nanoelectronics group, University of Michigan.

Once the memristors are set to represent the numbers, operations that multiply and sum the rows and columns can be taken care of simultaneously, with a set of voltage pulses along the rows. The current measured at the end of each column contains the answers. A typical processor, in contrast, would have to read the value from each cell of the matrix, perform multiplication, and then sum up each column in series.

“We get the multiplication and addition in one step. It’s taken care of through physical laws. We don’t need to manually multiply and sum in a processor,” Lu said.
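The "one step" Lu describes is the classic analog multiply-accumulate of a crossbar: Ohm's law multiplies each row voltage by a cell's conductance, and Kirchhoff's current law sums the products down each column. A toy NumPy sketch of that physics (all values invented),

```python
import numpy as np

# Each cell's conductance G[i][j] stores a matrix entry; applying voltages V
# along the rows yields a column current I[j] = sum_i V[i] * G[i][j],
# i.e. the matrix-vector product, in a single physical step.
G = np.array([[1.0, 2.0],
              [3.0, 4.0]])   # conductances (siemens) = the stored matrix
V = np.array([0.5, 1.0])     # voltage pulses applied along the rows

I = V @ G                    # currents measured at the column outputs
print(I)                     # column currents: 3.5 and 5.0
```

A conventional processor would read each cell, multiply, and sum in series; here the answer simply appears at the bottom of each column.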

His team chose to solve partial differential equations as a test for a 32×32 memristor array—which Lu imagines as just one block of a future system. These equations, including those behind weather forecasting, underpin many problems in science and engineering but are very challenging to solve. The difficulty comes from the complicated forms and multiple variables needed to model physical phenomena.

When solving partial differential equations exactly is impossible, solving them approximately can require supercomputers. These problems often involve very large matrices of data, so the memory-processor communication bottleneck is neatly solved with a memristor array. The equations Lu’s team used in their demonstration simulated a plasma reactor, such as those used for integrated circuit fabrication.
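To see why PDEs reduce to the matrix operations a crossbar accelerates, consider the simplest textbook case: discretizing a one-dimensional Poisson equation with finite differences turns it into a tridiagonal linear system. This is a generic illustration, not the paper's actual solver,

```python
import numpy as np

n, h = 5, 1.0 / 6   # interior grid points and spacing on [0, 1]
# Finite differences turn -u'' = f (with u = 0 at both ends) into the
# linear system A @ u = h^2 * f, where A is tridiagonal.
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
f = np.ones(n)                  # constant source term
u = np.linalg.solve(A, h**2 * f)
print(u)                        # symmetric bump, largest at the midpoint
```

For this equation the discrete answer matches the exact solution u(x) = x(1−x)/2 at the grid points; in a memristor system, the repeated matrix operations inside such solves are what the array would handle in analog.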

This work is described in a study, “A general memristor-based partial differential equation solver,” published in the journal Nature Electronics.

It was supported by the Defense Advanced Research Projects Agency (DARPA) (grant no. HR0011-17-2-0018) and by the National Science Foundation (NSF) (grant no. CCF-1617315).

Here’s a link and a citation for the paper,

A general memristor-based partial differential equation solver by Mohammed A. Zidan, YeonJoo Jeong, Jihang Lee, Bing Chen, Shuo Huang, Mark J. Kushner & Wei D. Lu. Nature Electronics, volume 1, pages 411–420 (2018) DOI: https://doi.org/10.1038/s41928-018-0100-6 Published: 13 July 2018

This paper is behind a paywall.

For the curious, Dr. Lu’s startup company, Crossbar, can be found here.

Artificial intelligence (AI) brings together International Telecommunications Union (ITU) and World Health Organization (WHO) and AI outperforms animal testing

Following on my May 11, 2018 posting about the International Telecommunications Union (ITU) and the 2018 AI for Good Global Summit in mid-May, there’s an announcement. My other bit of AI news concerns animal testing.

Leveraging the power of AI for health

A July 24, 2018 ITU press release (a shorter version was received via email) announces a joint initiative focused on improving health,

Two United Nations specialized agencies are joining forces to expand the use of artificial intelligence (AI) in the health sector to a global scale, and to leverage the power of AI to advance health for all worldwide. The International Telecommunication Union (ITU) and the World Health Organization (WHO) will work together through the newly established ITU Focus Group on AI for Health to develop an international “AI for health” standards framework and to identify use cases of AI in the health sector that can be scaled-up for global impact. The group is open to all interested parties.

“AI could help patients to assess their symptoms, enable medical professionals in underserved areas to focus on critical cases, and save great numbers of lives in emergencies by delivering medical diagnoses to hospitals before patients arrive to be treated,” said ITU Secretary-General Houlin Zhao. “ITU and WHO plan to ensure that such capabilities are available worldwide for the benefit of everyone, everywhere.”

The demand for such a platform was first identified by participants of the second AI for Good Global Summit held in Geneva, 15-17 May 2018. During the summit, AI and the health sector were recognized as a very promising combination, and it was announced that AI-powered technologies such as skin disease recognition and diagnostic applications based on symptom questions could be deployed on six billion smartphones by 2021.

The ITU Focus Group on AI for Health is coordinated through ITU’s Telecommunications Standardization Sector – which works with ITU’s 193 Member States and more than 800 industry and academic members to establish global standards for emerging ICT innovations. It will lead an intensive two-year analysis of international standardization opportunities towards delivery of a benchmarking framework of international standards and recommendations by ITU and WHO for the use of AI in the health sector.

“I believe the subject of AI for health is both important and useful for advancing health for all,” said WHO Director-General Tedros Adhanom Ghebreyesus.

The ITU Focus Group on AI for Health will also engage researchers, engineers, practitioners, entrepreneurs and policy makers to develop guidance documents for national administrations, to steer the creation of policies that ensure the safe, appropriate use of AI in the health sector.

“1.3 billion people have a mobile phone and we can use this technology to provide AI-powered health data analytics to people with limited or no access to medical care. AI can enhance health by improving medical diagnostics and associated health intervention decisions on a global scale,” said Thomas Wiegand, ITU Focus Group on AI for Health Chairman, and Executive Director of the Fraunhofer Heinrich Hertz Institute, as well as professor at TU Berlin.

He added, “The health sector is in many countries among the largest economic sectors or one of the fastest-growing, signalling a particularly timely need for international standardization of the convergence of AI and health.”

Data analytics are certain to form a large part of the ITU focus group’s work. AI systems are proving increasingly adept at interpreting laboratory results and medical imagery and extracting diagnostically relevant information from text or complex sensor streams.

As part of this, the ITU Focus Group for AI for Health will also produce an assessment framework to standardize the evaluation and validation of AI algorithms — including the identification of structured and normalized data to train AI algorithms. It will develop open benchmarks with the aim of these becoming international standards.

The ITU Focus Group for AI for Health will report to the ITU standardization expert group for multimedia, Study Group 16.

I got curious about Study Group 16 (from the Study Group 16 at a glance webpage),

Study Group 16 leads ITU’s standardization work on multimedia coding, systems and applications, including the coordination of related studies across the various ITU-T SGs. It is also the lead study group on ubiquitous and Internet of Things (IoT) applications; telecommunication/ICT accessibility for persons with disabilities; intelligent transport system (ITS) communications; e-health; and Internet Protocol television (IPTV).

Multimedia is at the core of the most recent advances in information and communication technologies (ICTs) – especially when we consider that most innovation today is agnostic of the transport and network layers, focusing rather on the higher OSI model layers.

SG16 is active in all aspects of multimedia standardization, including terminals, architecture, protocols, security, mobility, interworking and quality of service (QoS). It focuses its studies on telepresence and conferencing systems; IPTV; digital signage; speech, audio and visual coding; network signal processing; PSTN modems and interfaces; facsimile terminals; and ICT accessibility.

I wonder which group deals with artificial intelligence and, possibly, robots.

Chemical testing without animals

Thomas Hartung, professor of environmental health and engineering at Johns Hopkins University (US), describes in his July 25, 2018 essay (written for The Conversation) on phys.org the situation where chemical testing is concerned,

Most consumers would be dismayed with how little we know about the majority of chemicals. Only 3 percent of industrial chemicals – mostly drugs and pesticides – are comprehensively tested. Most of the 80,000 to 140,000 chemicals in consumer products have not been tested at all or just examined superficially to see what harm they may do locally, at the site of contact and at extremely high doses.

I am a physician and former head of the European Center for the Validation of Alternative Methods of the European Commission (2002-2008), and I am dedicated to finding faster, cheaper and more accurate methods of testing the safety of chemicals. To that end, I now lead a new program at Johns Hopkins University to revamp the safety sciences.

As part of this effort, we have now developed a computer method of testing chemicals that could save more than US$1 billion annually and more than 2 million animals. Especially in times when the government is rolling back regulations on the chemical industry, new methods to identify dangerous substances are critical for human and environmental health.

Having written on the topic of alternatives to animal testing on a number of occasions (my December 26, 2014 posting provides an overview of sorts), I was particularly interested to see this in Hartung’s July 25, 2018 essay on The Conversation (Note: Links have been removed),

Following the vision of Toxicology for the 21st Century, a movement led by U.S. agencies to revamp safety testing, important work was carried out by my Ph.D. student Tom Luechtefeld at the Johns Hopkins Center for Alternatives to Animal Testing. Teaming up with Underwriters Laboratories, we have now leveraged an expanded database and machine learning to predict toxic properties. As we report in the journal Toxicological Sciences, we developed a novel algorithm and database for analyzing chemicals and determining their toxicity – what we call read-across structure activity relationship, RASAR.

This graphic reveals a small part of the chemical universe. Each dot represents a different chemical. Chemicals that are close together have similar structures and often properties. Thomas Hartung, CC BY-SA

To do this, we first created an enormous database with 10 million chemical structures by adding more public databases filled with chemical data, which, if you crunch the numbers, represent 50 trillion pairs of chemicals. A supercomputer then created a map of the chemical universe, in which chemicals are positioned close together if they share many structures in common and far apart where they don’t. Most of the time, any molecule close to a toxic molecule is also dangerous, and the prediction is even more reliable when many toxic substances are close by and harmless substances are far away. Any substance can now be analyzed by placing it into this map.

If this sounds simple, it’s not. It requires half a billion mathematical calculations per chemical to see where it fits. The chemical neighborhood focuses on 74 characteristics which are used to predict the properties of a substance. Using the properties of the neighboring chemicals, we can predict whether an untested chemical is hazardous. For example, for predicting whether a chemical will cause eye irritation, our computer program not only uses information from similar chemicals, which were tested on rabbit eyes, but also information for skin irritation. This is because what typically irritates the skin also harms the eye.
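The read-across idea can be sketched as nearest-neighbour prediction over binary structural fingerprints, with similarity measured by, say, the Jaccard index. The fingerprints, labels, and voting rule below are invented for illustration; the actual RASAR models are vastly larger and more sophisticated,

```python
import numpy as np

def jaccard(a, b):
    """Similarity between two binary structural fingerprints."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    return (a & b).sum() / (a | b).sum()

def read_across(query, fingerprints, labels, k=3):
    """Predict toxicity from the k most structurally similar chemicals."""
    sims = [jaccard(query, fp) for fp in fingerprints]
    nearest = np.argsort(sims)[-k:]          # indices of the k closest
    return int(round(np.mean([labels[i] for i in nearest])))  # majority vote

# Invented toy data: 1 = toxic, 0 = harmless.
fps = [[1, 1, 0, 0], [1, 1, 1, 0], [0, 0, 1, 1], [0, 1, 1, 1]]
tox = [1, 1, 0, 0]
print(read_across([1, 1, 0, 1], fps, tox, k=3))  # -> 1 (predicted toxic)
```

The untested chemical inherits the verdict of its structural neighbourhood, which is the "placing it into this map" step Hartung describes.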

How well does the computer identify toxic chemicals?

This method will be used for new untested substances. However, if you do this for chemicals for which you actually have data, and compare prediction with reality, you can test how well this prediction works. We did this for 48,000 chemicals that were well characterized for at least one aspect of toxicity, and we found the toxic substances in 89 percent of cases.

This is clearly more accurate than the corresponding animal tests, which only yield the correct answer 70 percent of the time. The RASAR shall now be formally validated by an interagency committee of 16 U.S. agencies, including the EPA [Environmental Protection Agency] and FDA [Food and Drug Administration], that will challenge our computer program with chemicals for which the outcome is unknown. This is a prerequisite for acceptance and use in many countries and industries.

The potential is enormous: The RASAR approach is in essence based on chemical data that was registered for the 2010 and 2013 REACH [Registration, Evaluation, Authorizations and Restriction of Chemicals] deadlines [in Europe]. If our estimates are correct and chemical producers would have not registered chemicals after 2013, and instead used our RASAR program, we would have saved 2.8 million animals and $490 million in testing costs – and received more reliable data. We have to admit that this is a very theoretical calculation, but it shows how valuable this approach could be for other regulatory programs and safety assessments.

In the future, a chemist could check RASAR before even synthesizing their next chemical to check whether the new structure will have problems. Or a product developer can pick alternatives to toxic substances to use in their products. This is a powerful technology, which is only starting to show all its potential.

It’s been my experience that claims of having led a movement (Toxicology for the 21st Century) are often contested, with many others competing for the title of ‘leader’ or ‘first’. That said, this RASAR approach seems very exciting, especially in light of the skepticism about limiting and/or making animal testing unnecessary noted in my December 26, 2014 posting; notably, that skepticism came from someone I thought knew better.

Here’s a link to and a citation for the paper mentioned in Hartung’s essay,

Machine learning of toxicological big data enables read-across structure activity relationships (RASAR) outperforming animal test reproducibility by Thomas Luechtefeld, Dan Marsh, Craig Rowlands, Thomas Hartung. Toxicological Sciences, kfy152, https://doi.org/10.1093/toxsci/kfy152 Published: 11 July 2018

This paper is open access.

Quantum back action and devil’s play

I always appreciate a reference to James Clerk Maxwell’s demon thought experiment (you can find out about it in the Maxwell’s demon Wikipedia entry). This time it comes from physicist Kater Murch in a July 23, 2018 Washington University in St. Louis (WUSTL) news release (published July 25, 2018 on EurekAlert) written by Brandie Jefferson (offering a good explanation of the thought experiment and more),

Thermodynamics is one of the most human of scientific enterprises, according to Kater Murch, associate professor of physics in Arts & Sciences at Washington University in St. Louis.

“It has to do with our fascination of fire and our laziness,” he said. “How can we get fire” — or heat — “to do work for us?”

Now, Murch and colleagues have taken that most human enterprise down to the intangible quantum scale — that of ultra low temperatures and microscopic systems — and discovered that, as in the macroscopic world, it is possible to use information to extract work.

There is a catch, though: Some information may be lost in the process.

“We’ve experimentally confirmed the connection between information in the classical case and the quantum case,” Murch said, “and we’re seeing this new effect of information loss.”

The results were published in the July 20 [2018] issue of Physical Review Letters.

The international team included Eric Lutz of the University of Stuttgart; J. J. Alonso of the University of Erlangen-Nuremberg; Alessandro Romito of Lancaster University; and Mahdi Naghiloo, a Washington University graduate research assistant in physics.

That we can get energy from information on a macroscopic scale was most famously illustrated in a thought experiment known as Maxwell’s Demon. [emphasis mine] The “demon” presides over a box filled with molecules. The box is divided in half by a wall with a door. If the demon knows the speed and direction of all of the molecules, it can open the door when a fast-moving molecule is moving from the left half of the box to the right side, allowing it to pass. It can do the same for slow particles moving in the opposite direction, opening the door when a slow-moving molecule is approaching from the right, headed left.

After a while, all of the quickly-moving molecules are on the right side of the box. Faster motion corresponds to higher temperature. In this way, the demon has created a temperature imbalance, where one side of the box is hotter. That temperature imbalance can be turned into work — to push on a piston as in a steam engine, for instance. At first the thought experiment seemed to show that it was possible to create a temperature difference without doing any work, and since temperature differences allow you to extract work, one could build a perpetual motion machine — a violation of the second law of thermodynamics.
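The demon's trick is easy to simulate. In the toy Monte Carlo below (all numbers invented), sorting molecules by speed alone, using nothing but information, produces two reservoirs at different effective temperatures,

```python
import random

random.seed(1)
# Toy Maxwell's demon: the demon knows each molecule's speed and lets fast
# ones collect on the right and slow ones on the left.
speeds = [abs(random.gauss(0, 1)) for _ in range(10_000)]
median = sorted(speeds)[len(speeds) // 2]

left  = [s for s in speeds if s <  median]   # slow molecules
right = [s for s in speeds if s >= median]   # fast molecules

# Temperature is proportional to mean kinetic energy (~ speed squared).
t_left  = sum(s * s for s in left)  / len(left)
t_right = sum(s * s for s in right) / len(right)
print(t_right > t_left)  # the demon's information became a temperature gap
```

No work was done on the molecules themselves; the apparent free lunch is paid for by the physical cost of the demon's information, which is the point of the thought experiment.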

“Eventually, scientists realized that there’s something about the information that the demon has about the molecules,” Murch said. “It has a physical quality like heat and work and energy.”

His team wanted to know if it would be possible to use information to extract work in this way on a quantum scale, too, but not by sorting fast and slow molecules. If a particle is in an excited state, they could extract work by moving it to a ground state. (If it was in a ground state, they wouldn’t do anything and wouldn’t expend any work).

But they wanted to know what would happen if the quantum particles were in an excited state and a ground state at the same time, analogous to being fast and slow at the same time. In quantum physics, this is known as a superposition.

“Can you get work from information about a superposition of energy states?” Murch asked. “That’s what we wanted to find out.”

There’s a problem, though. On a quantum scale, getting information about particles can be a bit … tricky.

“Every time you measure the system, it changes that system,” Murch said. And if they measured the particle to find out exactly what state it was in, it would revert to one of two states: excited, or ground.

This effect is called quantum backaction. To get around it, when looking at the system, researchers (who were the “demons”) didn’t take a long, hard look at their particle. Instead, they took what was called a “weak observation.” It still influenced the state of the superposition, but not enough to move it all the way to an excited state or a ground state; it was still in a superposition of energy states. This observation was enough, though, to allow the researchers to track, with fairly high accuracy, exactly what superposition the particle was in — and this is important, because the way the work is extracted from the particle depends on what superposition state it is in.
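A standard textbook toy model of such a weak measurement (not the team's actual experiment) is a noisy Gaussian readout of a qubit's energy followed by a Bayesian update of the superposition amplitudes: with a sufficiently noisy detector, the state is nudged toward one outcome without fully collapsing,

```python
import numpy as np

def weak_measure(a, b, sigma=2.0, rng=np.random.default_rng(0)):
    """One weak measurement of energy on the state a|excited> + b|ground>.

    A large sigma means a noisy detector: the record r barely distinguishes
    excited (+1) from ground (-1), so the superposition is nudged, not collapsed.
    """
    z = 1.0 if rng.random() < abs(a) ** 2 else -1.0   # Born-rule "true" value
    r = z + sigma * rng.standard_normal()             # noisy measurement record
    # Bayesian update of the amplitudes given the record r:
    a2 = a * np.exp(-((r - 1) ** 2) / (4 * sigma ** 2))
    b2 = b * np.exp(-((r + 1) ** 2) / (4 * sigma ** 2))
    norm = np.hypot(abs(a2), abs(b2))
    return a2 / norm, b2 / norm, r

a, b = np.sqrt(0.5), np.sqrt(0.5)   # equal superposition
a, b, r = weak_measure(a, b)
print(abs(a) ** 2)  # still strictly between 0 and 1: no full collapse
```

Each weak peek moves the estimated superposition a little, which is how the "demon" can track the state accurately over many such measurements while never forcing a full collapse.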

To get information, even using the weak observation method, the researchers still had to take a peek at the particle, which meant they needed light. So they sent some photons in, and observed the photons that came back.

“But the demon misses some photons,” Murch said. “It only gets about half. The other half are lost.” But — and this is the key — even though the researchers didn’t see the other half of the photons, those photons still interacted with the system, which means they still had an effect on it. The researchers had no way of knowing what that effect was.

They took a weak measurement and got some information, but because of quantum backaction, they might end up knowing less than they did before the measurement. On the balance, that’s negative information.

And that’s weird.

“Do the rules of thermodynamics for a macroscopic, classical world still apply when we talk about quantum superposition?” Murch asked. “We found that yes, they hold, except there’s this weird thing. The information can be negative.

“I think this research highlights how difficult it is to build a quantum computer,” Murch said.

“For a normal computer, it just gets hot and we need to cool it. In the quantum computer you are always at risk of losing information.”

Here’s a link to and a citation for the paper,

Information Gain and Loss for a Quantum Maxwell’s Demon by M. Naghiloo, J. J. Alonso, A. Romito, E. Lutz, and K. W. Murch. Phys. Rev. Lett. 121, 030604 (Vol. 121, Iss. 3 — 20 July 2018) DOI: https://doi.org/10.1103/PhysRevLett.121.030604 Published 17 July 2018

© 2018 American Physical Society

This paper is behind a paywall.

Watch a Physics Nobel Laureate make art on February 26, 2019 at Mobile World Congress 19 in Barcelona, Spain

Konstantin (Kostya) Novoselov (Nobel Prize in Physics 2010) strikes out artistically, again. The last time was in 2018 (see my August 13, 2018 posting about Novoselov’s project with artist Mary Griffiths).

This time around, Novoselov and artist, Kate Daudy, will be creating an art piece during a demonstration at the Mobile World Congress 19 (MWC 19) in Barcelona, Spain. From a February 21, 2019 news item on Azonano,

Novoselov is best known for his groundbreaking experiments on graphene, which is lightweight, flexible, stronger than steel, and more conductive than copper. For this work, Professors Andre Geim and Kostya Novoselov were awarded the Nobel Prize in Physics in 2010. Moreover, Novoselov is one of the founding principal researchers of the Graphene Flagship, a €1 billion research project funded by the European Commission.

At MWC 2019, Novoselov will join forces with British textile artist Kate Daudy, a collaboration in keeping with his ongoing interest in art projects. During the show, the pair will produce a piece of art using materials printed with embedded graphene. The installation will be named “Everything is Connected,” the slogan of the Graphene Flagship and reflective of the themes at MWC 2019.

The demonstration will be held on Tuesday, February 26th, 2019 at 11:30 CET in the Graphene Pavilion, an area devoted to showcasing inventions developed with funding from the Graphene Flagship. Apart from the art demonstration, exhibitors in the Graphene Pavilion will demonstrate 26 modern graphene-based prototypes and devices that will revolutionize the future of telecommunications, mobile phones, home technology, and wearables.

A February 20, 2019 University of Manchester press release, which originated the news item, goes on to describe what might be called the real point of this exercise,

Interactive demonstrations include a selection of health-related wearable technologies, which will be exhibited in the ‘wearables of the future’ area. Prototypes in this zone include graphene-enabled pressure sensing insoles, which have been developed by Graphene Flagship researchers at the University of Cambridge to accurately identify problematic walking patterns in wearers.

Another prototype will demonstrate how graphene can be used to reduce heat in mobile phone batteries, thereby prolonging their lifespan. In fact, the material required for this invention is the same one that will be used during the art installation demonstration.

Andrea Ferrari, Science and Technology Officer and Chair of the management panel of the Graphene Flagship said: “Graphene and related layered materials have steadily progressed from fundamental to applied research and from the lab to the factory floor. Mobile World Congress is a prime opportunity for the Graphene Flagship to showcase how the European Commission’s investment in research is beginning to create tangible products and advanced prototypes. Outreach is also part of the Graphene Flagship mission and the interplay between graphene, culture and art has been explored by several Flagship initiatives over the years. This unique live exhibition of Kostya is a first for the Flagship and the Mobile World Congress, and I invite everybody to attend.”

More information on the Graphene Pavilion, the prototypes on show and the interactive demonstrations at MWC 2019 can be found on the Graphene Flagship website. Alternatively, contact the Graphene Flagship directly at press@graphene-flagship.eu.

The Novoselov/Daudy project sounds as if they’ve drawn inspiration from performance art practices. In any case, it seems like a creative and fun way to engage the audience. For anyone curious about Kate Daudy‘s work,

[downloaded from https://katedaudy.com/]

‘Superconductivity: The Musical!’ wins the 2018 Dance Your Ph.D. competition

I can’t believe that October 24, 2011 was the last time the Dance Your Ph.D. competition was featured here. Time flies, eh? Here’s the 2018 contest winner’s submission, Superconductivity: The Musical! (Note: This video is over 11 mins. long),

A February 17, 2019 CBC (Canadian Broadcasting Corporation) news item introduces the video’s writer, producer, musician, and scientist,

Swing dancing. Songwriting. And theoretical condensed matter physics.

It’s a unique person who can master all three, but a University of Alberta PhD student has done all that and taken it one step further by making a rollicking music video about his academic pursuits — and winning an international competition for his efforts.

Pramodh Senarath Yapa is the winner of the 2018 Dance Your PhD contest, which challenges scientists around the world to explain their research through a jargon-free medium: dance.

The prize is $1,000 and “immortal geek fame.”

Yapa’s video features his friends twirling, swinging and touch-stepping their way through an explanation of his graduate research, called “Non-Local Electrodynamics of Superconducting Wires: Implications for Flux Noise and Inductance.”

Jennifer Ouelette’s February 17, 2019 posting for the ars Technica blog offers more detail (Note: A link has been removed),

Yapa’s research deals with how matter behaves when it’s cooled to very low temperatures, when quantum effects kick in—such as certain metals becoming superconductive, or capable of conducting electricity with zero resistance. That’s useful for any number of practical applications. D-Wave Systems [a company located in metro Vancouver {Canada}], for example, is building quantum computers using loops of superconducting wire. For his thesis, “I had to use the theory of superconductivity to figure out how to build a better quantum computer,” said Yapa.

Condensed matter theory (the precise description of Yapa’s field of research) is a notoriously tricky subfield to make palatable for a non-expert audience. “There isn’t one unifying theory or a single tool that we use,” he said. “Condensed matter theorists study a million different things using a million different techniques.”

His conceptual breakthrough came about when he realized electrons were a bit like “unsociable people” who find joy when they pair up with other electrons. “You can imagine electrons as a free gas, which means they don’t interact with each other,” he said. “The theory of superconductivity says they actually form pairs when cooled below a certain temperature. That was the ‘Eureka!’ moment, when I realized I could totally use swing dancing.”

John Bohannon’s Feb. 15, 2019 article for Science (magazine) offers an update on Yapa’s research interests (it seems that Yapa was dancing his master’s degree) and more information about the contest itself,

…

“I remember hearing about Dance Your Ph.D. many years ago and being amazed at all the entries,” Yapa says. “This is definitely a longtime dream come true.” His research, meanwhile, has evolved from superconductivity—which he pursued at the University of Victoria in Canada, where he completed a master’s degree—to the physics of superfluids, the focus of his Ph.D. research at the University of Alberta.

This is the 11th year of Dance Your Ph.D. hosted by Science and AAAS. The contest challenges scientists around the world to explain their research through the most jargon-free medium available: interpretive dance.

“Most people would not normally think of interpretive dance as a tool for scientific communication,” says artist Alexa Meade, one of the judges of the contest. “However, the body can express conceptual thoughts through movement in ways that words and data tables cannot. The results are both artfully poetic and scientifically profound.”

Getting back to the February 17, 2019 CBC news item,

Yapa describes his video, filmed in Victoria where he earned his master’s degree, as a “three act, mini-musical.”

“I envisioned it as talking about the social lives of electrons,” he said. “The electrons start out in a normal metal, at normal temperatures….We say these electrons are non-interacting. They don’t talk to each other. Electrons ignore each other and are very unsociable.”

The electrons — represented by dancers wearing saddle oxfords, poodle skirts, vests and suspenders — shuffle up the dance floor by themselves.

In the second act, the metal is cooled.

“The electrons become very unhappy about being alone. They want to find a partner, some companionship for the cold times,” he said.

That’s when the electrons join up into something called Cooper pairs.

The dancers join together, moving to lyrics like, “If we peek/the Coopers are cheek-to-cheek.”

In the final act, Yapa gets his dancers to demonstrate what happens when the Cooper pairs meet the impurities of the materials they’re moving in. All of a sudden, a group of black-leather-clad thugs move onto the dance floor.

“The Cooper pairs come dancing near these impurities and they’re like these crotchety old people yelling and shaking their fists at these young dancers,” Yapa explained.

Yapa’s entry to the annual contest swept past 49 other contestants to earn him the win. The competition is sponsored by Science magazine and the American Association for the Advancement of Science.

Congratulations to Pramodh Senarath Yapa.

An artistic feud over the blackest black (a coating material)

This artistic feud has its roots in a nanotechnology-enabled coating material known as Vantablack. Surrey Nanosystems in the UK sent me an announcement which I featured here in a March 14, 2016 posting. About one month later (in an April 16, 2016 posting regarding risks and an artistic controversy), I recounted the story of the controversy, which resulted from the company’s exclusive deal with artist Sir Anish Kapoor (scroll down the post about 60% of the way to ‘Anish Kapoor and his exclusive rights to Vantablack’).

Apparently, the controversy led to an artistic feud between artists Stuart Semple and Kapoor. Outraged by the notion that only Kapoor could have access to the world’s blackest black, Semple created the world’s pinkest pink and stipulated that any artist in the world could have access to this colour—except Anish Kapoor.

Kapoor’s response can be seen in a January 30, 2019 article by Sarah Cascone for artnet.com,

… Semple started selling what he called “the world’s pinkest pink, available to anyone who wasn’t Kapoor.”

“I wanted to make a point about elitism and self-expression and the fact that everybody should be able to make art,” Semple said. But within weeks, “tragedy struck. Anish Kapoor got our pink! And he dipped his middle finger in it and put a picture on Instagram!”

[downloaded from http://www.artlyst.com/wp-content/uploads/2016/10/anish-kapoor-pink-1200x600_c.jpg]

Cascone’s article, which explores the history of the feud in greater detail, also announces the latest installment (Note: Links have been removed),

In the battle over artistic access to the world’s blackest blacks, Stuart Semple isn’t backing down. The British artist, who took exception to Anish Kapoor’s exclusive contract to use Vantablack, the world’s blackest black substance, just launched a Kickstarter to produce a super dark paint of his own—and it has now been fully funded.

Jesus Diaz’s February 1, 2019 article for Fast Company provides some general technical details (Note: A link has been removed),

… Semple decided to team up with paint makers and about 1,000 artists to develop and test a competitor to Vantablack. His first version, Black 2.0, wasn’t quite as black as Vantablack, since it only absorbed 95% of the visible light (Vantablack absorbs about 99%).

Now, Black 3.0 is out and available on Kickstarter for about $32 per 150ml tube. According to Semple, it is the blackest, mattest, flattest acrylic paint available on the planet, capturing up to 99% of all the visible spectrum radiation. The paint is based on a new pigment called Black Magick, whose exact composition they aren’t disclosing. Black 3.0 is made up of this pigment, combined with a custom acrylic polymer. Semple and his colleagues claim that the polymer “is special because it has more available bonds than any other acrylic polymer being used in paints,” allowing more pigment density. The paint is then finished with what they claim are new “nano-mattifiers,” which remove any shine from the paint. Unlike Vantablack, the resulting paint is soluble in water and nontoxic. [emphasis mine]

I wonder what a ‘nano-mattifier’ might be. Regardless, I’m glad to see this new black is (with a nod to my April 16, 2016 posting about risks and this artistic controversy) nontoxic.
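
Out of curiosity, here’s some back-of-the-envelope arithmetic (mine, not Semple’s or Diaz’s) on what those absorption percentages actually mean for the light a viewer sees,

```python
# Illustrative arithmetic only: the absorption figures are the ones
# quoted in the articles above; everything else is my own sketch.
coatings = {
    "Black 2.0": 0.95,   # absorbs ~95% of visible light
    "Black 3.0": 0.99,   # claimed to capture up to ~99%
    "Vantablack": 0.99,  # "about 99%" per the article
}

# What each coating reflects is just whatever it doesn't absorb.
reflectance = {name: 1.0 - absorbed for name, absorbed in coatings.items()}
for name, r in reflectance.items():
    print(f"{name}: reflects {r:.0%} of incident visible light")

# Going from 95% to 99% absorption cuts reflected light five-fold,
# which is why a few percentage points read as dramatically "blacker".
print(round(reflectance["Black 2.0"] / reflectance["Black 3.0"], 1))  # 5.0
```

In other words, the gap between Black 2.0 and Black 3.0 sounds small as a percentage but is large perceptually.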

Semple’s ‘blackest black paint’ Kickstarter campaign can be found here. It ends on March 22, 2019 at 1:01 am PDT. The goal is $42,755 in Canadian dollars (CAD) and, as I write this, they currently have $473,062 CAD in pledges.

I don’t usually embed videos that run over 5 mins. but Stuart Semple is very appealing in at least two senses of the word,

Thin-film electronic stickers for the Internet of Things (IoT)

This research from Purdue University (Indiana, US) and the University of Virginia (US) increases and improves the interactivity between objects in what’s called the Internet of Things (IoT).

Caption: Electronic stickers can turn ordinary toy blocks into high-tech sensors within the ‘internet of things.’ Credit: Purdue University image/Chi Hwan Lee

From a July 16, 2018 news item on ScienceDaily,

Billions of objects ranging from smartphones and watches to buildings, machine parts and medical devices have become wireless sensors of their environments, expanding a network called the “internet of things.”

As society moves toward connecting all objects to the internet — even furniture and office supplies — the technology that enables these objects to communicate and sense each other will need to scale up.

Researchers at Purdue University and the University of Virginia have developed a new fabrication method that makes tiny, thin-film electronic circuits peelable from a surface. The technique not only eliminates several manufacturing steps and the associated costs, but also allows any object to sense its environment or be controlled through the application of a high-tech sticker.

Eventually, these stickers could also facilitate wireless communication. …

A July 16, 2018 Purdue University news release (also on EurekAlert), which originated the news item, explains more,

“We could customize a sensor, stick it onto a drone, and send the drone to dangerous areas to detect gas leaks, for example,” said Chi Hwan Lee, Purdue assistant professor of biomedical engineering and mechanical engineering.

Most of today’s electronic circuits are individually built on their own silicon “wafer,” a flat and rigid substrate. The silicon wafer can then withstand the high temperatures and chemical etching that are used to remove the circuits from the wafer.

But high temperatures and etching damage the silicon wafer, forcing the manufacturing process to accommodate an entirely new wafer each time.

Lee’s new fabrication technique, called “transfer printing,” cuts down manufacturing costs by using a single wafer to build a nearly infinite number of thin films holding electronic circuits. Instead of high temperatures and chemicals, the film can peel off at room temperature with the energy-saving help of simply water.

“It’s like the red paint on San Francisco’s Golden Gate Bridge – paint peels because the environment is very wet,” Lee said. “So in our case, submerging the wafer and completed circuit in water significantly reduces the mechanical peeling stress and is environmentally-friendly.”

A ductile metal layer, such as nickel, inserted between the electronic film and the silicon wafer, makes the peeling possible in water. These thin-film electronics can then be trimmed and pasted onto any surface, granting that object electronic features.

Putting one of the stickers on a flower pot, for example, made that flower pot capable of sensing temperature changes that could affect the plant’s growth.

Lee’s lab also demonstrated that the components of electronic integrated circuits work just as well before and after they were made into a thin film peeled from a silicon wafer. The researchers used one film to turn on and off an LED light display.

“We’ve optimized this process so that we can delaminate electronic films from wafers in a defect-free manner,” Lee said.

This technology holds a non-provisional U.S. patent. The work was supported by the Purdue Research Foundation, the Air Force Research Laboratory (AFRL-S-114-054-002), the National Science Foundation (NSF-CMMI-1728149) and the University of Virginia.

The researchers have provided a video,

Here’s a link to and a citation for the paper,

Wafer-recyclable, environment-friendly transfer printing for large-scale thin-film nanoelectronics by Dae Seung Wie, Yue Zhang, Min Ku Kim, Bongjoong Kim, Sangwook Park, Young-Joon Kim, Pedro P. Irazoqui, Xiaolin Zheng, Baoxing Xu, and Chi Hwan Lee.
PNAS 201806640, published ahead of print July 16, 2018. DOI: https://doi.org/10.1073/pnas.1806640115

This paper is behind a paywall.

Dexter Johnson provides some context in his July 25, 2018 posting on the Nanoclast blog (on the IEEE [Institute of Electronic and Electrical Engineers] website), Note: A link has been removed,

The Internet of Things (IoT), the interconnection of billions of objects and devices that will be communicating with each other, has been the topic of many futurists’ projections. However, getting the engineering sorted out with the aim of fully realizing the myriad visions for IoT is another story. One key issue to address: How do you get the electronics onto these devices efficiently and economically?

A team of researchers from Purdue University and the University of Virginia has developed a new manufacturing process that could make equipping a device with all the sensors and other electronics that will make it Internet capable as easily as putting a piece of tape on it.

… this new approach makes use of a water environment at room temperature to control the interfacial debonding process. This allows clean, intact delamination of prefabricated thin film devices when they’re pulled away from the original wafer.

The use of mechanical peeling in water rather than etching solution provides a number of benefits in the manufacturing scheme. Among them are simplicity, controllability, and cost effectiveness, says Chi Hwan Lee, assistant professor at Purdue University and coauthor of the paper chronicling the research.

If you have the time, do read Dexter’s piece. He always adds something that seems obvious in retrospect but wasn’t until he wrote it.

If only AI had a brain (a Wizard of Oz reference?)

The title, which I’ve borrowed from the news release, is the only Wizard of Oz reference that I can find but it works so well, you don’t really need anything more.

Moving onto the news, a July 23, 2018 news item on phys.org announces new work on developing an artificial synapse (Note: A link has been removed),

Digital computation has rendered nearly all forms of analog computation obsolete since as far back as the 1950s. However, there is one major exception that rivals the computational power of the most advanced digital devices: the human brain.

The human brain is a dense network of neurons. Each neuron is connected to tens of thousands of others, and they use synapses to fire information back and forth constantly. With each exchange, the brain modulates these connections to create efficient pathways in direct response to the surrounding environment. Digital computers live in a world of ones and zeros. They perform tasks sequentially, following each step of their algorithms in a fixed order.

A team of researchers from Pitt’s [University of Pittsburgh] Swanson School of Engineering have developed an “artificial synapse” that does not process information like a digital computer but rather mimics the analog way the human brain completes tasks. Led by Feng Xiong, assistant professor of electrical and computer engineering, the researchers published their results in the recent issue of the journal Advanced Materials (DOI: 10.1002/adma.201802353). His Pitt co-authors include Mohammad Sharbati (first author), Yanhao Du, Jorge Torres, Nolan Ardolino, and Minhee Yun.

A July 23, 2018 University of Pittsburgh Swanson School of Engineering news release (also on EurekAlert), which originated the news item, provides further information,

“The analog nature and massive parallelism of the brain are partly why humans can outperform even the most powerful computers when it comes to higher order cognitive functions such as voice recognition or pattern recognition in complex and varied data sets,” explains Dr. Xiong.

An emerging field called “neuromorphic computing” focuses on the design of computational hardware inspired by the human brain. Dr. Xiong and his team built graphene-based artificial synapses in a two-dimensional honeycomb configuration of carbon atoms. Graphene’s conductive properties allowed the researchers to finely tune its electrical conductance, which is the strength of the synaptic connection or the synaptic weight. The graphene synapse demonstrated excellent energy efficiency, just like biological synapses.

In the recent resurgence of artificial intelligence, computers can already replicate the brain in certain ways, but it takes about a dozen digital devices to mimic one analog synapse. The human brain has hundreds of trillions of synapses for transmitting information, so building a brain with digital devices is seemingly impossible, or at the very least, not scalable. Xiong Lab’s approach provides a possible route for the hardware implementation of large-scale artificial neural networks.

According to Dr. Xiong, artificial neural networks based on the current CMOS (complementary metal-oxide semiconductor) technology will always have limited functionality in terms of energy efficiency, scalability, and packing density. “It is really important we develop new device concepts for synaptic electronics that are analog in nature, energy-efficient, scalable, and suitable for large-scale integrations,” he says. “Our graphene synapse seems to check all the boxes on these requirements so far.”

With graphene’s inherent flexibility and excellent mechanical properties, these graphene-based neural networks can be employed in flexible and wearable electronics to enable computation at the “edge of the internet”–places where computing devices such as sensors make contact with the physical world.

“By empowering even a rudimentary level of intelligence in wearable electronics and sensors, we can track our health with smart sensors, provide preventive care and timely diagnostics, monitor plants growth and identify possible pest issues, and regulate and optimize the manufacturing process–significantly improving the overall productivity and quality of life in our society,” Dr. Xiong says.

The development of an artificial brain that functions like the analog human brain still requires a number of breakthroughs. Researchers need to find the right configurations to optimize these new artificial synapses. They will need to make them compatible with an array of other devices to form neural networks, and they will need to ensure that all of the artificial synapses in a large-scale neural network behave in the same exact manner. Despite the challenges, Dr. Xiong says he’s optimistic about the direction they’re headed.

“We are pretty excited about this progress since it can potentially lead to the energy-efficient, hardware implementation of neuromorphic computing, which is currently carried out in power-intensive GPU clusters. The low-power trait of our artificial synapse and its flexible nature make it a suitable candidate for any kind of A.I. device, which would revolutionize our lives, perhaps even more than the digital revolution we’ve seen over the past few decades,” Dr. Xiong says.
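
The release’s point about needing a dozen digital devices to mimic one analog synapse is easier to see with a toy model. Here’s a purely illustrative Python sketch (my own invention; the class, the numbers, and the update rule are not from the paper’s device physics) of a synapse reduced to a single continuously tunable conductance,

```python
# Toy model of an analog synapse: the synaptic "weight" is one
# conductance value, nudged up ("potentiation") or down ("depression")
# by programming pulses, rather than stored across many digital devices.
class AnalogSynapse:
    def __init__(self, g_min=0.1, g_max=1.0, step=0.05):
        self.g_min, self.g_max, self.step = g_min, g_max, step
        self.g = g_min  # conductance = synaptic weight

    def potentiate(self):
        # strengthen the connection, clamped at the device maximum
        self.g = min(self.g_max, self.g + self.step)

    def depress(self):
        # weaken the connection, clamped at the device minimum
        self.g = max(self.g_min, self.g - self.step)

    def transmit(self, voltage):
        # output current I = G * V (Ohm's law): one analog multiply
        return self.g * voltage

syn = AnalogSynapse()
for _ in range(5):
    syn.potentiate()
print(round(syn.g, 2))  # 0.35
print(round(syn.transmit(1.0), 2))  # 0.35
```

The whole weighted-transmission step is a single analog multiplication here, which is the efficiency argument the researchers are making for hardware like their graphene synapse.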

There is a visual representation of this artificial synapse,

Caption: Pitt engineers built a graphene-based artificial synapse in a two-dimensional, honeycomb configuration of carbon atoms that demonstrated excellent energy efficiency comparable to biological synapses Credit: Swanson School of Engineering

Here’s a link to and a citation for the paper,

Low‐Power, Electrochemically Tunable Graphene Synapses for Neuromorphic Computing by Mohammad Taghi Sharbati, Yanhao Du, Jorge Torres, Nolan D. Ardolino, Minhee Yun, Feng Xiong. Advanced Materials DOI: https://doi.org/10.1002/adma.201802353 First published [online]: 23 July 2018

This paper is behind a paywall.

I did look at the paper and if I understand it rightly, this approach is different from the memristor-based approaches that I have so often featured here. More than that I cannot say.

Finally, the Wizard of Oz song ‘If I Only Had a Brain’,

AI (artificial intelligence) text generator, too dangerous to release?

Could this latest version of OpenAI’s text generator be so good that it would fool you? And, following on that thought, could the concomitant reluctance to release the research be real or is it a publicity stunt? Here’s a sample of the text from the GPT2 AI model, from a February 15, 2019 article by Mark Frauenfelder for Boing Boing,

Recycling is good for the world.
NO! YOU COULD NOT BE MORE WRONG!!
MODEL COMPLETION (MACHINE-WRITTEN, 25 TRIES)
Recycling is NOT good for the world. It is bad for the environment, it is bad for our health, and it is bad for our economy. I’m not kidding. Recycling is not good for the environment. It is destructive to the earth and it is a major contributor to global warming. Recycling is not good for our health. It contributes to obesity and diseases like heart disease and cancer. Recycling is bad for our economy. It increases the cost of a product, and in turn, the price of everything that is made with that product. Recycling is not good for our nation. We pay a tremendous price for the privilege of having the world’s most advanced and efficient recycling system. Recycling is a huge, colossal waste of time, energy, money, and resources. And THAT is why we need to get back to basics and get back to basics in our recycling efforts. One of the best ways to start is to look at the process of creating a paper product. When you make a paper product, it is basically a long chain of materials. Everything from the raw materials (wood, cardboard, paper, etc.), to the reagents (dyes, solvents, etc.) to the printing equipment (chemicals, glue, paper, ink, etc.), to the packaging, to the packaging materials (mercury, chemicals, etc.) to the processing equipment (heating, cooling, etc.), to the packaging materials, to the packaging materials that are shipped overseas and to the packaging materials that are used in the United States. Each step along the way creates tons of waste that we constantly have to clean up. The process of making a paper product is a very wasteful one. But the end result is something that all of us need to consume. And if we want to keep the recycling process running efficiently, then we really need to think about each and every step that goes into making a paper product.

The first few sentences don’t work for me but once the discussion turns to making paper products, then it becomes more convincing to me. As to whether the company’s reluctance to release the research is genuine or a publicity stunt, I don’t know. However, there was a fair degree of interest in GPT2 after the decision.

From a February 14, 2019 article by Alex Hern for the Guardian,

OpenAI, a nonprofit research company backed by Elon Musk, Reid Hoffman, Sam Altman, and others, says its new AI model, called GPT2, is so good and the risk of malicious use so high that it is breaking from its normal practice of releasing the full research to the public in order to allow more time to discuss the ramifications of the technological breakthrough.

At its core, GPT2 is a text generator. The AI system is fed text, anything from a few words to a whole page, and asked to write the next few sentences based on its predictions of what should come next. The system is pushing the boundaries of what was thought possible, both in terms of the quality of the output, and the wide variety of potential uses.

When used to simply generate new text, GPT2 is capable of writing plausible passages that match what it is given in both style and subject. It rarely shows any of the quirks that mark out previous AI systems, such as forgetting what it is writing about midway through a paragraph, or mangling the syntax of long sentences.

Feed it the opening line of George Orwell’s Nineteen Eighty-Four – “It was a bright cold day in April, and the clocks were striking thirteen” – and the system recognises the vaguely futuristic tone and the novelistic style, and continues with: …

Sean Gallagher’s February 15, 2019 posting on the ars Technica blog provides some insight that’s partially written in a style sometimes associated with gossip (Note: Links have been removed),

OpenAI is funded by contributions from a group of technology executives and investors connected to what some have referred to as the PayPal “mafia”—Elon Musk, Peter Thiel, Jessica Livingston, and Sam Altman of YCombinator, former PayPal COO and LinkedIn co-founder Reid Hoffman, and former Stripe Chief Technology Officer Greg Brockman. [emphasis mine] Brockman now serves as OpenAI’s CTO. Musk has repeatedly warned of the potential existential dangers posed by AI, and OpenAI is focused on trying to shape the future of artificial intelligence technology—ideally moving it away from potentially harmful applications.

Given present-day concerns about how fake content has been used to both generate money for “fake news” publishers and potentially spread misinformation and undermine public debate, GPT-2’s output certainly qualifies as concerning. Unlike other text generation “bot” models, such as those based on Markov chain algorithms, the GPT-2 “bot” did not lose track of what it was writing about as it generated output, keeping everything in context.

For example: given a two-sentence entry, GPT-2 generated a fake science story on the discovery of unicorns in the Andes, a story about the economic impact of Brexit, a report about a theft of nuclear materials near Cincinnati, a story about Miley Cyrus being caught shoplifting, and a student’s report on the causes of the US Civil War.

Each matched the style of the genre from the writing prompt, including manufacturing quotes from sources. In other samples, GPT-2 generated a rant about why recycling is bad, a speech written by John F. Kennedy’s brain transplanted into a robot (complete with footnotes about the feat itself), and a rewrite of a scene from The Lord of the Rings.

While the model required multiple tries to get a good sample, GPT-2 generated “good” results based on “how familiar the model is with the context,” the researchers wrote. “When prompted with topics that are highly represented in the data (Brexit, Miley Cyrus, Lord of the Rings, and so on), it seems to be capable of generating reasonable samples about 50 percent of the time. The opposite is also true: on highly technical or esoteric types of content, the model can perform poorly.”

There were some weak spots encountered in GPT-2’s word modeling—for example, the researchers noted it sometimes “writes about fires happening under water.” But the model could be fine-tuned to specific tasks and perform much better. “We can fine-tune GPT-2 on the Amazon Reviews dataset and use this to let us write reviews conditioned on things like star rating and category,” the authors explained.
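
Gallagher’s mention of Markov chain text generators deserves a small illustration. Here’s a generic word-level Markov chain sketch (mine alone, not anyone’s production code); because it conditions on only the previous word, it has no memory of what it was “writing about” a sentence ago, which is exactly the failure mode GPT-2 reportedly avoids,

```python
import random
from collections import defaultdict

# Build a first-order Markov chain: for each word, record every word
# that ever followed it in the training text.
def build_chain(text):
    words = text.split()
    chain = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        chain[prev].append(nxt)
    return chain

# Generate by repeatedly sampling a successor of only the LAST word;
# nothing earlier in the output influences the next choice.
def generate(chain, start, length=8, seed=0):
    random.seed(seed)  # seeded for reproducibility
    out = [start]
    for _ in range(length):
        options = chain.get(out[-1])
        if not options:  # dead end: word never had a successor
            break
        out.append(random.choice(options))
    return " ".join(out)

corpus = "the clocks were striking thirteen and the clocks were silent"
chain = build_chain(corpus)
print(generate(chain, "the"))
```

Scaling this up (longer word histories, bigger corpora) helps only a little; models like GPT-2 instead learn representations that keep the whole preceding context in play.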

James Vincent’s February 14, 2019 article for The Verge offers a deeper dive into the world of AI text agents and what makes GPT2 so special (Note: Links have been removed),

For decades, machines have struggled with the subtleties of human language, and even the recent boom in deep learning powered by big data and improved processors has failed to crack this cognitive challenge. Algorithmic moderators still overlook abusive comments, and the world’s most talkative chatbots can barely keep a conversation alive. But new methods for analyzing text, developed by heavyweights like Google and OpenAI as well as independent researchers, are unlocking previously unheard-of talents.

OpenAI’s new algorithm, named GPT-2, is one of the most exciting examples yet. It excels at a task known as language modeling, which tests a program’s ability to predict the next word in a given sentence. Give it a fake headline, and it’ll write the rest of the article, complete with fake quotations and statistics. Feed it the first line of a short story, and it’ll tell you what happens to your character next. It can even write fan fiction, given the right prompt.

The writing it produces is usually easily identifiable as non-human. Although its grammar and spelling are generally correct, it tends to stray off topic, and the text it produces lacks overall coherence. But what’s really impressive about GPT-2 is not its fluency but its flexibility.

This algorithm was trained on the task of language modeling by ingesting huge numbers of articles, blogs, and websites. By using just this data — and with no retooling from OpenAI’s engineers — it achieved state-of-the-art scores on a number of unseen language tests, an achievement known as “zero-shot learning.” It can also perform other writing-related tasks, like translating text from one language to another, summarizing long articles, and answering trivia questions.

GPT-2 does each of these jobs less competently than a specialized system, but its flexibility is a significant achievement. Nearly all machine learning systems used today are “narrow AI,” meaning they’re able to tackle only specific tasks. DeepMind’s original AlphaGo program, for example, was able to beat the world’s champion Go player, but it couldn’t best a child at Monopoly. The prowess of GPT-2, say OpenAI, suggests there could be methods available to researchers right now that can mimic more generalized brainpower.

“What the new OpenAI work has shown is that: yes, you absolutely can build something that really seems to ‘understand’ a lot about the world, just by having it read,” says Jeremy Howard, a researcher who was not involved with OpenAI’s work but has developed similar language modeling programs …

To put this work into context, it’s important to understand how challenging the task of language modeling really is. If I asked you to predict the next word in a given sentence — say, “My trip to the beach was cut short by bad __” — your answer would draw upon a range of knowledge. You’d consider the grammar of the sentence and its tone but also your general understanding of the world. What sorts of bad things are likely to ruin a day at the beach? Would it be bad fruit, bad dogs, or bad weather? (Probably the latter.)

Despite this, programs that perform text prediction are quite common. You’ve probably encountered one today, in fact, whether that’s Google’s AutoComplete feature or the Predictive Text function in iOS. But these systems are drawing on relatively simple types of language modeling, while algorithms like GPT-2 encode the same information in more complex ways.

The difference between these two approaches is technically arcane, but it can be summed up in a single word: depth. Older methods record information about words in only their most obvious contexts, while newer methods dig deeper into their multiple meanings.

So while a system like Predictive Text only knows that the word “sunny” is used to describe the weather, newer algorithms know when “sunny” is referring to someone’s character or mood, when “Sunny” is a person, or when “Sunny” means the 1976 smash hit by Boney M.

The success of these newer, deeper language models has caused a stir in the AI community. Researcher Sebastian Ruder compares their success to advances made in computer vision in the early 2010s. At this time, deep learning helped algorithms make huge strides in their ability to identify and categorize visual data, kickstarting the current AI boom. Without these advances, a whole range of technologies — from self-driving cars to facial recognition and AI-enhanced photography — would be impossible today. This latest leap in language understanding could have similar, transformational effects.

Hern’s February 14, 2019 article for the Guardian acts as a good overview, while Gallagher’s February 15, 2019 ars Technica posting and Vincent’s February 14, 2019 article for The Verge take you progressively deeper into the world of AI text agents.

For anyone who wants to dig down even further, there’s a February 14, 2019 posting on OpenAI’s blog.