
Charles Lieber, nanoscientist, and the US Dept. of Justice

Charles Lieber, professor at Harvard University and one of the world’s leading researchers in nanotechnology, went on trial on Tuesday, December 14, 2021.

Accused of hiding his ties to a People’s Republic of China (PRC)-run recruitment programme, Lieber is probably the highest-profile academic, and one of the few with neither birth nor family origins in China, to be charged under the auspices of the US Department of Justice’s ‘China Initiative’.

This December 14, 2021 US National Public Radio (NPR) audio excerpt by Ryan Lucas provides a brief summary of the situation,

A December 14, 2021 article by Jess Aloe, Eileen Guo, and Antonio Regalado for the Massachusetts Institute of Technology (MIT) Technology Review lays out the situation in more detail (Note: A link has been removed),

In January of 2020, agents arrived at Harvard University looking for Charles Lieber, a renowned nanotechnology researcher who chaired the school’s department of chemistry and chemical biology. They were there to arrest him on charges of hiding his financial ties with a university in China. By arresting Lieber steps from Harvard Yard, authorities were sending a loud message to the academic community: failing to disclose such links is a serious crime.

Now Lieber is set to go on trial beginning December 14 [2021] in federal court in Boston. He has pleaded not guilty, and hundreds of academics have signed letters of support. In fact, some critics say it’s the Justice Department’s China Initiative—a far-reaching effort started in 2018 to combat Chinese economic espionage and trade-secret theft—that should be on trial, not Lieber. They are calling the prosecutions fundamentally flawed, a witch hunt that misunderstands the open-book nature of basic science and that is selectively destroying scientific careers over financial misdeeds and paperwork errors without proof of actual espionage or stolen technology.

For their part, prosecutors believe they have a tight case. They allege that Lieber was recruited into China’s Thousand Talents Plan—a program aimed at attracting top scientists—and paid handsomely to establish a research laboratory at the Wuhan University of Technology, but hid the affiliation from US grant agencies when asked about it (read a copy of the indictment here). Lieber is facing six felony charges: two counts of making false statements to investigators, two counts of filing a false tax return, and two counts of failing to report a foreign bank account. [emphases mine; Note: None of these charges have been proved in court]

The case against Lieber could be a bellwether for the government, which has several similar cases pending against US professors alleging that they didn’t disclose their China affiliations to granting agencies.

As for the China Initiative (from the MIT Technology Review December 14, 2021 article),

The China Initiative was announced in 2018 by Jeff Sessions, then the Trump administration’s attorney general, as a central component of the administration’s tough stance toward China.

An MIT Technology Review investigation published earlier this month [December 2021] found that the China Initiative is an umbrella for various types of prosecutions somehow connected to China, with targets ranging from a Chinese national who ran a turtle-smuggling ring to state-sponsored hackers believed to be behind some of the biggest data breaches in history. In total, MIT Technology Review identified 77 cases brought under the initiative; of those, a quarter have led to guilty pleas or convictions, but nearly two-thirds remain pending.

The government’s prosecution of researchers like Lieber for allegedly hiding ties to Chinese institutions has been the most controversial, and fastest-growing, aspect of the government’s efforts. In 2020, half of the 31 new cases brought under the China Initiative were cases against scientists or researchers. These cases largely did not accuse the defendants of violating the Economic Espionage Act.

… hundreds of academics across the country, from institutions including Stanford University and Princeton University, signed a letter calling on Attorney General Merrick Garland to end the China Initiative. The initiative, they wrote, has drifted from its original mission of combating Chinese intellectual-property theft and is instead harming American research competitiveness by discouraging scholars from coming to or staying in the US.

Lieber’s case is the second [emphasis mine] China Initiative prosecution of an academic to end up in the courtroom. The only previous person to face trial [emphasis mine] on research integrity charges, University of Tennessee–Knoxville professor Anming Hu, was acquitted of all charges [emphasis mine] by a judge in June [2021] after a deadlocked jury led to a mistrial.

Ken Dilanian wrote an October 19, 2021 article for the (US) National Broadcasting Company’s (NBC) news site about Hu’s eventual acquittal and about the China Initiative (Note: Dilanian’s timeline for the acquittal differs from the timeline in the MIT Technology Review),

The federal government brought the full measure of its legal might against Anming Hu, a nanotechnology expert at the University of Tennessee.

But the Justice Department’s efforts to convict Hu as part of its program to crack down on illicit technology transfer to China failed — spectacularly. A judge acquitted him last month [September 2021] after a lengthy trial offered little evidence of anything other than a paperwork misunderstanding, according to local newspaper coverage. It was the second trial, after the first ended in a hung jury.

“The China Initiative has turned up very little by way of clear espionage and the transfer of genuinely strategic information to the PRC,” said Robert Daly, a China expert at the Wilson Center, referring to the country by its formal name, the People’s Republic of China. “They are mostly process crimes, disclosure issues. A growing number of voices are calling for an end to the China initiative because it’s seen as discriminatory.”

The China Initiative began under President Donald Trump’s attorney general, Jeff Sessions, in 2018. But concerns about Chinese espionage in the United States — and the transfer of technology to China through business and academic relationships — are bipartisan.

John Demers, who departed in June [2021] as head of the Justice Department’s National Security Division, said in an interview that the problem of technology transfer at universities is real. But he said he also believes conflict of interest and disclosure rules were not rigorously enforced for many years. For that reason, he recommended an amnesty program offering academics with undisclosed foreign ties a chance to come clean and avoid penalties. So far, the Biden administration has not implemented such a program.

When I first featured the Lieber case in a January 28, 2020 posting, I was more focused on the financial elements,

ETA January 28, 2020 at 1645 hours: I found a January 28, 2020 article by Antonio Regalado for the MIT Technology Review which provides a few more details about Lieber’s situation,

“…

Big money: According to the charging document, Lieber, starting in 2011,  agreed to help set up a research lab at the Wuhan University of Technology and “make strategic visionary and creative research proposals” so that China could do cutting-edge science.

He was well paid for it. Lieber earned a salary when he visited China worth up to $50,000 per month, as well as $150,000 a year in expenses in addition to research funds. According to the complaint, he got paid by way of a Chinese bank account but also was known to send emails asking for cash instead.

Harvard eventually wised up to the existence of a Wuhan lab using its name and logo, but when administrators confronted Lieber, he lied and said he didn’t know about a formal joint program, according to the government complaint.

This is messy, not least because Lieber and the members of his Harvard lab have done some extraordinary work, as per my November 15, 2019 posting (Human-machine interfaces and ultra-small nanoprobes) about injectable electronics.

Ai-Da (robot artist) writes and performs poem honouring Dante’s 700th anniversary

Remarkable, eh?

Who is Ai-Da?

Thank you to the contributor(s) of the Ai-Da (robot) Wikipedia entry (Note: Links have been removed),

Ai-Da was invented by gallerist Aidan Meller,[3] in collaboration with Engineered Arts, a Cornish robotics company.[4] Her drawing intelligence was developed by computer AI researchers at the University of Oxford,[5] and her drawing arm is the work of engineers based in Leeds.[4]

Ai-Da has her own website here (from the homepage),

Ai-Da is the world’s first ultra-realistic artist robot. She draws using cameras in her eyes, her AI algorithms, and her robotic arm. Created in February 2019, she had her first solo show at the University of Oxford, ‘Unsecured Futures’, where her [visual] art encouraged viewers to think about our rapidly changing world. She has since travelled and exhibited work internationally, and had her first show in a major museum, the Design Museum, in 2021. She continues to create art that challenges our notions of creativity in a post-humanist era.

Ai-Da – is it art?

The role and definition of art changes over time. Ai-Da’s work is art, because it reflects the enormous integration of technology in today’s society. We recognise ‘art’ means different things to different people.

Today, a dominant opinion is that art is created by the human, for other humans. This has not always been the case. The ancient Greeks felt art and creativity came from the Gods. Inspiration was divine inspiration. Today, a dominant mind-set is that of humanism, where art is an entirely human affair, stemming from human agency. However, current thinking suggests we are edging away from humanism, into a time where machines and algorithms influence our behaviour to a point where our ‘agency’ isn’t just our own. It is starting to get outsourced to the decisions and suggestions of algorithms, and complete human autonomy starts to look less robust. Ai-Da creates art, because art no longer has to be restrained by the requirement of human agency alone.  

It seems that Ai-Da has branched out from visual art into poetry. (I wonder how many of the arts Ai-Da can produce and/or perform?)

A divine comedy? Dante and Ai-Da

The 700th anniversary of poet Dante Alighieri’s death has occasioned an exhibition, DANTE: THE INVENTION OF CELEBRITY, 17 September 2021–9 January 2022, at Oxford’s Ashmolean Museum.

Professor Gervase Rosser (University of Oxford), exhibition curator, wrote this in his September 21, 2021 exhibition essay “Dante and the Robot: An encounter at the Ashmolean”,

Ai-Da, the world’s most modern humanoid artist, is involved in an exhibition about the poet and philosopher, Dante Alighieri, writer of the Divine Comedy, whose 700th anniversary is this year. A major exhibition, ‘Dante and the Invention of Celebrity’, opens at Oxford’s Ashmolean Museum this month, and includes an intervention by this most up-to-date robot artist.

…

Honours are being paid around the world to the author of what he called a Comedy because, unlike a tragedy, it began badly but ended well. From the darkness of hell, the work sees Dante journey through purgatory, before eventually arriving at the eternal light of paradise. What hold does a poem about the spiritual redemption of humanity, written so long ago, have on us today?

One challenge to both spirit and humanity in the 21st century is the power of artificial intelligence, created and unleashed by human ingenuity.  The scientists who introduced this term, AI, in the 1950s announced that ‘every aspect of learning or any other feature of intelligence can, in principle, be so precisely described that a machine can be made to simulate it’.

Over the course of a human lifetime, that prophecy has almost been realised. Artificial intelligence has already taken the place of human thought, often in ways which are not apparent. In medicine, AI promises to become both irreplaceable and inestimable.

But to an extent which we are, perhaps, frightened to acknowledge, AI monitors our consumption patterns, our taste in everything from food to culture, our perception of ourselves, even our political views. If we want to re-orientate ourselves and take a critical view of this, before it is too late to regain control, how can we do so?

Creative fiction offers a field in which our values and aspirations can be questioned. This year has seen the publication of Klara and the Sun, by Kazuo Ishiguro, which evokes a world, not many years into the future, in which humanoid AI robots have become the domestic servants and companions of all prosperous families.

One of the book’s characters asks a fundamental question about the human heart, ‘Do you think there is such a thing? Something that makes each of us special and individual?’

Art can make two things possible: through it, artificial intelligence, which remains largely unseen, can be made visible and tangible and it can be given a prophetic voice, which we can choose to heed or ignore.

These aims have motivated the creators of Ai-Da, the artist robot which, through a series of exhibitions, is currently provoking questions around the globe (from the United Nations headquarters in Geneva to Cairo, and from the Design Museum in London [UK] to Abu Dhabi) about the nature of human creativity, originality, and authenticity.

In the Ashmolean Museum’s Gallery 8, Dante  meets artificial intelligence, in a staged encounter deliberately designed to invite reflection on what it means to see the world; on the nature of creativity; and on the value of human relationships.

The juxtaposition of AI with the Divine Comedy, in a year in which the poem is being celebrated as a supreme achievement of the human spirit, is timely. The encounter, however, is not presented as a clash of incompatible opposites, but as a conversation.

This is the spirit in which Ai-Da has been developed by her inventors, Aidan Meller and Lucy Seal, in collaboration with technical teams in Oxford University and elsewhere. Significantly, she takes her name from Ada Lovelace [emphasis mine], a mathematician and writer who was belatedly recognised as the first programmer. At the time of her early death in 1852, at the age of 36, she was considering writing a visionary kind of mathematical poetry, and wrote about her idea of ‘poetical philosophy, poetical science’.

For the Ashmolean exhibition, Ai-Da has made works in response to the Comedy. The first focuses on one of the circles of Dante’s Purgatory. Here, the souls of the envious compensate for their lives on earth, which were partially, but not irredeemably, marred by their frustrated desire for the possessions of others.

My first thought on seeing the inventor’s name, Aidan Meller, was that he named the robot after himself; I did not pick up on the Ada Lovelace connection. I appreciate how smart this is, especially as the name also references AI.

Finally, the excerpts don’t do justice to Rosser’s essay; I recommend reading it if you have the time.

Creating time crystals with a quantum computer

This November 30, 2021 news item on phys.org about time crystals caught my attention,

There is a huge global effort to engineer a computer capable of harnessing the power of quantum physics to carry out computations of unprecedented complexity. While formidable technological obstacles still stand in the way of creating such a quantum computer, today’s early prototypes are still capable of remarkable feats.

For example, the creation of a new phase of matter called a “time crystal.” Just as a crystal’s structure repeats in space, a time crystal repeats in time and, importantly, does so infinitely and without any further input of energy—like a clock that runs forever without any batteries. The quest to realize this phase of matter has been a longstanding challenge in theory and experiment—one that has now finally come to fruition.

In research published Nov. 30 [2021] in Nature, a team of scientists from Stanford University, Google Quantum AI, the Max Planck Institute for Physics of Complex Systems and Oxford University detail their creation of a time crystal using Google’s Sycamore quantum computing hardware.

The Google Sycamore chip used in the creation of a time crystal. Credit: Google Quantum AI [downloaded from https://phys.org/news/2021-11-physicists-crystals-quantum.html]

A November 30, 2021 Stanford University news release (also on EurekAlert) by Taylor Kubota, which originated the news item, delves further into the work and into the nature of time crystals,

“The big picture is that we are taking the devices that are meant to be the quantum computers of the future and thinking of them as complex quantum systems in their own right,” said Matteo Ippoliti, a postdoctoral scholar at Stanford and co-lead author of the work. “Instead of computation, we’re putting the computer to work as a new experimental platform to realize and detect new phases of matter.”

For the team, the excitement of their achievement lies not only in creating a new phase of matter but in opening up opportunities to explore new regimes in their field of condensed matter physics, which studies the novel phenomena and properties brought about by the collective interactions of many objects in a system. (Such interactions can be far richer than the properties of the individual objects.)

“Time-crystals are a striking example of a new type of non-equilibrium quantum phase of matter,” said Vedika Khemani, assistant professor of physics at Stanford and a senior author of the paper. “While much of our understanding of condensed matter physics is based on equilibrium systems, these new quantum devices are providing us a fascinating window into new non-equilibrium regimes in many-body physics.”

What a time crystal is and isn’t

The basic ingredients to make this time crystal are as follows: The physics equivalent of a fruit fly and something to give it a kick. The fruit fly of physics is the Ising model, a longstanding tool for understanding various physical phenomena – including phase transitions and magnetism – which consists of a lattice where each site is occupied by a particle that can be in two states, represented as a spin up or down.

During her graduate school years, Khemani, her doctoral advisor Shivaji Sondhi, then at Princeton University, and Achilleas Lazarides and Roderich Moessner at the Max Planck Institute for Physics of Complex Systems stumbled upon this recipe for making time crystals unintentionally. They were studying non-equilibrium many-body localized systems – systems where the particles get “stuck” in the state in which they started and can never relax to an equilibrium state. They were interested in exploring phases that might develop in such systems when they are periodically “kicked” by a laser. Not only did they manage to find stable non-equilibrium phases, they found one where the spins of the particles flipped between patterns that repeat in time forever, at a period twice that of the driving period of the laser, thus making a time crystal.

The periodic kick of the laser establishes a specific rhythm to the dynamics. Normally the “dance” of the spins should sync up with this rhythm, but in a time crystal it doesn’t. Instead, the spins flip between two states, completing a cycle only after being kicked by the laser twice. This means that the system’s “time translation symmetry” is broken. Symmetries play a fundamental role in physics, and they are often broken – explaining the origins of regular crystals, magnets and many other phenomena; however, time translation symmetry stands out because unlike other symmetries, it can’t be broken in equilibrium. The periodic kick is a loophole that makes time crystals possible.
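
The recipe in the preceding paragraphs (a chain of Ising spins, strong disorder, and a periodic, slightly imperfect flip or “kick”) is concrete enough to sketch in code. What follows is my own toy simulation in Python of that general recipe, not the team’s Sycamore experiment or code; the system size, couplings, and fields are made-up values chosen only to show the period-doubled response.

```python
# Toy "kicked Ising chain" sketch (not the Google/Stanford code; illustrative values only).
import numpy as np
from scipy.linalg import expm

N = 8                                   # number of spins; the Sycamore experiment used 20
rng = np.random.default_rng(0)

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def op_on(site_op, site, n):
    """Embed a single-spin operator at position `site` in an n-spin chain."""
    ops = [I2] * n
    ops[site] = site_op
    out = ops[0]
    for o in ops[1:]:
        out = np.kron(out, o)
    return out

Xs = [op_on(X, i, N) for i in range(N)]
Zs = [op_on(Z, i, N) for i in range(N)]

# One driving period: (1) an imperfect "pi pulse" that nearly flips every spin,
# (2) Ising interactions with random couplings plus random on-site fields (the disorder).
g = 0.97                                              # flip angle is g*pi, deliberately not exact
couplings = rng.uniform(np.pi / 8, 3 * np.pi / 8, N - 1)
fields = rng.uniform(-np.pi, np.pi, N)

U_kick = expm(-1j * (g * np.pi / 2) * sum(Xs))
H_ising = sum(couplings[i] * Zs[i] @ Zs[i + 1] for i in range(N - 1)) \
        + sum(fields[i] * Zs[i] for i in range(N))
U_floquet = expm(-1j * H_ising) @ U_kick              # one full "kick + interact" cycle

# Start from a random bit string (product state) and watch one spin over many kicks.
bits = rng.integers(0, 2, N)
psi = np.zeros(2 ** N, dtype=complex)
psi[int("".join(map(str, bits)), 2)] = 1.0

for cycle in range(20):
    m = np.real(psi.conj() @ (Zs[0] @ psi))
    print(f"cycle {cycle:2d}  <Z_0> = {m:+.3f}")
    psi = U_floquet @ psi
# <Z_0> changes sign every cycle, i.e. the spin pattern repeats only every TWO kicks:
# the period-doubled, "time-crystalline" response. Without the disordered interactions,
# the imperfect flip (g != 1) would let this oscillation drift and decay.
```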

The doubling of the oscillation period is unusual, but not unprecedented. And long-lived oscillations are also very common in the quantum dynamics of few-particle systems. What makes a time crystal unique is that it’s a system of millions of things that are showing this kind of concerted behavior without any energy coming in or leaking out.

“It’s a completely robust phase of matter, where you’re not fine-tuning parameters or states but your system is still quantum,” said Sondhi, professor of physics at Oxford and co-author of the paper. “There’s no feed of energy, there’s no drain of energy, and it keeps going forever and it involves many strongly interacting particles.”

While this may sound suspiciously close to a “perpetual motion machine,” a closer look reveals that time crystals don’t break any laws of physics. Entropy – a measure of disorder in the system – remains stationary over time, marginally satisfying the second law of thermodynamics by not decreasing.

Between the development of this plan for a time crystal and the quantum computer experiment that brought it to reality, many experiments by many different teams of researchers achieved various almost-time-crystal milestones. However, providing all the ingredients in the recipe for “many-body localization” (the phenomenon that enables an infinitely stable time crystal) had remained an outstanding challenge.

For Khemani and her collaborators, the final step to time crystal success was working with a team at Google Quantum AI. Together, this group used Google’s Sycamore quantum computing hardware to program 20 “spins” using the quantum version of a classical computer’s bits of information, known as qubits.

Revealing just how intense the interest in time crystals currently is, another time crystal was published in Science this month [November 2021]. That crystal was created using qubits within a diamond by researchers at Delft University of Technology in the Netherlands.

Quantum opportunities

The researchers were able to confirm their claim of a true time crystal thanks to special capabilities of the quantum computer. Although the finite size and coherence time of the (imperfect) quantum device meant that their experiment was limited in size and duration – so that the time crystal oscillations could only be observed for a few hundred cycles rather than indefinitely – the researchers devised various protocols for assessing the stability of their creation. These included running the simulation forward and backward in time and scaling its size.
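
Continuing the toy sketch from earlier in this post, one of those stability checks (running the dynamics forward and then backward in time) can be illustrated in a few lines. In a noiseless simulation the “echo” returns the state exactly; on real hardware it decays, and, roughly speaking, comparing the echo against the forward-only signal helps separate hardware error from genuine loss of time-crystal order.

```python
# Echo check, reusing U_floquet and psi from the toy sketch above (illustrative only).
t = 10
U_fwd = np.linalg.matrix_power(U_floquet, t)
psi_echo = U_fwd.conj().T @ (U_fwd @ psi)     # drive forward t cycles, then exactly backward
fidelity = abs(psi.conj() @ psi_echo) ** 2
print(f"echo fidelity after {t} forward + {t} backward cycles: {fidelity:.6f}")  # 1.0 (noiseless)
```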

“We managed to use the versatility of the quantum computer to help us analyze its own limitations,” said Moessner, co-author of the paper and director at the Max Planck Institute for Physics of Complex Systems. “It essentially told us how to correct for its own errors, so that the fingerprint of ideal time-crystalline behavior could be ascertained from finite time observations.”

A key signature of an ideal time crystal is that it shows indefinite oscillations from all states. Verifying this robustness to choice of states was a key experimental challenge, and the researchers devised a protocol to probe over a million states of their time crystal in just a single run of the machine, requiring mere milliseconds of runtime. This is like viewing a physical crystal from many angles to verify its repetitive structure.

“A unique feature of our quantum processor is its ability to create highly complex quantum states,” said Xiao Mi, a researcher at Google and co-lead author of the paper. “These states allow the phase structures of matter to be effectively verified without needing to investigate the entire computational space – an otherwise intractable task.”

Creating a new phase of matter is unquestionably exciting on a fundamental level. In addition, the fact that these researchers were able to do so points to the increasing usefulness of quantum computers for applications other than computing. “I am optimistic that with more and better qubits, our approach can become a main method in studying non-equilibrium dynamics,” said Pedram Roushan, researcher at Google and senior author of the paper.

“We think that the most exciting use for quantum computers right now is as platforms for fundamental quantum physics,” said Ippoliti. “With the unique capabilities of these systems, there’s hope that you might discover some new phenomenon that you hadn’t predicted.”

A view of the Google dilution refrigerator, which houses the Sycamore chip. Credit: Google Quantum AI [downloaded from https://scitechdaily.com/stanford-and-google-team-up-to-create-time-crystals-with-quantum-computers/]

Here’s a link to and a citation for the paper,

Time-Crystalline Eigenstate Order on a Quantum Processor by Xiao Mi, Matteo Ippoliti, Chris Quintana, Ami Greene, Zijun Chen, Jonathan Gross, Frank Arute, Kunal Arya, Juan Atalaya, Ryan Babbush, Joseph C. Bardin, Joao Basso, Andreas Bengtsson, Alexander Bilmes, Alexandre Bourassa, Leon Brill, Michael Broughton, Bob B. Buckley, David A. Buell, Brian Burkett, Nicholas Bushnell, Benjamin Chiaro, Roberto Collins, William Courtney, Dripto Debroy, Sean Demura, Alan R. Derk, Andrew Dunsworth, Daniel Eppens, Catherine Erickson, Edward Farhi, Austin G. Fowler, Brooks Foxen, Craig Gidney, Marissa Giustina, Matthew P. Harrigan, Sean D. Harrington, Jeremy Hilton, Alan Ho, Sabrina Hong, Trent Huang, Ashley Huff, William J. Huggins, L. B. Ioffe, Sergei V. Isakov, Justin Iveland, Evan Jeffrey, Zhang Jiang, Cody Jones, Dvir Kafri, Tanuj Khattar, Seon Kim, Alexei Kitaev, Paul V. Klimov, Alexander N. Korotkov, Fedor Kostritsa, David Landhuis, Pavel Laptev, Joonho Lee, Kenny Lee, Aditya Locharla, Erik Lucero, Orion Martin, Jarrod R. McClean, Trevor McCourt, Matt McEwen, Kevin C. Miao, Masoud Mohseni, Shirin Montazeri, Wojciech Mruczkiewicz, Ofer Naaman, Matthew Neeley, Charles Neill, Michael Newman, Murphy Yuezhen Niu, Thomas E. O’Brien, Alex Opremcak, Eric Ostby, Balint Pato, Andre Petukhov, Nicholas C. Rubin, Daniel Sank, Kevin J. Satzinger, Vladimir Shvarts, Yuan Su, Doug Strain, Marco Szalay, Matthew D. Trevithick, Benjamin Villalonga, Theodore White, Z. Jamie Yao, Ping Yeh, Juhwan Yoo, Adam Zalcman, Hartmut Neven, Sergio Boixo, Vadim Smelyanskiy, Anthony Megrant, Julian Kelly, Yu Chen, S. L. Sondhi, Roderich Moessner, Kostyantyn Kechedzhi, Vedika Khemani & Pedram Roushan. Nature (2021) DOI: https://doi.org/10.1038/s41586-021-04257-w Published 30 November 2021

This is a preview of the unedited paper being provided by Nature. Click on the Download PDF button (to the right of the title) to get access.

Brilliant colours in electronic paper displays

Researchers at Chalmers University of Technology (Sweden) have taken a step towards realizing science fiction writers’ fantasies of reading paper-like electronic displays outdoors under the sun, with a new technique that results in more brilliant colour displays.

Caption: A new design from Chalmers University of Technology could help produce e-readers, advertising signs and other digital screens with optimal colour display and minimal energy consumption. Credit: Marika Gugole/Chalmers University of Technology

From a July 12, 2021 Chalmers University of Technology press release (also on EurekAlert and received via email),

Imagine sitting out in the sun, reading a digital screen as thin as paper, but seeing the same image quality as if you were indoors. Thanks to research from Chalmers University of Technology, Sweden, it could soon be a reality. A new type of reflective screen – sometimes described as ‘electronic paper’ – offers optimal colour display, while using ambient light to keep energy consumption to a minimum.

Traditional digital screens use a backlight to illuminate the text or images displayed upon them. This is fine indoors, but we’ve all experienced the difficulties of viewing such screens in bright sunshine. Reflective screens, however, attempt to use the ambient light, mimicking the way our eyes respond to natural paper.

“For reflective screens to compete with the energy-intensive digital screens that we use today, images and colours must be reproduced with the same high quality. That will be the real breakthrough. Our research now shows how the technology can be optimised, making it attractive for commercial use,” says Marika Gugole, Doctoral Student at the Department of Chemistry and Chemical Engineering at Chalmers University of Technology.

The researchers had previously succeeded in developing an ultra-thin, flexible material that reproduces all the colours an LED screen can display, while requiring only a tenth of the energy that a standard tablet consumes. But in the earlier design the colours on the reflective screen did not display with optimal quality. Now the new study, published in the journal Nano Letters, takes the material one step further. Using a previously researched, porous and nanostructured material, containing tungsten trioxide, gold and platinum, they tried a new tactic – inverting the design in such a way as to allow the colours to appear much more accurately on the screen.

Inverting the design for top quality colour
The inversion of the design represents a great step forward. They placed the component which makes the material electrically conductive underneath the pixelated nanostructure that reproduces the colours – instead of above it, as was previously the case. This new design means you look directly at the pixelated surface, therefore seeing the colours much more clearly.

In addition to the minimal energy consumption, reflective screens have other advantages. For example, they are much less tiring for the eyes compared to looking at a regular screen.

To make these reflective screens, certain rare metals are required – such as the gold and platinum – but because the final product is so thin, the amounts needed are very small. The researchers have high hopes that eventually, it will be possible to significantly reduce the quantities needed for production.

“Our main goal when developing these reflective screens, or ‘electronic paper’ as it is sometimes termed, is to find sustainable, energy-saving solutions. And in this case, energy consumption is almost zero because we simply use the ambient light of the surroundings,” explains research leader Andreas Dahlin, Professor at the Department of Chemistry and Chemical Engineering at Chalmers.

Flexible with a wide range of uses
Reflective screens are already available in some tablets today, but they only display the colours black and white well, which limits their use.

“A large industrial player with the right technical competence could, in principle, start developing a product with the new technology within a couple of months,” says Andreas Dahlin, who envisions a number of further applications. In addition to smart phones and tablets, it could also be useful for outdoor advertising, offering energy and resource savings compared with both printed posters or moving digital screens.

More about the research

• Research on the nano-thin electronic paper has been ongoing for several years at Chalmers, and the work has been rewarded with both international attention and major strategic research grants. 

• The technology in Chalmers researchers’ reflective screens is based on the material’s ability to regulate how light is absorbed and reflected. In the current study, tungsten trioxide is the core material, but in previous studies, researchers also used polymers. The material that covers the surface conducts electronic signals throughout the screen and can be patterned to create high-resolution images.

• The scientific article Electrochromic Inorganic Nanostructures with High Chromaticity and Superior Brightness has been published in Nano Letters and is written by Marika Gugole, Oliver Olsson, Stefano Rossi, Magnus P. Jonsson and Andreas Dahlin. The researchers are active at Chalmers University of Technology and Linköping University, Sweden.

Since the title and list of authors are included just above in a format almost identical to my usual ‘citation’, I’ll add only some publication details,

Nano Lett. 2021, 21, 10, 4343–4350. Publication Date: May 10, 2021. DOI: https://doi.org/10.1021/acs.nanolett.1c00904 Copyright © 2021 The Authors. Published by American Chemical Society

This paper appears to be open access.

‘Playing telephone’ with multivalent gold nanoparticles

A July 7, 2021 news item on phys.org describes what ‘playing telephone’ has to do with gold nanoparticles,

Cells play a precise game of telephone, sending messages to each other that trigger actions further on. With clear signaling, the cells achieve their goals. In disease, however, the signals break up and result in confused messaging and unintended consequences. To help parse out these signals and how they function in health—and go awry in disease—scientists tag proteins with labels they can follow as the proteins interact with the molecular world around them.

The challenge is figuring out which proteins to label in the first place. Now, a team led by researchers from Tokyo University of Agriculture and Technology (TUAT) has developed a new approach to identifying and tagging the specific proteins. They published their results on June 1 [2021] in Angewandte Chemie.

A July 8, 2021 TUAT press release on EurekAlert, which originated the news item, delves further into the research (I appreciate how clearly the work is explained),

“We are interested in exploring protein receptors of certain carbohydrate molecules that are involved in mediating cell signaling, particularly in cancer cells,” said paper author Kaori Sakurai, associate professor in the Department of Biotechnology and Life Science at TUAT.

The carbohydrate molecules, called ligands, are typically expressed on the surface of cells and are known to dynamically form complexes with protein receptors to coordinate complicated cellular functions. However, Sakurai said, the proteins responsible for binding the carbohydrates have been difficult to identify because they bond so weakly with the molecules.

The researchers designed a new type of carbohydrate probe that would not only link to the molecules, but tightly bind to them.

“We used gold nanoparticles as a scaffold to attach both carbohydrate ligands and electrophiles — a chemical that loves to react with other molecules — in a multivalent fashion,” Sakurai said. “This way, we were able to greatly increase binding affinity and reaction efficiency toward carbohydrate-binding proteins.”

The researchers applied the designed probes to cell lysate, a fluid containing the innards of broken-apart cells.

“The probes quickly found the target carbohydrate-binding proteins, triggering the electrophilic groups to react with electron-donating amino acid residues on nearby proteins,” Sakurai said. “This resulted in proteins firmly cross-linked to the gold nanoparticles’ surface, making it easy to subsequently analyze their identities.”

The team evaluated several electrophilic groups to identify the most efficient type for labeling their target proteins.

“We found that a particular electrophilic group called aryl sulfonyl fluoride is best suited for affinity labeling of carbohydrate-binding proteins,” said co-author Nanako Suto, a graduate student in the Department of Biotechnology and Life Science of TUAT. “However, they have rarely been used to identify target proteins, presumably because they would non-selectively react with various other, undesired proteins.”

However, the scale of aryl sulfonyl fluoride use appears to mitigate the issue.

“The non-selectivity isn’t a problem if aryl sulfonyl fluoride is used at very low concentrations, at the range of the nanoscale,” said co-author Shione Kamoshita, also a graduate student in the Department of Biotechnology and Life Science, TUAT.

The gold nanoparticle scaffolding displays many copies of the electrophilic group, which keeps aryl sulfonyl fluoride’s local concentration high on the nanoparticle surface while keeping it away from the rest of the cell system, so it does not react with undesired proteins. With this high local concentration, some copies of the electrophilic groups can react efficiently with the target proteins.

“Through this process, we were able to achieve highly efficient and selective affinity labeling of carbohydrate-binding proteins in cell lysate,” Sakurai said. “We will apply the new method in target identification of several cancer-related carbohydrate ligands and investigate their function in cancer development. In parallel, we aim to explore the general utility of this new probe design for various other bioactive small molecules, so that we can accelerate the elucidation of their mechanisms.”

Here’s a link to and a citation for the paper,

Exploration of the Reactivity of Multivalent Electrophiles for Affinity Labeling: Sulfonyl Fluoride as a Highly Efficient and Selective Label by Nanako Suto, Shione Kamoshita, Dr. Shoichi Hosoya, Prof. Kaori Sakurai. Angewandte Chemie Volume 60, Issue 31 July 26, 2021 Pages 17080-17087 DOI: https://doi.org/10.1002/anie.202104347 First published: 01 June 2021

This paper is behind a paywall.

True love with AI (artificial intelligence): The Nature of Things explores emotional and creative AI (long read)

The Canadian Broadcasting Corporation’s (CBC) science television series, The Nature of Things, which has been broadcast since November 1960, explored the world of emotional, empathic and creative artificial intelligence (AI) in a Friday, November 19, 2021 telecast titled, The Machine That Feels,

The Machine That Feels explores how artificial intelligence (AI) is catching up to us in ways once thought to be uniquely human: empathy, emotional intelligence and creativity.

As AI moves closer to replicating humans, it has the potential to reshape every aspect of our world – but most of us are unaware of what looms on the horizon.

Scientists see AI technology as an opportunity to address inequities and make a better, more connected world. But it also has the capacity to do the opposite: to stoke division and inequality and disconnect us from fellow humans. The Machine That Feels, from The Nature of Things, shows viewers what they need to know about a field that is advancing at a dizzying pace, often away from the public eye.

What does it mean when AI makes art? Can AI interpret and understand human emotions? How is it possible that AI creates sophisticated neural networks that mimic the human brain? The Machine That Feels investigates these questions, and more.

In Vienna, composer Walter Werzowa has — with the help of AI — completed Beethoven’s previously unfinished 10th symphony. By feeding data about Beethoven, his music, his style and the original scribbles on the 10th symphony into an algorithm, AI has created an entirely new piece of art.

In Atlanta, Dr. Ayanna Howard and her robotics lab at Georgia Tech are teaching robots how to interpret human emotions. Where others see problems, Howard sees opportunity: how AI can help fill gaps in education and health care systems. She believes we need a fundamental shift in how we perceive robots: let’s get humans and robots to work together to help others.

At Tufts University in Boston, a new type of biological robot has been created: the xenobot. The size of a grain of sand, xenobots are grown from frog heart and skin cells, and combined with the “mind” of a computer. Programmed with a specific task, they can move together to complete it. In the future, they could be used for environmental cleanup, digesting microplastics and targeted drug delivery (like releasing chemotherapy compounds directly into tumours).

The film includes interviews with global leaders, commentators and innovators from the AI field, including Geoff Hinton, Yoshua Bengio, Ray Kurzweil and Douglas Coupland, who highlight some of the innovative and cutting-edge AI technologies that are changing our world.

The Machine That Feels focuses on one central question: in the flourishing age of artificial intelligence, what does it mean to be human?

I’ll get back to that last bit, “… what does it mean to be human?” later.

There’s a lot to appreciate in this 44 min. programme. As you’d expect, there was a significant chunk of time devoted to research being done in the US, but Poland and Japan also featured, and the Canadian content was substantive. A number of tricky topics were covered, and transitions from one topic to the next were smooth.

In the end credits, I counted over 40 source materials from Getty Images, Google Canada, Gatebox, amongst others. It would have been interesting to find out which segments were produced by CBC.

David Suzuki’s (programme host) script was well written and his narration was enjoyable, engaging, and non-intrusive. That last quality is not always true of CBC hosts who can fall into the trap of overdramatizing the text.

Drilling down

I have followed artificial intelligence stories in a passive way (i.e., I don’t seek them out) for many years. Even so, there was a lot of material in the programme that was new to me.

For example, there was this love story (from the ‘I love her and see her as a real woman.’ Meet a man who ‘married’ an artificial intelligence hologram webpage on the CBC),

In The Machine That Feels, a documentary from The Nature of Things, we meet Kondo Akihiko, a Tokyo resident who “married” a hologram of virtual pop singer Hatsune Miku using a certificate issued by Gatebox (the marriage isn’t recognized by the state, and Gatebox acknowledges the union goes “beyond dimensions”).

I found Akihiko to be quite moving when he described his relationship, which is not unique. It seems some 4,000 men have ‘wed’ their digital companions; you can read about that and more on the ‘I love her and see her as a real woman.’ Meet a man who ‘married’ an artificial intelligence hologram webpage.

What does it mean to be human?

Overall, this Nature of Things episode embraces certainty, which means the question of what it means to be human is referenced rather than seriously discussed. It is an unanswerable philosophical question, and the programme is ill-equipped to address it, especially since none of the commentators are philosophers or seem inclined to philosophize.

The programme presents AI as a juggernaut. Briefly mentioned is the notion that we need to make some decisions about how our juggernaut is developed and utilized. No one discusses how we go about making changes to systems that are already making critical decisions for us. (For more about AI and decision-making, see my February 28, 2017 posting and scroll down to the ‘Algorithms and big data’ subhead for Cathy O’Neil’s description of how important decisions that affect us are being made by AI systems. She is the author of the 2016 book, ‘Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy’; still a timely read.)

In fact, the programme’s tone is mostly one of breathless excitement. A few misgivings are expressed. For example, one woman who has an artificial ‘texting friend’ (Replika, a chatbot app) noted that it can ‘get into your head’: she had a chat in which her ‘friend’ told her that all of a woman’s worth is based on her body. She pushed back but intimated that someone more vulnerable could find that messaging difficult to deal with.

The sequence featuring Akihiko and his hologram ‘wife’ is followed by one suggesting that people might become more isolated and emotionally stunted as they interact with artificial friends. It should be noted, Akihiko’s wife is described as ‘perfect’. I gather perfection means that you are always understanding and have no needs of your own. She also seems to be about 18″ high.

Akihiko has obviously been asked about his ‘wife’ before as his answers are ready. They boil down to “there are many types of relationships” and there’s nothing wrong with that. It’s an intriguing thought which is not explored.

Also unexplored, these relationships could be said to resemble slavery. After all, you pay for these friends over which you have control. But perhaps that’s alright since AI friends don’t have consciousness. Or do they? In addition to not being able to answer the question, “what is it to be human?” we still can’t answer the question, “what is consciousness?”

AI and creativity

The Nature of Things team works fast. ‘Beethoven X – The AI Project’ had its first performance on October 9, 2021. (See my October 1, 2021 post ‘Finishing Beethoven’s unfinished 10th Symphony’ for more information on the technical perspective of Ahmed Elgammal, Director of the Art & AI Lab at Rutgers University, on the project.

Briefly, Beethoven died before completing his 10th symphony, and a number of computer scientists, musicologists, and musicians collaborated with AI to finish the symphony.)

The one listener shown in the hall during a performance (Felix Mayer, music professor at the Technical University of Munich) doesn’t consider the work to be a piece of music. He does have a point. Beethoven left some notes, but this ’10th’ is at least partly mathematical guesswork: an algorithm chooses which note comes next based on probabilities derived from Beethoven’s music.
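
To make “an algorithm chooses which note comes next based on probability” a little more concrete, here is a deliberately tiny illustration of the idea in Python: a first-order Markov model learns transition probabilities from a short made-up fragment and then samples a continuation. It is my own toy, nothing like the Beethoven X team’s far richer system, and the ‘motif’ is a placeholder.

```python
# Toy next-note sampler (my own illustration; not the Beethoven X project's model).
import random
from collections import Counter, defaultdict

motif = ["G", "G", "G", "Eb", "F", "F", "F", "D"]   # made-up training fragment

# Learn P(next note | current note) from the fragment.
transitions = defaultdict(Counter)
for current, nxt in zip(motif, motif[1:]):
    transitions[current][nxt] += 1

def next_note(current):
    counts = transitions[current]
    if not counts:                                  # dead end: fall back to any seen note
        return random.choice(motif)
    notes, weights = zip(*counts.items())
    return random.choices(notes, weights=weights, k=1)[0]

# "Complete" a sketch that breaks off at G by repeatedly sampling probable continuations.
completion = ["G"]
for _ in range(7):
    completion.append(next_note(completion[-1]))
print(" ".join(completion))
```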

There was another artist also represented in the programme. Puzzlingly, it was the still-living Douglas Coupland. In my opinion, he’s better known as a visual artist than a writer (his Wikipedia entry lists him as a novelist first), but he has succeeded greatly in both fields.

What makes his inclusion in the Nature of Things ‘The Machine That Feels’ programme puzzling, is that it’s not clear how he worked with artificial intelligence in a collaborative fashion. Here’s a description of Coupland’s ‘AI’ project from a June 29, 2021 posting by Chris Henry on the Google Outreach blog (Note: Links have been removed),

… when the opportunity presented itself to explore how artificial intelligence (AI) inspires artistic expression — with the help of internationally renowned Canadian artist Douglas Coupland — the Google Research team jumped on it. This collaboration, with the support of Google Arts & Culture, culminated in a project called Slogans for the Class of 2030, which spotlights the experiences of the first generation of young people whose lives are fully intertwined with the existence of AI. 

This collaboration was brought to life by first introducing Coupland’s written work to a machine learning language model. Machine learning is a form of AI that provides computer systems the ability to automatically learn from data. In this case, Google research scientists tuned a machine learning algorithm with Coupland’s 30-year body of written work — more than a million words — so it would familiarize itself with the author’s unique style of writing. From there, curated general-public social media posts on selected topics were added to teach the algorithm how to craft short-form, topical statements. [emphases mine]

Once the algorithm was trained, the next step was to process and reassemble suggestions of text for Coupland to use as inspiration to create twenty-five Slogans for the Class of 2030. [emphasis mine]

“I would comb through ‘data dumps’ where characters from one novel were speaking with those in other novels in ways that they might actually do. It felt like I was encountering a parallel universe Doug,” Coupland says. “And from these outputs, the statements you see here in this project appeared like gems. Did I write them? Yes. No. Could they have existed without me? No.” [emphases mine]

So, the algorithms crunched through Coupland’s writing and the social media texts to produce slogans, which Coupland then ‘combed through’ to pick out 25 slogans for the ‘Slogans For The Class of 2030’ project. (Note: In the programme, he says that he started a sentence and then the AI system completed that sentence with material gleaned from his own writings, which brings to mind Exquisite Corpse, a collaborative game for writers originated by the Surrealists, possibly as early as 1918.)
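
For readers curious about the mechanics, here is a rough sketch of that general workflow in Python: fine-tune an openly available language model (GPT-2 here, standing in for whatever Google actually used) on a body of text, then sample short statements for a human to comb through. The Hugging Face transformers/datasets calls are standard, but the corpus file name, training settings, and prompt are all hypothetical placeholders; this is not Google’s or Coupland’s code.

```python
# Sketch: fine-tune a small language model on a text corpus, then generate candidate slogans.
# Hypothetical stand-in for the project described above, using open GPT-2 via Hugging Face.
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)
from datasets import load_dataset

tok = AutoTokenizer.from_pretrained("gpt2")
tok.pad_token = tok.eos_token                     # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

# "coupland_corpus.txt" is a placeholder for the million-word body of writing described above.
ds = load_dataset("text", data_files={"train": "coupland_corpus.txt"})
ds = ds.map(lambda batch: tok(batch["text"], truncation=True, max_length=128),
            batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="style-tuned-gpt2", num_train_epochs=3,
                           per_device_train_batch_size=8),
    train_dataset=ds["train"],
    data_collator=DataCollatorForLanguageModeling(tokenizer=tok, mlm=False),
)
trainer.train()                                   # the model picks up the corpus's style

# Generate short, topical statements for a human to curate ("comb through").
prompt = "In the future,"                         # hypothetical seed
inputs = tok(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_length=20, do_sample=True, top_p=0.9,
                         num_return_sequences=5, pad_token_id=tok.eos_token_id)
for o in outputs:
    print(tok.decode(o, skip_special_tokens=True))
```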

The ‘slogans’ project also reminds me of William S. Burroughs and the cut-up technique used in his work. From the William S. Burroughs Cut-up technique webpage on the Language is a Virus website (Thank you to Lake Rain Vajra for a very interesting website),

The cutup is a mechanical method of juxtaposition in which Burroughs literally cuts up passages of prose by himself and other writers and then pastes them back together at random. This literary version of the collage technique is also supplemented by literary use of other media. Burroughs transcribes taped cutups (several tapes spliced into each other), film cutups (montage), and mixed media experiments (results of combining tapes with television, movies, or actual events). Thus Burroughs’s use of cutups develops his juxtaposition technique to its logical conclusion as an experimental prose method, and he also makes use of all contemporary media, expanding his use of popular culture.

[Burroughs says] “All writing is in fact cut-ups. A collage of words read heard overheard. What else? Use of scissors renders the process explicit and subject to extension and variation. Clear classical prose can be composed entirely of rearranged cut-ups. Cutting and rearranging a page of written words introduces a new dimension into writing enabling the writer to turn images in cinematic variation. Images shift sense under the scissors smell images to sound sight to sound to kinesthetic. This is where Rimbaud was going with his color of vowels. And his “systematic derangement of the senses.” The place of mescaline hallucination: seeing colors tasting sounds smelling forms.

“The cut-ups can be applied to other fields than writing. Dr Neumann [emphasis mine] in his Theory of Games and Economic behavior introduces the cut-up method of random action into game and military strategy: assume that the worst has happened and act accordingly. … The cut-up method could be used to advantage in processing scientific data. [emphasis mine] How many discoveries have been made by accident? We cannot produce accidents to order. The cut-ups could add new dimension to films. Cut gambling scene in with a thousand gambling scenes all times and places. Cut back. Cut streets of the world. Cut and rearrange the word and image in films. There is no reason to accept a second-rate product when you can have the best. And the best is there for all. Poetry is for everyone . . .”

First, John von Neumann (1902 – 57) is a very important figure in the history of computing. From a February 25, 2017 John von Neumann and Modern Computer Architecture essay on the ncLab website, “… he invented the computer architecture that we use today.”

Here’s Burroughs on the history of writers and cutups (thank you to QUEDEAR for posting this clip),

You can hear Burroughs talk about the technique and how he started using it in 1959.

There is no explanation from Coupland as to how his project differs substantively from Burroughs’ cut-ups or a session of Exquisite Corpse. The use of a computer programme to crunch through data and give output doesn’t seem all that exciting. *(More about computers and chatbots at end of posting).* It’s hard to know if this was an interview situation where he wasn’t asked the question or if the editors decided against including it.
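
To underline how mechanically simple this kind of text manipulation can be, here is a toy cut-up in a few lines of Python. It is purely my own illustration (the two sample passages are placeholders), not anything Burroughs, Coupland, or Google used.

```python
# Toy cut-up: slice two passages into pieces and paste them back together at random.
import random

passage_a = "All writing is in fact cut-ups, a collage of words read heard overheard."
passage_b = "The robot draws using cameras in her eyes and algorithms in her arm."

pieces = (passage_a + " " + passage_b).split()    # crude "scissors": cut at every space
random.shuffle(pieces)
chunk = 4
lines = [" ".join(pieces[i:i + chunk]) for i in range(0, len(pieces), chunk)]
print("\n".join(lines))
```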

Kazuo Ishiguro?

Given that Ishiguro’s 2021 book (Klara and the Sun) is focused on an artificial friend and raises the question of ‘what does it mean to be human’, as well as the related question, ‘what is the nature of consciousness’, it would have been interesting to hear from him. He spent a fair amount of time looking into research on machine learning in preparation for his book. Maybe he was too busy?

AI and emotions

The work being done by Georgia Tech’s Dr. Ayanna Howard and her robotics lab is fascinating. They are teaching robots how to interpret human emotions. The segment which features researchers teaching and interacting with robots, Pepper and Salt, also touches on AI and bias.

Watching two African American researchers talk about the ways in which AI is unable to read emotions on ‘black’ faces as accurately as ‘white’ faces is quite compelling. It also reinforces the uneasiness you might feel after the ‘Replika’ segment where an artificial friend informs a woman that her only worth is her body.

(Interestingly, Pepper and Salt are produced by SoftBank Robotics, part of SoftBank, a multinational Japanese conglomerate [see a June 28, 2021 article by Ian Carlos Campbell for The Verge], whose entire management team is male according to their About page.)

While Howard is very hopeful about the possibilities of a machine that can read emotions, she doesn’t explore (on camera) any means for pushing back against bias other than training AI by using more black faces to help them learn. Perhaps more representative management and coding teams in technology companies?

While the programme largely focused on AI as an algorithm on a computer, robots can be enabled by AI (as can be seen in the segment with Dr. Howard).

My February 14, 2019 posting features research with a completely different approach to emotions and machines,

“I’ve always felt that robots shouldn’t just be modeled after humans [emphasis mine] or be copies of humans,” he [Guy Hoffman, assistant professor at Cornell University] said. “We have a lot of interesting relationships with other species. Robots could be thought of as one of those ‘other species,’ not trying to copy what we do but interacting with us with their own language, tapping into our own instincts.”

[from a July 16, 2018 Cornell University news release on EurekAlert]

This brings the question back to, what is consciousness?

What scientists aren’t taught

Dr. Howard notes that scientists are not taught to consider the implications of their work. Her comment reminded me of a question I was asked many years ago after a presentation; it concerned whether or not science had any morality. (I said, no.)

My reply angered an audience member (a visual artist who was working with scientists at the time) as she took it personally and started defending scientists as good people who care and have morals and values. She failed to understand that the way in which we teach science conforms to a notion that somewhere there are scientific facts which are neutral and objective. Society and its values are irrelevant in the face of the larger ‘scientific truth’ and, as a consequence, you don’t need to teach or discuss how your values or morals affect that truth or what the social implications of your work might be.

Science is practiced without much, if any, thought to values. By contrast, there is the medical injunction, “Do no harm,” which suggests to me that someone recognized competing values. E.g., if your important and worthwhile research is harming people, you should ‘do no harm’.

The experts, the connections, and the Canadian content

It’s been a while since I’ve seen Ray Kurzweil mentioned but he seems to be getting more attention these days. (See this November 16, 2021 posting by Jonny Thomson titled, “The Singularity: When will we all become super-humans? Are we really only a moment away from “The Singularity,” a technological epoch that will usher in a new era in human evolution?” on The Big Think for more). Note: I will have a little more about evolution later in this post.

Interestingly, Kurzweil is employed by Google these days (see his Wikipedia entry, the column to the right). So is Geoffrey Hinton, another one of the experts in the programme (see Hinton’s Wikipedia entry, the column to the right, under Institutions).

I’m not sure about Yoshua Bengio’s relationship with Google but he’s a professor at the Université de Montréal, and he’s the Scientific Director for Mila (Quebec’s Artificial Intelligence research institute) and IVADO (Institut de valorisation des données). Note: IVADO is not particularly relevant to what’s being discussed in this post.

As for Mila, the Canada Google blog in a November 21, 2016 posting notes a $4.5M grant to the institution,

Google invests $4.5 Million in Montreal AI Research

A new grant from Google for the Montreal Institute for Learning Algorithms (MILA) will fund seven faculty across a number of Montreal institutions and will help tackle some of the biggest challenges in machine learning and AI, including applications in the realm of systems that can understand and generate natural language. In other words, better understand a fan’s enthusiasm for Les Canadien [sic].

Google is expanding its academic support of deep learning at MILA, renewing Yoshua Bengio’s Focused Research Award and offering Focused Research Awards to MILA faculty at University of Montreal and McGill University:

Google reaffirmed their commitment to Mila in 2020 with a grant worth almost $4M (from a November 13, 2020 posting on the Mila website, Note: A link has been removed),

Google Canada announced today [November 13, 2020] that it will be renewing its funding of Mila – Quebec Artificial Intelligence Institute, with a generous pledge of nearly $4M over a three-year period. Google previously invested $4.5M US in 2016, enabling Mila to grow from 25 to 519 researchers.

In a piece written for Google’s Official Canada Blog, Yoshua Bengio, Mila Scientific Director, says that this year marked a “watershed moment for the Canadian AI community,” as the COVID-19 pandemic created unprecedented challenges that demanded rapid innovation and increased interdisciplinary collaboration between researchers in Canada and around the world.

COVID-19 has changed the world forever and many industries, from healthcare to retail, will need to adapt to thrive in our ‘new normal.’ As we look to the future and how priorities will shift, it is clear that AI is no longer an emerging technology but a useful tool that can serve to solve world problems. Google Canada recognizes not only this opportunity but the important task at hand and I’m thrilled they have reconfirmed their support of Mila with an additional $3.95 million funding grant until 2022.

– Yoshua Bengio, for Google’s Official Canada Blog

Interesting, eh? Of course, Douglas Coupland is working with Google, presumably for money, which would connect more than half of the Canadian content in the programme (Douglas Coupland, Yoshua Bengio, and Geoffrey Hinton; Kurzweil is an American) to Google.

My hat’s off to Google’s marketing communications and public relations teams.

Anthony Morgan of Science Everywhere also provided some Canadian content. His LinkedIn profile indicates that he’s working on a PhD in molecular science, which is described this way, “My work explores the characteristics of learning environments, that support critical thinking and the relationship between critical thinking and wisdom.”

Morgan is also the founder and creative director of Science Everywhere, from his LinkedIn profile, “An events & media company supporting knowledge mobilization, community engagement, entrepreneurship and critical thinking. We build social tools for better thinking.”

There is this from his LinkedIn profile,

I develop, create and host engaging live experiences & media to foster critical thinking.

I’ve spent my 15+ years studying and working in psychology and science communication, thinking deeply about the most common individual and societal barriers to critical thinking. As an entrepreneur, I lead a team to create, develop and deploy cultural tools designed to address those barriers. As a researcher I study what we can do to reduce polarization around science.

There’s a lot more to Morgan (do look him up; he has connections to the CBC and other media outlets). The difficulty is: why was he chosen to talk about artificial intelligence and emotions and creativity when he doesn’t seem to know much about the topic? He does mention GPT-3, an AI language model. He seems to be acting as an advocate for AI, although he offers this bit of almost cautionary wisdom, “… algorithms are sets of instructions.” (You can find out more about it in my April 27, 2021 posting. There’s also this November 26, 2021 posting [The Inherent Limitations of GPT-3] by Andrey Kurenkov, a PhD student with the Stanford [University] Vision and Learning Lab.)

Most of the cautionary commentary comes from Luke Stark, assistant professor at Western [Ontario] University’s Faculty of Information and Media Studies. He’s the one who mentions stunted emotional growth.

Before moving on, there is another set of connections through the Pan-Canadian Artificial Intelligence Strategy, a Canadian government science funding initiative announced in the 2017 federal budget. The funds allocated to the strategy are administered by the Canadian Institute for Advanced Research (CIFAR). Yoshua Bengio through Mila is associated with the strategy and CIFAR, as is Geoffrey Hinton through his position as Chief Scientific Advisor for the Vector Institute.

Evolution

Getting back to the Big Think posting mentioned earlier (“The Singularity: When will we all become super-humans? Are we really only a moment away from ‘The Singularity,’ a technological epoch that will usher in a new era in human evolution?”), xenobots point in a disconcerting (for some of us) evolutionary direction.

I featured the work, which is being done at Tufts University in the US, in my June 21, 2021 posting, which includes an embedded video,

From a March 31, 2021 news item on ScienceDaily,

Last year, a team of biologists and computer scientists from Tufts University and the University of Vermont (UVM) created novel, tiny self-healing biological machines from frog cells called “Xenobots” that could move around, push a payload, and even exhibit collective behavior in the presence of a swarm of other Xenobots.

Get ready for Xenobots 2.0.

Another excerpt in the posting notes that the team has “created life forms that self-assemble a body from single cells, do not require muscle cells to move, and even demonstrate the capability of recordable memory.”

Memory is key to intelligence, and this work introduces the notion of ‘living’ robots, which leads to questions about what constitutes life. ‘The Machine That Feels’ is already grappling with far too many questions to address this development, but introducing the research here might have laid the groundwork for the next episode, The New Human, telecast on November 26, 2021,

While no one can be certain what will happen, evolutionary biologists and statisticians are observing trends that could mean our future feet only have four toes (so long, pinky toe) or our faces may have new combinations of features. The new humans might be much taller than their parents or grandparents, or have darker hair and eyes.

And while evolution takes a lot of time, we might not have to wait too long for a new version of ourselves.

Technology is redesigning the way we look and function — at a much faster pace than evolution. We are merging with technology more than ever before: our bodies may now have implanted chips, smart limbs, exoskeletons and 3D-printed organs. A revolutionary gene editing technique has given us the power to take evolution into our own hands and alter our own DNA. How long will it be before we are designing our children?

Although the story about the xenobots doesn’t say so, we could also take the evolution of another species into our own hands.

David Suzuki, where are you?

Our programme host, David Suzuki, surprised me. I thought that, as an environmentalist, he’d point out that the huge amount of computing power needed for artificial intelligence, mentioned in the programme, constitutes an environmental issue. I also would have expected that a geneticist like Suzuki might have some concerns with regard to xenobots, but perhaps that’s being saved for the next episode (The New Human) of The Nature of Things.

Artificial stupidity

Thanks to Will Knight for introducing me to the term ‘artificial stupidity’. Knight, a senior writer, covers artificial intelligence for WIRED magazine. According to the term’s Wikipedia entry,

Artificial stupidity is commonly used as a humorous opposite of the term artificial intelligence (AI), often as a derogatory reference to the inability of AI technology to adequately perform its tasks.[1] However, within the field of computer science, artificial stupidity is also used to refer to a technique of “dumbing down” computer programs in order to deliberately introduce errors in their responses.

Knight was using the term in its humorous, derogatory form.
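For readers curious about the second, technical sense of the term, here is a minimal, hypothetical sketch (in Python) of what deliberately ‘dumbing down’ a program can look like: a toy number-guessing agent that occasionally discards its optimal move so a human opponent stands a chance. The function names and the error rate are my own illustration, not drawn from any particular system.

import random

def best_guess(low: int, high: int) -> int:
    """Optimal strategy: bisect the remaining range."""
    return (low + high) // 2

def dumbed_down_guess(low: int, high: int, error_rate: float = 0.3) -> int:
    """Deliberately imperfect strategy: play optimally most of the time,
    but occasionally guess at random; this is the intentional 'stupidity'."""
    if random.random() < error_rate:
        return random.randint(low, high)
    return best_guess(low, high)

# Toy game: find a secret number between 1 and 100.
secret, low, high = 42, 1, 100
guesses = 0
while True:
    guess = dumbed_down_guess(low, high)
    guesses += 1
    if guess == secret:
        break
    if guess < secret:
        low = guess + 1
    else:
        high = guess - 1
print(f"Found {secret} in {guesses} guesses")

Raising error_rate makes the agent weaker; setting it to zero restores the optimal bisection strategy. Game developers use the same basic idea to keep computer opponents beatable.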

Finally

The episode certainly got me thinking, if not quite in the way the producers might have hoped. ‘The Machine That Feels’ is a glossy, pretty well-researched piece of infotainment.

To be blunt, I like infotainment and have no problems with it, but it can be seductive. I found it easier to remember the artificial friends, wife, xenobots, and symphony than the critiques and concerns.

Hopefully, ‘The Machine That Feels’ stimulates more interest in some very important topics. If you missed the telecast, you can catch the episode here.

For anyone curious about predictive policing, which was mentioned in the Ayanna Howard segment, see my November 23, 2017 posting about Vancouver’s plunge into AI and car theft.

*ETA December 6, 2021: One of the first ‘chatterbots’ was ELIZA, a computer programme developed from 1964 to 1966. The most famous ELIZA script was DOCTOR, where the programme simulated a therapist. Many early users believed ELIZA understood and could respond as a human would, despite the insistence otherwise of Joseph Weizenbaum, the programme’s creator.
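For anyone wondering how a program with no understanding could give that impression, here is a minimal, hypothetical Python sketch of the keyword-matching and pronoun-reflection idea behind ELIZA-style chatterbots. The patterns and canned responses are my own illustration, not Weizenbaum’s original DOCTOR script.

import re

# Swap first- and second-person words so the reply points back at the speaker.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "your": "my", "you": "I"}

# Each rule pairs a keyword pattern with a response template; the catch-all comes last.
RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
    (r"(.*)", "Please go on."),
]

def reflect(text: str) -> str:
    return " ".join(REFLECTIONS.get(word, word) for word in text.lower().split())

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = re.match(pattern, utterance.lower())
        if match:
            return template.format(*(reflect(group) for group in match.groups()))
    return "Please go on."

print(respond("I feel anxious about my work"))
# Why do you feel anxious about your work?

The program never models meaning; it only matches surface patterns and echoes the speaker’s own words back, which is exactly why its apparent ‘understanding’ fooled so many early users.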

Lost Women of Science

Both an organization and a podcast series, Lost Women of Science is preparing for its second, third, and fourth podcast seasons thanks to a grant announced in a November 19, 2021 Lost Women of Science news release (on Cision),

 Journalist and author Katie Hafner, and bioethicist Amy Scharf, today announced that the Lost Women of Science podcast series will continue for an additional three seasons thanks to a grant award of $446,760 from the Gordon and Betty Moore Foundation. The podcast series will continue its partnership with public media organization PRX and the award-winning Scientific American magazine.

The first season features multiple in-depth episodes centered on Dr. Dorothy Andersen, a pediatric pathologist who identified and named cystic fibrosis in 1938. Three episodes are now available across all major podcast listening platforms, including Apple Podcasts, Google Podcasts, Spotify, Stitcher, and Amazon Music. The fourth episode [I believe it’s Season 1] will be released on Thanksgiving Day [November 25, 2021].

Genny Biggs, Special Projects Officer of the Gordon and Betty Moore Foundation said, “We have been excited about this project from our initial conversations and have been pleased to see the results. Our history books have unfortunately taught us too little about these women and we support bringing their stories to the forefront. We hope they will inspire the next generation of female scientists.”

Hafner said, “The response to the podcast so far has been overwhelmingly positive.  We could not be more grateful to the Gordon and Betty Moore Foundation, not only for early funding to help us get started, but for continued support and confidence that will allow us to tell more stories.”

Dr. Maria Klawe, President of Harvey Mudd College and Chair of the Lost Women of Science Initiative Advisory Board, said, “It’s wonderful that the Gordon and Betty Moore Foundation recognizes that women have been making great contributions to science for centuries, even though they’re often not recognized. And the rich storytelling approach has deep impact in helping people understand the importance of a scientist’s work.”

Earlier funding for Lost Women of Science has come from the Gordon and Betty Moore Foundation, Schmidt Futures and the John Templeton Foundation. The Initiative is also partnering with Barnard College at Columbia University, one-third of whose graduates are STEM majors. Harvey Mudd College graciously served as an early Fiscal Sponsor.

To learn more about the Lost Women of Science Initiative, or to donate to this important work, please visit: www.lostwomenofscience.org and follow @lostwomenofsci.

About Lost Women of Science:

The Lost Women of Science Initiative is a 501(c)3 nonprofit with two overarching and interrelated missions: to tell the story of female scientists who made groundbreaking achievements in their fields, yet remain largely unknown to the general public, and to inspire girls and young women to pursue education and careers in STEM. The Initiative’s flagship is its Lost Women of Science podcast series. As a full, mission-driven organization, the Lost Women of Science Initiative plans to digitize and archive its research, and to make all primary source material available to students and historians of science.

About the Gordon and Betty Moore Foundation:

The Gordon and Betty Moore Foundation fosters path-breaking scientific discovery, environmental conservation, patient care improvements and preservation of the special character of the Bay Area. Visit Moore.org and follow @MooreFound.

You can listen to this trailer for Season 1,

The four episodes currently available constitute a four-part series on Dorothy Andersen, her work, and how she got ‘lost’. You can find the podcasts here.

Thank you to the publicist who sent the announcement about the grant!

US President’s Council of Advisors on Science and Technology (PCAST) meeting on Biomanufacturing, the Federal Science and Technology Workforce, and the National Nanotechnology Initiative

It’s been years since I’ve featured a PCAST meeting here; I just don’t stumble across the notices all that often anymore.

Unfortunately, I got there late this time. It’s especially unfortunate as the meeting was on “Biomanufacturing, the Federal Science and Technology Workforce, and the National Nanotechnology Initiative.” Held on November 29, 2021, it was livestreamed. Happily, there’s already a video of the meeting (a little over 4.5 hours long) on YouTube.

If you go to the White House PCAST Meetings webpage, you’ll find, after scrolling down about 40% of the way, ‘Past Meetings’, which in addition to the past meetings includes agendas, lists of guests and their biographies, and more. Given the title of the meeting and the invitees, this looks like it will have a focus on the business of biotechnology and nanotechnology. This hearkens back to when former President Barack Obama pushed for nanotechnology manufacturing, taking the science out of the laboratories and commercializing it.

Here’s part of the agenda for the November 29, 2021 meeting (I’m particularly interested in the third session; apologies for the formatting),

President’s Council of Advisors on Science and Technology

Public Meeting Agenda
November 29, 2021
Virtual
(All times Eastern)

12:15 pm Welcome

PCAST Co-Chairs: Frances Arnold, Eric Lander, Maria Zuber


3:45 pm Session 3: Overview of the National Nanotechnology Initiative

Moderator: Eric Lander

Speaker: Lisa Friedersdorf, National Nanotechnology Coordination Office

The biographies for the speakers can be found here. (I’m glad to see that President Joe Biden has revitalized the council.)

For anyone unfamiliar with PCAST, it has an interesting history (from the President’s Council of Advisors on Science and Technology webpage),

Beginning in 1933 with President Franklin D. Roosevelt’s Science Advisory Board, each President has established an advisory committee of scientists, engineers, and health professionals. Although the name of the advisory board has changed over the years, the purpose has remained the same—to provide scientific and technical advice to the President of the United States.

Drawing from the nation’s most talented and accomplished individuals, President Biden’s PCAST consists of 30 members, including 20 elected members of the National Academies of Sciences, Engineering and Medicine, five MacArthur “Genius” Fellows, two former Cabinet secretaries, and two Nobel laureates. Its members include experts in astrophysics and agriculture, biochemistry and computer engineering, ecology and entrepreneurship, immunology and nanotechnology, neuroscience and national security, social science and cybersecurity, and more

Enjoy!