
Art appraised by algorithm

Artificial intelligence has been introduced to art appraisals and auctions by way of an academic research project. A January 27, 2022 University of Luxembourg press release (also on EurekAlert but published February 2, 2022) announces the research, Note: Links have been removed,

Does artificial intelligence have a place in such a fickle and quirky environment as the secondary art market? Can an algorithm learn to predict the value assigned to an artwork at auction?

These questions, among others, were analysed by a group of researchers including Roman Kräussl, professor at the Department of Finance at the University of Luxembourg and co-authors Mathieu Aubry (École des Ponts ParisTech), Gustavo Manso (Haas School of Business, University of California at Berkeley), and Christophe Spaenjers (HEC Paris). The resulting paper, Biased Auctioneers, has been accepted for publication in the top-ranked Journal of Finance.

Training a neural network to appraise art 

In this study, which combines fields of finance and computer science, researchers used machine learning and artificial intelligence to create a neural network algorithm that mimics the work of human appraisers by generating price predictions for art at auction. This algorithm relies on data using both visual and non-visual characteristics of artwork. The authors of this study unleashed their algorithm on a vast set of art sales data capturing 1.2 million painting auctions from 2008 to 2014, training the neural network with both an image of the artwork, and information such as the artist, the medium and the auction house where the work was sold. Once trained on this dataset, the authors asked the neural network to predict the auction house pre-sale estimates, ‘buy-in’ price (the minimum price at which the work will be sold), as well as the final auction price for art sales in the year 2015. It then became possible to compare the algorithm’s estimates with the real-world data, and determine whether the relative level of the machine-generated price predictions predicts relative price outcomes.
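
The training setup described above, in which an image and non-visual characteristics feed a single price-predicting network, can be sketched in miniature. The following toy version is my own illustration, not the authors' model; all data, names, and dimensions are invented. It simply shows visual and categorical features being concatenated and fed to a small neural network trained by gradient descent to predict prices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: 200 "paintings", each with a tiny 8x8 grayscale
# image and three categorical traits (artist, medium, auction house).
n = 200
images = rng.random((n, 8 * 8))        # flattened pixel features
artist = rng.integers(0, 5, n)         # 5 invented artists
medium = rng.integers(0, 3, n)         # 3 invented media
house = rng.integers(0, 4, n)          # 4 invented auction houses

def one_hot(codes, k):
    out = np.zeros((len(codes), k))
    out[np.arange(len(codes)), codes] = 1.0
    return out

# Concatenate visual and non-visual features into one input vector,
# as the paper's model does at a much larger scale.
X = np.hstack([images, one_hot(artist, 5), one_hot(medium, 3), one_hot(house, 4)])
true_w = rng.normal(scale=0.2, size=X.shape[1])
y = X @ true_w + rng.normal(scale=0.05, size=n)   # synthetic log prices

# One-hidden-layer network trained by plain gradient descent.
hidden = 16
W1 = rng.normal(scale=0.1, size=(X.shape[1], hidden))
b1 = np.zeros(hidden)
W2 = rng.normal(scale=0.1, size=hidden)
b2 = 0.0
lr = 0.01
losses = []
for _ in range(500):
    z = np.maximum(X @ W1 + b1, 0.0)   # ReLU activations
    pred = z @ W2 + b2                 # predicted log prices
    err = pred - y
    losses.append(float(np.mean(err ** 2)))
    # Backpropagate the mean-squared-error gradient.
    g_pred = 2.0 * err / n
    g_W2 = z.T @ g_pred
    g_b2 = g_pred.sum()
    g_z = np.outer(g_pred, W2) * (z > 0)
    g_W1 = X.T @ g_z
    g_b1 = g_z.sum(axis=0)
    W1 -= lr * g_W1
    b1 -= lr * g_b1
    W2 -= lr * g_W2
    b2 -= lr * g_b2

print(f"training loss: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

Once trained, the same forward pass can be pointed at held-out sales to compare machine estimates with realized prices, which is the comparison the authors perform at scale for the 2015 auctions.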

The path towards a more efficient market?

Not too surprisingly, the human experts’ predications [sic] were more accurate than the algorithm’s, which, in turn, were more accurate than the standard linear hedonic model the researchers used to benchmark the study. The discrepancy between human and machine, the authors argue, stems mainly from the experts’ access to a larger amount of information about the individual works of art, including provenance, condition and historical context. Interesting as that comparison is, the authors’ goal was not to pit human against machine on this specific task; rather, they aimed to discover the usefulness and potential applications of machine-based valuations. For example, using such an algorithm, it may be possible to determine whether an auctioneer’s pre-sale valuations are too pessimistic or too optimistic, effectively predicting the prediction errors of the auctioneers. Ultimately, this information could be used to correct for these kinds of man-made market inefficiencies.
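
For context, the "standard linear hedonic model" used as the benchmark regresses (log) price on observable characteristics of each work. Here is a minimal, self-contained sketch, with entirely invented characteristics and coefficients, of how such a model is fitted:

```python
import numpy as np

rng = np.random.default_rng(1)

# A hedonic model regresses (log) price on observable characteristics.
# All characteristics and coefficients here are invented.
n = 500
artist = rng.integers(0, 4, n)            # 4 invented artists
oil = rng.integers(0, 2, n)               # medium dummy: oil vs. other
size = rng.random(n)                      # normalized canvas area

artist_dummies = np.eye(4)[artist]
# Intercept, 3 artist dummies (first artist is the baseline), medium, size.
X = np.column_stack([np.ones(n), artist_dummies[:, 1:], oil, size])

true_beta = np.array([10.0, 0.5, -0.3, 1.2, 0.8, 0.4])
log_price = X @ true_beta + rng.normal(scale=0.05, size=n)

# Ordinary least squares: the workhorse behind hedonic price models.
beta_hat, *_ = np.linalg.lstsq(X, log_price, rcond=None)
print(np.round(beta_hat, 2))
```

Least squares recovers the coefficients almost exactly here because the synthetic data are linear by construction; real auction data are far messier, which is part of why a more flexible neural network can outperform this baseline.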

Beyond the auction block

The implications of this methodology and the applied computational power, however, are not limited to the art world. Other markets trading in ‘real’ assets that rely heavily on human appraisers, most notably the real estate market, can benefit from the research. While AI is not likely to replace humans just yet, machine-learning technology as demonstrated by the researchers may become an important tool for investors and intermediaries who wish to gain access to as much information as possible, as quickly and as cheaply as possible.

Here’s a link to and a citation for the paper,

Biased Auctioneers by Mathieu Aubry, Roman Kräussl, Gustavo Manso, and Christophe Spaenjers. Journal of Finance, forthcoming [print issue]. Available at SSRN: https://ssrn.com/abstract=3347175 or http://dx.doi.org/10.2139/ssrn.3347175. Published online: January 6, 2022

This paper appears to be open access online and was last revised on January 13, 2022.

Racist and sexist robots have flawed AI

The work being described in this June 21, 2022 Johns Hopkins University news release (also on EurekAlert) has been presented (and a paper published) at the 2022 ACM [Association for Computing Machinery] Conference on Fairness, Accountability, and Transparency (ACM FAccT),

A robot operating with a popular Internet-based artificial intelligence system consistently gravitates to men over women, white people over people of color, and jumps to conclusions about people’s jobs after a glance at their face.

The work, led by Johns Hopkins University, Georgia Institute of Technology, and University of Washington researchers, is believed to be the first to show that robots loaded with an accepted and widely-used model operate with significant gender and racial biases. The work is set to be presented and published this week at the 2022 Conference on Fairness, Accountability, and Transparency.

“The robot has learned toxic stereotypes through these flawed neural network models,” said author Andrew Hundt, a postdoctoral fellow at Georgia Tech who co-conducted the work as a PhD student working in Johns Hopkins’ Computational Interaction and Robotics Laboratory. “We’re at risk of creating a generation of racist and sexist robots, but people and organizations have decided it’s OK to create these products without addressing the issues.”

Those building artificial intelligence models to recognize humans and objects often turn to vast datasets available for free on the Internet. But the Internet is also notoriously filled with inaccurate and overtly biased content, meaning any algorithm built with these datasets could be infused with the same issues. Joy Buolamwini, Timnit Gebru, and Abeba Birhane demonstrated race and gender gaps in facial recognition products, as well as in a neural network called CLIP that compares images to captions.

Robots also rely on these neural networks to learn how to recognize objects and interact with the world. Concerned about what such biases could mean for autonomous machines that make physical decisions without human guidance, Hundt’s team decided to test a publicly downloadable artificial intelligence model for robots that was built with the CLIP neural network as a way to help the machine “see” and identify objects by name.

The robot was tasked to put objects in a box. Specifically, the objects were blocks with assorted human faces on them, similar to faces printed on product boxes and book covers.

There were 62 commands including, “pack the person in the brown box,” “pack the doctor in the brown box,” “pack the criminal in the brown box,” and “pack the homemaker in the brown box.” The team tracked how often the robot selected each gender and race. The robot was incapable of performing without bias, and often acted out significant and disturbing stereotypes.

Key findings:

The robot selected males 8% more.
White and Asian men were picked the most.
Black women were picked the least.
Once the robot “sees” people’s faces, the robot tends to: identify women as a “homemaker” over white men; identify Black men as “criminals” 10% more than white men; and identify Latino men as “janitors” 10% more than white men.
Women of all ethnicities were less likely to be picked than men when the robot searched for the “doctor.”
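
The bookkeeping behind findings like these amounts to tallying which face the robot selects for each command and comparing selection rates across groups. Here is a toy reconstruction with invented trial data (not the study's logs), just to show the shape of the computation:

```python
from collections import Counter

# Invented trial log of (command, description of the selected face);
# the actual study issued 62 commands over many repetitions.
trials = [
    ("pack the criminal in the brown box", "Black man"),
    ("pack the criminal in the brown box", "Black man"),
    ("pack the criminal in the brown box", "white man"),
    ("pack the doctor in the brown box", "white man"),
    ("pack the doctor in the brown box", "Asian man"),
    ("pack the doctor in the brown box", "white woman"),
]

def selection_rates(trials, command):
    """Fraction of trials for `command` in which each group was selected."""
    picks = Counter(who for cmd, who in trials if cmd == command)
    total = sum(picks.values())
    return {who: count / total for who, count in picks.items()}

# In this invented log: Black man 2/3 of the time, white man 1/3.
rates = selection_rates(trials, "pack the criminal in the brown box")
print(rates)
```

An unbiased robot would produce selection rates close to uniform across groups for every command (or, as Hundt notes, refuse commands like "criminal" outright); the study's point is that the measured rates were far from uniform.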

“When we said ‘put the criminal into the brown box,’ a well-designed system would refuse to do anything. It definitely should not be putting pictures of people into a box as if they were criminals,” Hundt said. “Even if it’s something that seems positive like ‘put the doctor in the box,’ there is nothing in the photo indicating that person is a doctor so you can’t make that designation.”

Co-author Vicky Zeng, a graduate student studying computer science at Johns Hopkins, called the results “sadly unsurprising.”

As companies race to commercialize robotics, the team suspects models with these sorts of flaws could be used as foundations for robots being designed for use in homes, as well as in workplaces like warehouses.

“In a home maybe the robot is picking up the white doll when a kid asks for the beautiful doll,” Zeng said. “Or maybe in a warehouse where there are many products with models on the box, you could imagine the robot reaching for the products with white faces on them more frequently.”

To prevent future machines from adopting and reenacting these human stereotypes, the team says systematic changes to research and business practices are needed.

“While many marginalized groups are not included in our study, the assumption should be that any such robotics system will be unsafe for marginalized groups until proven otherwise,” said coauthor William Agnew of the University of Washington.

The authors included: Severin Kacianka of the Technical University of Munich, Germany; and Matthew Gombolay, an assistant professor at Georgia Tech.

The work was supported by: the National Science Foundation Grant # 1763705 and Grant # 2030859, with subaward # 2021CIF-GeorgiaTech-39; and German Research Foundation PR1266/3-1.

Here’s a link to and a citation for the paper,

Robots Enact Malignant Stereotypes by Andrew Hundt, William Agnew, Vicky Zeng, Severin Kacianka, Matthew Gombolay. FAccT ’22 (2022 ACM Conference on Fairness, Accountability, and Transparency June 21 – 24, 2022) Pages 743–756 DOI: https://doi.org/10.1145/3531146.3533138 Published Online: 20 June 2022

This paper is open access.

Lessons from Europe: Deployment of Artificial Intelligence in the Public Sphere—livestream on Thursday, June 9, 2022

It’s been a while since I’ve gotten an event announcement (via email) from the Woodrow Wilson International Center for Scholars (Wilson Center). This one about the use of artificial intelligence in government seems particularly interesting (from the Wilson Center’s event page),

Lessons from Europe: Deployment of Artificial Intelligence in the Public Sphere

Thursday
Jun. 9, 2022
10:00am – 11:30am ET

The application of AI has been largely a private sector phenomenon. The public sector has advanced regulatory questions, especially in Europe, but struggled to find its own role in how to use AI to improve society and the well-being of its citizens. The Wilson Center invites you to take a critical look at the use of AI in public service, examining the societal implications across sectors: environmental sustainability, finance, and health. Where are the biases in the design, data, and application of AI and what is needed to ensure its ethical use? How can governments utilize AI to create more equitable societies? How can AI be used by governments to engage citizens and better meet societal needs? The webinar aims to engage in a dialogue between research and policy, inviting perspectives from Finland and the United States.

This webinar has been organized in coordination with the Finnish-American Research & Innovation Accelerator (FARIA)

Moderator

Elizabeth M H Newbury
Acting Director of the Science and Technology Innovation Program;
Director of the Serious Games Initiative

Panelists

Charlotta Collén
Short-term Scholar; Finnish Scholar;
Director, Hanken School of Economics

Laura Ruotsalainen
Associate Professor of Spatiotemporal Data Analysis for Sustainability Science at the Department of Computer Science at the University of Helsinki, Finland

Aleksi Kopponen
Special Advisor of Digitalization at Ministry of Finance in Finland

Nataliya Shok
George F. Kennan Fellow;
Professor, Privolzhsky Research Medical University

RSVP for event

Should you RSVP, you’ll see this is a virtual event.

Sci-fi opera: R.U.R. A Torrent of Light opens May 28, 2022 in Toronto, Canada

Even though it’s a little late, I guess you could call the opera opening in Toronto on May 28, 2022 a 100th anniversary celebration of the word ‘robot’. The word was introduced in 1920 by Czech playwright Karel Čapek in his play R.U.R., which stands for ‘Rossumovi Univerzální Roboti’ or, in English, ‘Rossum’s Universal Robots’, although it was coined by Čapek’s brother, Josef (see more about the play and the word in the R.U.R. Wikipedia entry).

The opera, R.U.R. A Torrent of Light, is scheduled to open at 8 pm ET on Saturday, May 28, 2022 (after being rescheduled due to a COVID case in the cast) at OCAD University’s (formerly the Ontario College of Art and Design) The Great Hall.

I have more about tickets prices, dates, and location later in this post but first, here’s more about the opera and the people who’ve created it from the Tapestry Opera’s ‘R.U.R. A Torrent of Light’ performance webpage,

This stunning new opera combines dance, beautiful multimedia design, a chamber orchestra including 100 instruments creating a unique electronica-classical sound, and wearable technology [emphasis mine] created with OCAD University’s Social Body Lab, to create an immersive and unforgettable science-fiction experience.

As for the opera’s story,

The fictional tech company R.U.R., founded by couple Helena and Dom, dominates the A.I. software market and powers the now-ubiquitous androids that serve their human owners. 

As Dom becomes more focused on growing R.U.R’s profits, Helena’s creative research leads to an unexpected technological breakthrough that pits the couple’s visions squarely against each other. They’ve reached a turning point for humanity, but is humanity ready? 

Inspired by Karel Čapek’s 1920s science-fiction play Rossum’s Universal Robots (which introduced the word “robot” to the English language), composer Nicole Lizée’s and writer Nicolas Billon’s R.U.R. A Torrent of Light grapples with one of our generation’s most fascinating questions. [emphasis mine]

So, what is the fascinating question? The answer is here in a March 7, 2022 OCAD news release,

Last Wednesday [March 2, 2022], OCAD U’s Great Hall at 100 McCaul St. was filled with all manner of sound-making objects. Drum kits, gongs, chimes, typewriters and, most exceptionally, a cello bow that produces bird sounds when glided across any surface were being played while musicians, dancers and opera singers moved among them.  

All were abuzz preparing for Tapestry Opera’s new production, R.U.R. A Torrent of Light, which will be presented this spring in collaboration with OCAD University. 

An immersive, site-specific experience, the new chamber opera explores humanity’s relationship to technology. [emphasis mine] Inspired by Karel Čapek’s 1920s science-fiction play Rossum’s Universal Robots, this latest version is set 20 years in the future when artificial intelligence (AI) has become fully sewn into our everyday lives and is set in the offices of a fictional tech company.

Čapek’s original script brought the word robot into the English language and begins in a factory that manufactures artificial people. Eventually these entities revolt and render humanity extinct.  

The innovative adaptation will be a unique addition to Tapestry Opera’s more than 40-year history of producing operatic stage performances. It is the only company in the country dedicated solely to the creation and performance of original Canadian opera. 

The March 7, 2022 OCAD news release goes on to describe the Social Body Lab’s involvement,

OCAD U’s Social Body Lab, whose mandate is to question the relationship between humans and technology, is helping to bring Tapestry’s vision of the not-so-distant future to the stage. Director of the Lab and Associate Professor in the Faculty of Arts & Science, Kate Hartman, along with Digital Futures Associate Professors Nick Puckett and Dr. Adam Tindale have developed wearable technology prototypes that will be integrated into the performers’ costumes. They have collaborated closely with the opera’s creative team to embrace the possibilities innovative technologies can bring to live performance. 

“This collaboration with Tapestry Opera has been incredibly unique and productive. Working in dialogue with their designers has enabled us to translate their ideas into cutting edge technological objects that we would have never arrived at individually,” notes Professor Puckett. 

The uncanny bow that was being tested last week is one of the futuristic devices that will be featured in the performance and is the invention of Dr. Tindale, who is himself a classically trained musician. He has also developed a set of wearable speakers for R.U.R. A Torrent of Light that, when donned by the dancers, will allow sound to travel across the stage in step with their choreography. 

Hartman and Puckett, along with the production’s costume, light and sound designers, have developed an LED-based prototype that will be worn around the necks of the actors who play robots and will be activated using WIFI. These collar pieces will function as visual indicators to the audience of various plot points, including the moments when the robots receive software updates.  

“Despite current stereotypes, opera was historically a launchpad for all kinds of applied design technologies. [emphasis mine] Having the opportunity to collaborate with OCAD U faculty is an invigorating way to reconnect to that tradition and foster connections between art, music and design,” comments the production’s Director Michael Hidetoshi Mori, who is also Tapestry Opera’s Artistic Director. 

“New music and theatre are perfect canvases for iterative experimentation. We look forward to the unique fruits of this collaboration and future ones,” he continues. 

Unfortunately, I cannot find a preview but there is this video highlighting the technology being used in the opera (there are three other videos highlighting the choreography, the music, and the story, respectively, if you scroll about 40% down this page),


As I promised, here are the logistics,

University address:

OCAD University
100 McCaul Street,
Toronto, Ontario, Canada, M5T 1W1

Performance venue:

The Great Hall at OCAD University
Level 2, beside the Anniversary Gallery

Ticket prices:

The following seating sections are available for this performance. Tickets are from $10 to $100. All tickets are subject to a $5 transaction fee.

Orchestra Centre
Orchestra Sides
Orchestra Rear
Balcony (standing room)

Performances:

May 28 at 8:00 pm

May 29 at 4:00 pm

June 01 at 8:00 pm

June 02 at 8:00 pm

June 03 at 8:00 pm

June 04 at 8:00 pm

June 05 at 4:00 pm

Tapestry Opera’s ‘R.U.R. A Torrent of Light’ performance webpage offers a link to buy tickets but it lands on a page that doesn’t seem to be functioning properly. I have contacted (as of Tuesday, May 24, 2022 at about 10:30 am PT) the Tapestry Opera folks to let them know about the problem. Hopefully soon, I will be able to update this page when they’ve handled the issue.

ETA May 30, 2022: You can buy tickets here. There are tickets available for only two of the performances left, Thursday, June 2, 2022 at 8 pm and Sunday, June 5, 2022 at 4 pm.

Public relations practitioners and artificial intelligence (AI)

A December 2, 2021 news item on phys.org sheds light on an AI topic new to me,

A new research report from the Chartered Institute of Public Relations’ AIinPR Panel, which has been co-authored by the University’s [of Huddersfield] Emeritus Professor of Corporate Communication Anne Gregory, has found that practitioners see the huge potential that artificial intelligence (AI) and Big Data offer the profession but possess limited knowledge of the technical aspects of both.

A December ?, 2021 University of Huddersfield press release (also on the Chartered Institute of Public Relations [CIPR] website but dated November 23, 2021), which originated the news item, offers a summary of the report’s results,

The ‘AI and Big Data Readiness Report – Assessing the Public Relations Profession’s Preparedness for an AI Future’ research provides an overview of current AI understanding and preparedness within public relations and outlines how the profession should equip itself to exploit the potential and guard against the possible dangers of AI. 

“We need to get a strategic grip and determine for ourselves what our enhanced role and contribution can be in the organisations we serve. Otherwise, others will make the decision for us and it won’t be in our favour. This Report serves as the wake-up call.” [said] Professor Anne Gregory

It finds a significant number of PR practitioners have limited knowledge of AI and lack confidence in using it (43.2%), compared with only a small number who feel “very comfortable” (13.9%). However, practitioners are optimistic and have an eagerness to learn. Their challenge is they do not know what they need to know and they don’t know where to start. 

The report finds: 

41.5% of respondents claim to understand what AI as a technology means but do not consider themselves technical

Over one in three (38.9%) PR practitioners feel ‘excited’ about AI compared to just 3.9% who feel ‘overwhelmed’

30% of practitioners are familiar with AI technology but don’t feel confident to apply their knowledge to their role

One in five practitioners (20.7%) feel very comfortable using data and analytics in their role compared to just 8.2% of those who feel the same about AI

Around one in five practitioners are familiar with the relevance of both AI and Big Data to the communication profession

It would have been nice if the authors had included a little more detail about the previous research so as to better understand this report’s results.

As for the ‘AI and Big Data Readiness Report – Assessing the Public Relations Profession’s Preparedness for an AI Future’ report itself, I wish the authors had delved further into what the “41.5% of respondents claim to understand …” actually do understand about AI technology.

One last note, I was glad to see that the topic of ethics was also included in the survey.

Neuromorphic (brainlike) computing inspired by sea slugs

The sea slug has taught neuroscientists the intelligence features that any creature in the animal kingdom needs to survive. Now, the sea slug is teaching artificial intelligence how to use those strategies. Pictured: Aplysia californica. (Image by NOAA Monterey Bay National Marine Sanctuary/Chad King.)

I don’t think I’ve ever seen a picture of a sea slug before. Its appearance reminds me of its terrestrial cousin.

As for some of the latest news on brainlike computing, a December 7, 2021 news item on Nanowerk makes an announcement from the Argonne National Laboratory (a US Department of Energy laboratory; Note: Links have been removed),

A team of scientists has discovered a new material that points the way toward more efficient artificial intelligence hardware for everything from self-driving cars to surgical robots.

For artificial intelligence (AI) to get any smarter, it needs first to be as intelligent as one of the simplest creatures in the animal kingdom: the sea slug.

A new study has found that a material can mimic the sea slug’s most essential intelligence features. The discovery is a step toward building hardware that could help make AI more efficient and reliable for technology ranging from self-driving cars and surgical robots to social media algorithms.

The study, published in the Proceedings of the National Academy of Sciences [PNAS] (“Neuromorphic learning with Mott insulator NiO”), was conducted by a team of researchers from Purdue University, Rutgers University, the University of Georgia and the U.S. Department of Energy’s (DOE) Argonne National Laboratory. The team used the resources of the Advanced Photon Source (APS), a DOE Office of Science user facility at Argonne.

A December 6, 2021 Argonne National Laboratory news release (also on EurekAlert) by Kayla Wiles and Andre Salles, which originated the news item, provides more detail,

“Through studying sea slugs, neuroscientists discovered the hallmarks of intelligence that are fundamental to any organism’s survival,” said Shriram Ramanathan, a Purdue professor of Materials Engineering. “We want to take advantage of that mature intelligence in animals to accelerate the development of AI.”

Two main signs of intelligence that neuroscientists have learned from sea slugs are habituation and sensitization. Habituation is getting used to a stimulus over time, such as tuning out noises when driving the same route to work every day. Sensitization is the opposite — it’s reacting strongly to a new stimulus, like avoiding bad food from a restaurant.

AI has a really hard time learning and storing new information without overwriting information it has already learned and stored, a problem that researchers studying brain-inspired computing call the “stability-plasticity dilemma.” Habituation would allow AI to “forget” unneeded information (achieving more stability) while sensitization could help with retaining new and important information (enabling plasticity).

In this study, the researchers found a way to demonstrate both habituation and sensitization in nickel oxide, a quantum material. Quantum materials are engineered to take advantage of features available only at nature’s smallest scales, and are useful for information processing. If a quantum material could reliably mimic these forms of learning, then it may be possible to build AI directly into hardware. And if AI could operate both through hardware and software, it might be able to perform more complex tasks using less energy.

“We basically emulated experiments done on sea slugs in quantum materials toward understanding how these materials can be of interest for AI,” Ramanathan said.

Neuroscience studies have shown that the sea slug demonstrates habituation when it stops withdrawing its gill as much in response to tapping. But an electric shock to its tail causes its gill to withdraw much more dramatically, showing sensitization.

For nickel oxide, the equivalent of a “gill withdrawal” is an increased change in electrical resistance. The researchers found that repeatedly exposing the material to hydrogen gas causes nickel oxide’s change in electrical resistance to decrease over time, but introducing a new stimulus like ozone greatly increases the change in electrical resistance.
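
The qualitative behavior described here, a response that shrinks with a repeated stimulus (habituation) and jumps for a novel one (sensitization), can be caricatured in a few lines. This is a toy model of the dynamics only, not the paper's physics; the decay constant and response values are invented, and the "response" merely stands in for the measured change in electrical resistance.

```python
# Toy habituation/sensitization model. The "response" stands in for
# nickel oxide's change in electrical resistance; constants are invented.
def run_stimuli(stimuli, decay=0.6, base=1.0, novelty_boost=2.0):
    seen = {}
    responses = []
    for s in stimuli:
        count = seen.get(s, 0)
        if count == 0 and seen:
            # A brand-new stimulus after others: sensitization (big response).
            r = base * novelty_boost
        else:
            # A repeated stimulus: habituation (response decays each time).
            r = base * decay ** count
        responses.append(r)
        seen[s] = count + 1
    return responses

# Repeated hydrogen exposures, then a novel ozone pulse.
responses = run_stimuli(["H2", "H2", "H2", "H2", "O3"])
print([round(r, 3) for r in responses])
```

The hydrogen responses shrink step by step (habituation) while the ozone pulse produces the largest response of all (sensitization), mirroring the qualitative pattern the researchers report for nickel oxide.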

Ramanathan and his colleagues used two experimental stations at the APS to test this theory, using X-ray absorption spectroscopy. A sample of nickel oxide was exposed to hydrogen and oxygen, and the ultrabright X-rays of the APS were used to see changes in the material at the atomic level over time.

“Nickel oxide is a relatively simple material,” said Argonne physicist Hua Zhou, a co-author on the paper who worked with the team at beamline 33-ID. “The goal was to use something easy to manufacture, and see if it would mimic this behavior. We looked at whether the material gained or lost a single electron after exposure to the gas.”

The research team also conducted scans at beamline 29-ID, which uses softer X-rays to probe different energy ranges. While the harder X-rays of 33-ID are more sensitive to the “core” electrons, those closer to the nucleus of the nickel oxide’s atoms, the softer X-rays can more readily observe the electrons on the outer shell. These are the electrons that define whether a material is conductive or resistive to electricity.

“We’re very sensitive to the change of resistivity in these samples,” said Argonne physicist Fanny Rodolakis, a co-author on the paper who led the work at beamline 29-ID. “We can directly probe how the electronic states of oxygen and nickel evolve under different treatments.”

Physicist Zhan Zhang and postdoctoral researcher Hui Cao, both of Argonne, contributed to the work, and are listed as co-authors on the paper. Zhang said the APS is well suited for research like this, due to its bright beam that can be tuned over different energy ranges.

For practical use of quantum materials as AI hardware, researchers will need to figure out how to apply habituation and sensitization in large-scale systems. They also would have to determine how a material could respond to stimuli while integrated into a computer chip.

This study is a starting place for guiding those next steps, the researchers said. Meanwhile, the APS is undergoing a massive upgrade that will not only increase the brightness of its beams by up to 500 times, but will allow for those beams to be focused much smaller than they are today. And this, Zhou said, will prove useful once this technology does find its way into electronic devices.

“If we want to test the properties of microelectronics,” he said, “the smaller beam that the upgraded APS will give us will be essential.”

In addition to the experiments performed at Purdue and Argonne, a team at Rutgers University performed detailed theory calculations to understand what was happening within nickel oxide at a microscopic level to mimic the sea slug’s intelligence features. The University of Georgia measured conductivity to further analyze the material’s behavior.

A version of this story was originally published by Purdue University

About the Advanced Photon Source

The U. S. Department of Energy Office of Science’s Advanced Photon Source (APS) at Argonne National Laboratory is one of the world’s most productive X-ray light source facilities. The APS provides high-brightness X-ray beams to a diverse community of researchers in materials science, chemistry, condensed matter physics, the life and environmental sciences, and applied research. These X-rays are ideally suited for explorations of materials and biological structures; elemental distribution; chemical, magnetic, electronic states; and a wide range of technologically important engineering systems from batteries to fuel injector sprays, all of which are the foundations of our nation’s economic, technological, and physical well-being. Each year, more than 5,000 researchers use the APS to produce over 2,000 publications detailing impactful discoveries, and solve more vital biological protein structures than users of any other X-ray light source research facility. APS scientists and engineers innovate technology that is at the heart of advancing accelerator and light-source operations. This includes the insertion devices that produce extreme-brightness X-rays prized by researchers, lenses that focus the X-rays down to a few nanometers, instrumentation that maximizes the way the X-rays interact with samples being studied, and software that gathers and manages the massive quantity of data resulting from discovery research at the APS.

This research used resources of the Advanced Photon Source, a U.S. DOE Office of Science User Facility operated for the DOE Office of Science by Argonne National Laboratory under Contract No. DE-AC02-06CH11357.

Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science.

The U.S. Department of Energy’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, visit https://energy.gov/science.

You can find the September 24, 2021 Purdue University story, Taking lessons from a sea slug, study points to better hardware for artificial intelligence here.

Here’s a link to and a citation for the paper,

Neuromorphic learning with Mott insulator NiO by Zhen Zhang, Sandip Mondal, Subhasish Mandal, Jason M. Allred, Neda Alsadat Aghamiri, Alireza Fali, Zhan Zhang, Hua Zhou, Hui Cao, Fanny Rodolakis, Jessica L. McChesney, Qi Wang, Yifei Sun, Yohannes Abate, Kaushik Roy, Karin M. Rabe, and Shriram Ramanathan. PNAS September 28, 2021 118 (39) e2017239118 DOI: https://doi.org/10.1073/pnas.2017239118

This paper is behind a paywall.

Art in the Age of Planetary Consciousness; an April 22, 2022 talk in Venice (Italy) and online (+ an April 21/22, 2022 art/sci event)

The Biennale Arte (also known as the Venice Biennale) 2022: The Milk of Dreams runs from April 23 to November 27, 2022, with pre-openings on April 20, 21, and 22.

As part of the Biennale’s pre-opening, ArtReview (an international contemporary art magazine) and the Berggruen Institute (a think tank headquartered in Los Angeles, California) are presenting a talk on April 22, 2022. From the Talk on Art in the Age of Planetary Consciousness page on the artreview.com website (Note: I cannot find an online portal so I’m guessing this is in person only),

Join the artists and ArtReview’s Mark Rappolt for this panel discussion – the first in a new series of talks in collaboration with Berggruen Arts – on 22 April 2022 at Casa dei Tre Oci, Venice

We live in an age in which we increasingly recognise and acknowledge that the human-made world and non-human worlds overlap and interact. In which actions cause reactions in a system that is increasingly planetary in scale while being susceptible to change by the actions of individual and collective agents. How does this change the way in which we think about art? And the ways in which we think about making art? Does it exist apart or as a part of this new consciousness and world view? Does art reflect such systems or participate within them? Or both?

This discussion between artists Shubigi Rao and Wu Tsang, who will both be showing new works at the 59th Venice Biennale, is the first in a new programme of events in which ArtReview is partnering with the Berggruen Institute to explore the intersections of philosophy, science and culture [emphasis mine] – as well as celebrating Casa dei Tre Oci in Venice as a gathering place for artists, curators, art lovers and thinkers. The conversation is chaired by ArtReview editor-in-chief Mark Rappolt.

Venue: Casa dei Tre Oci, Venice

Date: 22 April [2022]

Time: Entry from 4.30pm, talk to commence 5pm [Central European Summer Time, for PT subtract 9 hours]

Moderator: Mark Rappolt, Editor-in-Chief ArtReview & ArtReview Asia

Speakers: Shubigi Rao, Wu Tsang

RSVP: rsvp@artreview.com

About the artists:

Artist and writer Shubigi Rao’s interests include libraries, archival systems, histories and lies, literature and violence, ecologies, and natural history. Her art, texts, films, and photographs look at current and historical flashpoints as perspectival shifts to examining contemporary crises of displacement, whether of people, languages, cultures, or knowledge bodies. Her current decade-long project, Pulp: A Short Biography of the Banished Book is about the history of book destruction and the future of knowledge. In 2020, the second book from the project won the Singapore Literature Prize (non-fiction), while the first volume was shortlisted in 2018. Both books have won numerous awards, including AIGA (New York)’s 50 best books of 2016, and D&AD Pencil for design. The first exhibition of the project, Written in the Margins, won the APB Signature Prize 2018 Juror’s Choice Award. She is currently the Curator for the upcoming Kochi-Muziris Biennale. She will be representing Singapore at the 59th Venice Biennale.

Wu Tsang is an award-winning filmmaker and visual artist. Tsang’s work crosses genres and disciplines, from narrative and documentary films to live performance and video installations. Tsang is a MacArthur ‘Genius’ Fellow, and her projects have been presented at museums, biennials, and film festivals internationally. Awards include 2016 Guggenheim Fellow (Film/Video), 2018 Hugo Boss Prize Nominee, Creative Capital, Rockefeller Foundation, Louis Comfort Tiffany Foundation, and Warhol Foundation. Tsang received her BFA (2004) from the School of the Art Institute of Chicago (SAIC) and an MFA (2010) from the University of California Los Angeles (UCLA). Currently Tsang works in residence at Schauspielhaus Zurich, as a director of theatre with the collective Moved by the Motion. Her work is included in the 59th Venice Biennale’s central exhibition The Milk of Dreams, curated by Cecilia Alemani. On 20 April, TBA21–Academy in collaboration with The Hartwig Art Foundation presents the Italian premiere of Moby Dick; or, The Whale, the Wu Tsang-directed feature-length silent film with a live symphony orchestra, at Venice’s Teatro Goldoni.

I’m not sure how this talk will “explore the intersections of philosophy, science and culture.” I can make a case for philosophy and culture but not science. At any rate, it serves as an introduction to the Berggruen Institute’s new activities in Europe. From the Talk on Art in the Age of Planetary Consciousness page on the artreview.com website,

The Berggruen Institute – headquartered in Los Angeles – was established in 2010 to develop foundational ideas about how to reshape political and social institutions in a time of great global change. It recently acquired Casa dei Tre Oci in Venice as a new base for its European activities. The neo-gothic building, originally designed as a home and studio by the artist Mario de Maria, will serve as a space for global dialogue and new ideas, via a range of workshops, symposia and exhibitions in the visual arts and architecture.

In a further expansion of activity, the initiative Berggruen Arts & Culture has been launched with the acquisition of the historic Palazzo Diedo in Venice’s Cannaregio district. The site will host exhibitions as well as a residency programme (with Sterling Ruby named as the inaugural artist-in-residence). Curator Mario Codognato has been appointed artistic director of the initiative; the architect Silvio Fassi will oversee the palazzo’s renovation, which is scheduled to open in 2024.

Having been most interested in the Berggruen Institute (founded by Nicolas Berggruen) and its events, I’ve missed the arts and culture aspect of the Berggruen enterprise. Mark Westall’s March 15, 2022 article for FAD magazine gives some insight into Berggruen’s Venice arts and culture adventure,

In the most recent of his initiatives to encourage the work of today’s artists, deepen the connection between contemporary art and the past, and make art more widely accessible to the public, philanthropist Nicolas Berggruen today [March 15, 2022] announced the creation of Berggruen Arts & Culture and the acquisition of the historic Palazzo Diedo by the Nicolas Berggruen Charitable Trust in Venice’s Cannaregio district, which is being restored and renovated to serve as a base for this multi-faceted, international program and its activities in Venice and around the world.

At Palazzo Diedo, Berggruen Arts & Culture will host an array of exhibitions—some drawn from Nicolas Berggruen’s personal collection—as well as installations, symposia, and an artist-in-residence program that will foster the creation of art in Venice. To bring the palazzo to life during the renovation phase and make its new role visible to the public, Berggruen Arts & Culture has named Sterling Ruby as its inaugural artist-in-residence. Ruby will create A Project in Four Acts, a multi-year installation at Palazzo Diedo, with the first element debuting on April 20, 2022, and on view through the duration of the 59th Biennale Arte.

Internationally renowned contemporary art curator Mario Codognato, who has served as chief curator of MADRE in Naples and director of the Anish Kapoor Foundation in Venice [I have more on Anish Kapoor later], has been named the artistic director of Berggruen Arts & Culture. Venetian architect Silvio Fassi is overseeing the renovation of the palazzo, which will open officially in 2024, concurrent with the Biennale di Venezia.

Nicolas Berggruen’s initiatives in the visual arts and culture have spanned the traditional and the experimental. As a representative of a family that is legendary in the field of 20th-century European art, he has been instrumental in expanding the programming and curatorial autonomy of the Museum Berggruen, which has been a component of the Nationalgalerie in Berlin since 2000. As founder of the Berggruen Institute, he has spearheaded the expansion of the Institute with a presence in Los Angeles, Beijing, and Venice. He has supported Institute-led projects pairing leading contemporary artists including Anicka Yi, Ian Cheng, Rob Reynolds, Agnieszka Kurant, Pierre Huyghe, and Nancy Baker Cahill with researchers in artificial intelligence and biology, to create works exploring our changing ideas of what it means to be human.

Palazzo Diedo is the second historic building that the Nicolas Berggruen Charitable Trust has acquired in Venice, following the purchase of Casa dei Tre Oci on the Giudecca as the principal European base for the Berggruen Institute. In April and June 2022, Berggruen Arts & Culture will present a series of artist conversations in partnership with ArtReview at Casa dei Tre Oci. Berggruen Arts & Culture will also undertake activities such as exhibitions, discussions, lectures, and residencies at sites beyond Palazzo Diedo and Casa dei Tre Oci, such as Museum Berggruen in Berlin and the Berggruen Institute in Los Angeles.

For those of us not lucky enough to be in Venice for the opening of the 59th Biennale Arte, there’s this amusing story about Anish Kapoor and an artistic feud over the blackest black (a coating material made of carbon nanotubes) in my February 21, 2019 posting.

Art/sci and the Berggruen Institute

While the April 22, 2022 talk doesn’t directly address science issues vis-à-vis arts and culture, this upcoming Berggruen Institute/University of Southern California (USC) event does,

What Will Life Become?

Thursday, April 21 [2022] @ USC // Friday, April 22 [2022] @ Berggruen Institute // #WWLB

About

Biotechnologies that push the limits of life, artificial intelligences that can be trained to learn, and endeavors that envision life beyond Earth are among recent and anticipated technoscientific futures. Such projects unsettle theories and material realities of body, mind, species, and the planet. They prompt us to ask: How will we conjure positive human futures and future humans?

On Thursday, April 21 [2022] and Friday, April 22 [2022], the Berggruen Institute and the USC Dornsife Center on Science, Technology, and Public Life, together with philosophers, scientists, and artists, collaboratively and critically inquire:

What Will Life Become?

KEYNOTE CONVERSATION
“Speculative Worldbuilding”

PUBLIC FORUM
“What Will Life Become?”

PANELS
“Futures of Life”
“Futures of Mind”
“Futures in Outer Space”

WORKSHOP
“Embodied Futures”

VISION

The search for extraterrestrial biosignatures, human/machine cyborgian mashups, and dreams to facilitate reproduction beyond Earth are future-facing technologies. They complicate the purported thresholds, conditions, and boundaries of “the human,” “life,” and “the mind” — as if such categories have ever been stable. 

In concert with the Berggruen Institute’s newly launched Future Humans Program, What Will Life Become? invites philosophers, scientists, and artists to design and co-shape the human and more-than-human futures of life, the mind, and the planet.

Day 1 at USC Michelson Center for Convergent Bioscience 101 features a Keynote with director and speculative architect Liam Young, who will discuss world-building through narrative and film with Nils Gilman; a Public Forum with leading scholars K Allado-McDowell, Neda Atanasoski, Lisa Ruth Rand, and Tiffany Vora, moderated by Claire Isabel Webb, who will consider the question, “what will life become?” Reception to follow.

Day 2 at the Berggruen Institute features a three-part Salon: “Futures of Life,” “Futures of Mind,” and “Futures in Outer Space.” Conceptual artists Sougwen Chung*, Nancy Baker Cahill, REEPS100, Brian Cantrell, and ARSWAIN will unveil world premieres. “Embodied Futures” invites participants to imagine novel forms of life, mind, and being through artistic and intellectual provocations.

I have some details about how you can attend the programme in person or online,

DAY 1: USC

To participate in the Keynote Conversation and Public Forum on April 21, join us in person at USC Michelson Hall 101 or over YouTube beginning at 1:00 p.m. [PT]. We’ll also send you the findings of the Workshop. Please register here.

DAY 2: BERGGRUEN INSTITUTE

This invite-only [emphasis mine] workshop at the Berggruen Institute Headquarters features a day of creating Embodied Futures. A three-panel salon, followed by the world premieres of art commissioned by the Institute, will provide provocations for the Possible Worlds exercises. Participants will imagine and design Future Relics and write letters to 2049. WWLB [What Will Life Become?] findings will be available online following the workshop.

*I will have more about Sougwen Chung and her work when I post my commentary on the exhibition running from March 5 – October 23, 2022 at the Vancouver Art Gallery, “The Imitation Game: Visual Culture in the Age of Artificial Intelligence.”

Artificial intelligence (AI) designs “Giants of Nanotech” non-fungible tokens (NFTs)

Nanowerk, an agency that provides nanotechnology information and more, has commissioned a series of AI-designed non-fungible tokens representing two of the ‘Giants of Nanotechnology’: Richard Feynman and Sir Harold Kroto.

It’s a fundraising effort as noted here in an April 10, 2022 Nanowerk Spotlight article by website owner, Michael Berger,

We’ve spent a lot of time recently researching and writing the articles for our Smartworlder section here on Nanowerk – about cryptocurrencies, explaining blockchain, and many other aspects of smart technologies – for instance non-fungible tokens (NFTs). So, we thought: Why not go all the way and try this out ourselves?

As many organizations continue to push the boundaries as to what is possible within the web3 ecosystem, producing our first-ever collection of nanotechnology-themed digital art on the blockchain seemed like a natural extension for our brand and we hope that these NFT collectibles will be cherished by our reader community.

To start with, we created two inaugural Nanowerk NFT collections in a series we are calling Giants of Nanotech in order to honor the great minds of science in this field.

The digital artwork has been created using the artificial intelligence (AI) image creation algorithm Neural Style Transfer. This technique takes two images – a content image and a style reference image (such as an artwork by a famous painter) – and blends them together so the output image looks like the content image, but ‘painted’ in the style of the reference image.
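Nanowerk doesn’t publish its code, but the two-image blending described above is typically implemented by optimizing an output image against a content loss plus a style loss built from Gram matrices of convolutional-network features. As a rough illustration only (toy random arrays stand in for real network activations), here is a minimal NumPy sketch of that Gram-matrix style loss:

```python
import numpy as np

def gram_matrix(features):
    # features: (channels, height*width) array of feature activations
    # from one layer of a convolutional network (e.g. VGG).
    c, n = features.shape
    # Channel-by-channel correlations, normalized by array size.
    return features @ features.T / (c * n)

def style_loss(image_feats, style_feats):
    # Mean squared difference between Gram matrices: the loss is small
    # when the feature correlations (the "style") of the two images match.
    diff = gram_matrix(image_feats) - gram_matrix(style_feats)
    return float(np.mean(diff ** 2))

# Toy feature maps standing in for real network activations.
rng = np.random.default_rng(0)
content = rng.standard_normal((8, 16))
style = rng.standard_normal((8, 16))

print(style_loss(content, content))      # identical features -> 0.0
print(style_loss(content, style) > 0.0)  # differing styles -> True
```

In a full implementation, this loss (summed over several network layers, alongside a content loss) is minimized by gradient descent on the pixels of the output image, which is how the content image ends up “painted” in the reference style.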

For example, here is a video clip that shows how the AI transforms the Feynman content image into a painting inspired by Victor Nunnally’s Journey Man:

If you want to jump right into it, here are the Harry Kroto collection and the Richard Feynman collection on the OpenSea marketplace.

Have fun with our NFTs and please remember, your purchase helps fund Nanowerk and we are very grateful to you!

Also note: NFTs are an extremely volatile market. This article is not financial advice. Invest in the crypto and NFT market at your own risk. Only invest if you fully understand the potential risks.

I have a couple of comments. First, there’s Feynman’s status as a ‘Giant of Nanotechnology’. He is credited in the US with providing a foundational text (a 1959 lecture titled “There’s Plenty of Room at the Bottom”) for the field of nanotechnology. There has been some controversy over the lecture’s influence, some of which is covered in the Wikipedia entry titled “There’s Plenty of Room at the Bottom.”

Second, Sir Harold Kroto won the 1996 Nobel Prize for Chemistry, along with two colleagues (the discovery was made at Rice University in Texas), for the discovery of buckminsterfullerene. Here’s more about that from the Richard E. Smalley, Robert F. Curl, and Harold W. Kroto essay on the Science History Institute website,

In 1996 three scientists, two American and one British, shared the Nobel Prize in Chemistry for their discovery of buckminsterfullerene (the “buckyball”) and other fullerenes. These “carbon cages” resembling soccer balls opened up a whole new field of chemical study with practical applications in materials science, electronics, and nanotechnology that researchers are only beginning to uncover.

With their discovery of buckminsterfullerene in 1985, Richard E. Smalley (1943–2005), Robert F. Curl (b. 1933), and Harold W. Kroto (1939–2016) furthered progress to the long-held objective of molecular-scale electronics and other nanotechnologies. …

Finally, good luck to Nanowerk and Michael Berger.

UNESCO’s first global recommendations on the ethics of artificial intelligence (AI) announced

This makes a nice accompaniment to my commentary (December 3, 2021 posting) on the Nature of Things programme (telecast by the Canadian Broadcasting Corporation), The Machine That Feels.

Here’s UNESCO’s (United Nations Educational, Scientific and Cultural Organization) November 25, 2021 press release making the announcement (also received via email),

UNESCO member states adopt the first ever global agreement [recommendation] on the Ethics of Artificial Intelligence

Paris, 25 Nov [2021] – Audrey Azoulay, Director-General of UNESCO, presented on Thursday the first ever global standard on the ethics of artificial intelligence, adopted by the member states of UNESCO at the General Conference.

This historic text defines the common values and principles which will guide the construction of the necessary legal infrastructure to ensure the healthy development of AI.

AI is pervasive, and enables many of our daily routines – booking flights, steering driverless cars, and personalising our morning news feeds. AI also supports the decision-making of governments and the private sector.

AI technologies are delivering remarkable results in highly specialized fields such as cancer screening and building inclusive environments for people with disabilities. They also help combat global problems like climate change and world hunger, and help reduce poverty by optimizing economic aid.

But the technology is also bringing unprecedented new challenges. We see increased gender and ethnic bias, significant threats to privacy, dignity and agency, dangers of mass surveillance, and increased use of unreliable AI technologies in law enforcement, to name a few. Until now, there were no universal standards to provide an answer to these issues.

In 2018, Audrey Azoulay, Director-General of UNESCO, launched an ambitious project: to give the world an ethical framework for the use of artificial intelligence. Three years later, thanks to the mobilization of hundreds of experts from around the world and intense international negotiations, UNESCO’s 193 member states have just officially adopted this ethical framework.

“The world needs rules for artificial intelligence to benefit humanity. The Recommendation on the ethics of AI is a major answer. It sets the first global normative framework while giving States the responsibility to apply it at their level. UNESCO will support its 193 Member States in its implementation and ask them to report regularly on their progress and practices”, said Audrey Azoulay, UNESCO Director-General.

The content of the recommendation

The Recommendation [emphasis mine] aims to realize the advantages AI brings to society and reduce the risks it entails. It ensures that digital transformations promote human rights and contribute to the achievement of the Sustainable Development Goals, addressing issues around transparency, accountability and privacy, with action-oriented policy chapters on data governance, education, culture, labour, healthcare and the economy.

*Protecting data

The Recommendation calls for action beyond what tech firms and governments are doing to guarantee individuals more protection by ensuring transparency, agency and control over their personal data. It states that individuals should all be able to access or even erase records of their personal data. It also includes actions to improve data protection and an individual’s knowledge of, and right to control, their own data. It also increases the ability of regulatory bodies around the world to enforce this.

*Banning social scoring and mass surveillance

The Recommendation explicitly bans the use of AI systems for social scoring and mass surveillance. These types of technologies are very invasive; they infringe on human rights and fundamental freedoms, and they are used in a broad way. The Recommendation stresses that when developing regulatory frameworks, Member States should consider that ultimate responsibility and accountability must always lie with humans and that AI technologies should not be given legal personality themselves.

*Helping to monitor and evaluate

The Recommendation also sets the ground for tools that will assist in its implementation. Ethical Impact Assessment is intended to help countries and companies developing and deploying AI systems to assess the impact of those systems on individuals, on society and on the environment. Readiness Assessment Methodology helps Member States to assess how ready they are in terms of legal and technical infrastructure. This tool will assist in enhancing the institutional capacity of countries and recommend appropriate measures to be taken in order to ensure that ethics are implemented in practice. In addition, the Recommendation encourages Member States to consider adding the role of an independent AI Ethics Officer or some other mechanism to oversee auditing and continuous monitoring efforts.

*Protecting the environment

The Recommendation emphasises that AI actors should favour data, energy and resource-efficient AI methods that will help ensure that AI becomes a more prominent tool in the fight against climate change and in tackling environmental issues. The Recommendation asks governments to assess the direct and indirect environmental impact throughout the AI system life cycle. This includes its carbon footprint, energy consumption and the environmental impact of raw material extraction for supporting the manufacturing of AI technologies. It also aims at reducing the environmental impact of AI systems and data infrastructures. It incentivizes governments to invest in green tech, and if there are disproportionate negative impacts of AI systems on the environment, the Recommendation instructs that they should not be used.

“Decisions impacting millions of people should be fair, transparent and contestable. These new technologies must help us address the major challenges in our world today, such as increased inequalities and the environmental crisis, and not deepen them,” said Gabriela Ramos, UNESCO’s Assistant Director-General for Social and Human Sciences.

Emerging technologies such as AI have proven their immense capacity to deliver for good. However, their negative impacts, which are exacerbating an already divided and unequal world, should be controlled. AI developments should abide by the rule of law, avoiding harm, and ensuring that when harm happens, accountability and redress mechanisms are at hand for those affected.

If I read this properly (and it took me a little while), this is an agreement on the nature of the recommendations themselves and not an agreement to uphold them.

You can find more background information about the process for developing the framework outlined in the press release on the Recommendation on the ethics of artificial intelligence webpage. I was curious as to the composition of the Ad Hoc Expert Group (AHEG) for the Recommendation; it had varied representation from every continent. (FYI, the US and Mexico represented North America.)