Tag Archives: University of Southern California (USC)

100-fold increase in AI energy efficiency

Most people don’t realize how much energy computing, streaming video, and other technologies consume, and AI (artificial intelligence) consumes a lot of it. (For more about work being done in this area, there’s my October 13, 2023 posting about an upcoming ArtSci Salon event in Toronto featuring Laura U. Marks’s recent work ‘Streaming Carbon Footprint’ and my October 16, 2023 posting about how much water is used for AI.)

So this news, from an October 12, 2023 Northwestern University news release (also received via email and on EurekAlert), is welcome. Note: Links have been removed,

AI just got 100-fold more energy efficient

Nanoelectronic device performs real-time AI classification without relying on the cloud

– AI is so energy hungry that most data analysis must be performed in the cloud
– New energy-efficient device enables AI tasks to be performed within wearables
– This allows real-time analysis and diagnostics for faster medical interventions
– Researchers tested the device by classifying 10,000 electrocardiogram samples
– The device successfully identified six types of heart beats with 95% accuracy

Northwestern University engineers have developed a new nanoelectronic device that can perform accurate machine-learning classification tasks in the most energy-efficient manner yet. Using 100-fold less energy than current technologies, the device can crunch large amounts of data and perform artificial intelligence (AI) tasks in real time without beaming data to the cloud for analysis.

With its tiny footprint, ultra-low power consumption and lack of lag time to receive analyses, the device is ideal for direct incorporation into wearable electronics (like smart watches and fitness trackers) for real-time data processing and near-instant diagnostics.

To test the concept, engineers used the device to classify large amounts of information from publicly available electrocardiogram (ECG) datasets. Not only could the device efficiently and correctly identify an irregular heartbeat, it also was able to determine the arrhythmia subtype from among six different categories with near 95% accuracy.

The research was published today (Oct. 12 [2023]) in the journal Nature Electronics.

“Today, most sensors collect data and then send it to the cloud, where the analysis occurs on energy-hungry servers before the results are finally sent back to the user,” said Northwestern’s Mark C. Hersam, the study’s senior author. “This approach is incredibly expensive, consumes significant energy and adds a time delay. Our device is so energy efficient that it can be deployed directly in wearable electronics for real-time detection and data processing, enabling more rapid intervention for health emergencies.”

A nanotechnology expert, Hersam is Walter P. Murphy Professor of Materials Science and Engineering at Northwestern’s McCormick School of Engineering. He also is chair of the Department of Materials Science and Engineering, director of the Materials Research Science and Engineering Center and member of the International Institute of Nanotechnology. Hersam co-led the research with Han Wang, a professor at the University of Southern California, and Vinod Sangwan, a research assistant professor at Northwestern.

Before machine-learning tools can analyze new data, these tools must first accurately and reliably sort training data into various categories. For example, if a tool is sorting photos by color, then it needs to recognize which photos are red, yellow or blue in order to accurately classify them. An easy chore for a human, yes, but a complicated — and energy-hungry — job for a machine.

For current silicon-based technologies to categorize data from large sets like ECGs, it takes more than 100 transistors — each requiring its own energy to run. But Northwestern’s nanoelectronic device can perform the same machine-learning classification with just two devices. By reducing the number of devices, the researchers drastically reduced power consumption and developed a much smaller device that can be integrated into a standard wearable gadget.

The secret behind the novel device is its unprecedented tunability, which arises from a mix of materials. While traditional technologies use silicon, the researchers constructed the miniaturized transistors from two-dimensional molybdenum disulfide and one-dimensional carbon nanotubes. So instead of needing many silicon transistors — one for each step of data processing — the reconfigurable transistors are dynamic enough to switch among various steps.

“The integration of two disparate materials into one device allows us to strongly modulate the current flow with applied voltages, enabling dynamic reconfigurability,” Hersam said. “Having a high degree of tunability in a single device allows us to perform sophisticated classification algorithms with a small footprint and low energy consumption.”

To test the device, the researchers looked to publicly available medical datasets. They first trained the device to interpret data from ECGs, a task that typically requires significant time from trained health care workers. Then, they asked the device to classify six types of heart beats: normal, atrial premature beat, premature ventricular contraction, paced beat, left bundle branch block beat and right bundle branch block beat.
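
The underlying paper (cited below) frames this task as support vector machine (SVM) classification with a mixed kernel, realized in hardware. As a rough software analogue, and emphatically not the device’s actual implementation, here is a minimal scikit-learn sketch of a mixed-kernel SVM trained on synthetic stand-ins for the six heartbeat classes; the features, mixing weight, and kernel parameters are invented for illustration.

# A software sketch of mixed-kernel SVM classification, loosely
# analogous to what the paper reports in hardware. The features
# below are random stand-ins, not real ECG data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
classes = ["normal", "atrial premature", "premature ventricular",
           "paced", "left bundle branch block", "right bundle branch block"]

# Synthetic stand-in features: 100 "beats" per class, 16 features each,
# with class-dependent means so the toy problem is learnable.
X = np.vstack([rng.normal(loc=i, scale=1.0, size=(100, 16))
               for i in range(len(classes))])
y = np.repeat(np.arange(len(classes)), 100)

def mixed_kernel(A, B, alpha=0.5, gamma=0.05):
    # Weighted sum of a Gaussian and a sigmoid kernel: one plausible
    # way to "mix" kernels, not necessarily the paper's exact mixture.
    sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    gaussian = np.exp(-gamma * sq_dists)
    sigmoid = np.tanh(gamma * (A @ B.T))
    return alpha * gaussian + (1 - alpha) * sigmoid

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = SVC(kernel=mixed_kernel).fit(X_train, y_train)
print(f"toy six-class accuracy: {clf.score(X_test, y_test):.2%}")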

The nanoelectronic device was able to accurately identify each arrhythmia type across the 10,000 ECG samples. By bypassing the need to send data to the cloud, the device not only saves critical time for a patient but also protects privacy.

“Every time data are passed around, it increases the likelihood of the data being stolen,” Hersam said. “If personal health data is processed locally — such as on your wrist in your watch — that presents a much lower security risk. In this manner, our device improves privacy and reduces the risk of a breach.”

Hersam imagines that, eventually, these nanoelectronic devices could be incorporated into everyday wearables, personalized to each user’s health profile for real-time applications. They would enable people to make the most of the data they already collect without sapping power.

“Artificial intelligence tools are consuming an increasing fraction of the power grid,” Hersam said. “It is an unsustainable path if we continue relying on conventional computer hardware.”

Here’s a link to and a citation for the paper,

Reconfigurable mixed-kernel heterojunction transistors for personalized support vector machine classification by Xiaodong Yan, Justin H. Qian, Jiahui Ma, Aoyang Zhang, Stephanie E. Liu, Matthew P. Bland, Kevin J. Liu, Xuechun Wang, Vinod K. Sangwan, Han Wang & Mark C. Hersam. Nature Electronics (2023) DOI: https://doi.org/10.1038/s41928-023-01042-7 Published: 12 October 2023

This paper is behind a paywall.

Combining silicon with metal oxide memristors to create powerful, low-energy chips enabling AI in portable devices

In this one week, I’m publishing my first stories (see also the June 13, 2023 posting “ChatGPT and a neuromorphic [brainlike] synapse“) where artificial intelligence (AI) software is combined with a memristor (a hardware component) for brainlike (neuromorphic) computing.

Here’s more about some of the latest research from a March 30, 2023 news item on ScienceDaily,

Everyone is talking about the newest AI and the power of neural networks, forgetting that software is limited by the hardware on which it runs. But it is hardware, says USC [University of Southern California] Professor of Electrical and Computer Engineering Joshua Yang, that has become “the bottleneck.” Now, Yang’s new research with collaborators might change that. They believe that they have developed a new type of chip with the best memory of any chip thus far for edge AI (AI in portable devices).

A March 29, 2023 University of Southern California (USC) news release (also on EurekAlert), which originated the news item, contextualizes the research and delves further into the topic of neuromorphic hardware,

For approximately the past 30 years, while the size of the neural networks needed for AI and data science applications doubled every 3.5 months, the hardware capability needed to process them doubled only every 3.5 years. According to Yang, hardware presents an increasingly severe problem for which few have patience.

Governments, industry, and academia are trying to address this hardware challenge worldwide. Some continue to work on hardware solutions with silicon chips, while others are experimenting with new types of materials and devices.  Yang’s work falls into the middle—focusing on exploiting and combining the advantages of the new materials and traditional silicon technology that could support heavy AI and data science computation. 

Their new paper in Nature focuses on the understanding of fundamental physics that leads to a drastic increase in memory capacity needed for AI hardware. The team led by Yang, with researchers from USC (including Han Wang’s group), MIT [Massachusetts Institute of Technology], and the University of Massachusetts, developed a protocol for devices to reduce “noise” and demonstrated the practicality of using this protocol in integrated chips. This demonstration was made at TetraMem, a startup company co-founded by Yang and his co-authors (Miao Hu, Qiangfei Xia, and Glenn Ge), to commercialize AI acceleration technology. According to Yang, this new memory chip has the highest information density per device (11 bits) among all types of known memory technologies thus far. Such small but powerful devices could play a critical role in bringing incredible power to the devices in our pockets. The chips are not just for memory but also for the processor. And millions of them in a small chip, working in parallel to rapidly run your AI tasks, could require only a small battery for power.

The chips that Yang and his colleagues are creating combine silicon with metal oxide memristors in order to create powerful but energy-efficient chips. The technique focuses on using the positions of atoms to represent information rather than the number of electrons (which is the current technique involved in computations on chips). The positions of the atoms offer a compact and stable way to store more information in an analog, instead of digital, fashion. Moreover, the information can be processed where it is stored instead of being sent to one of the few dedicated ‘processors,’ eliminating the so-called ‘von Neumann bottleneck’ existing in current computing systems. In this way, says Yang, computing for AI is “more energy efficient with a higher throughput.”
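
To make the compute-in-memory idea concrete, here is a deliberately idealized sketch of my own, assuming nothing about TetraMem’s actual circuits: in a memristor crossbar, the stored conductances form a matrix G, the input voltages form a vector v, and Ohm’s and Kirchhoff’s laws deliver the column currents i = Gᵀv, so the matrix-vector multiply at the heart of neural-network inference happens where the weights are stored. The 11 bits per device mentioned above corresponds to 2^11 = 2048 distinguishable conductance levels.

# Idealized memristor-crossbar matrix-vector multiply. No device noise
# or wire resistance is modeled; this only illustrates the principle.
import numpy as np

rng = np.random.default_rng(1)
levels = 2 ** 11            # 11 bits per device = 2048 conductance levels
g_max = 1e-4                # assumed maximum conductance, in siemens

W = rng.random((64, 32))    # weights to be stored, in [0, 1)
G = np.round(W * (levels - 1)) / (levels - 1) * g_max   # "program" devices

v = rng.random(64)          # input voltages applied to the 64 rows
i_out = G.T @ v             # column currents: the analog MVM result

# The analog result matches the digital one up to quantization error.
print(np.max(np.abs(i_out / g_max - W.T @ v)))   # small quantization error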

How it works: 

Yang explains that electrons, which are manipulated in traditional chips, are “light.” And this lightness makes them prone to moving around and being more volatile. Instead of storing memory through electrons, Yang and collaborators are storing memory in full atoms. Here is why this memory matters. Normally, says Yang, when one turns off a computer, the information memory is gone—but if you need that memory to run a new computation and your computer needs the information all over again, you have lost both time and energy. This new method, focusing on activating atoms rather than electrons, does not require battery power to maintain stored information. Similar scenarios happen in AI computations, where a stable memory capable of high information density is crucial. Yang imagines that this new tech may enable powerful AI capability in edge devices, such as Google Glasses, which he says previously suffered from a frequent recharging issue.

Further, by converting chips to rely on atoms as opposed to electrons, chips become smaller.  Yang adds that with this new method, there is more computing capacity at a smaller scale. And this method, he says, could offer “many more levels of memory to help increase information density.” 

To put it in context, right now ChatGPT runs in the cloud. The new innovation, followed by some further development, could put the power of a mini version of ChatGPT in everyone’s personal device. It could make such high-powered tech more affordable and accessible for all sorts of applications.

Here’s a link to and a citation for the paper,

Thousands of conductance levels in memristors integrated on CMOS by Mingyi Rao, Hao Tang, Jiangbin Wu, Wenhao Song, Max Zhang, Wenbo Yin, Ye Zhuo, Fatemeh Kiani, Benjamin Chen, Xiangqi Jiang, Hefei Liu, Hung-Yu Chen, Rivu Midya, Fan Ye, Hao Jiang, Zhongrui Wang, Mingche Wu, Miao Hu, Han Wang, Qiangfei Xia, Ning Ge, Ju Li & J. Joshua Yang. Nature volume 615, pages 823–829 (2023) DOI: https://doi.org/10.1038/s41586-023-05759-5 Issue Date: 30 March 2023 Published: 29 March 2023

This paper is behind a paywall.

Nanobiotics and a new machine learning model

A May 16, 2022 news item on phys.org announces work on a new machine learning model that could be useful in the research into engineered nanoparticles for medical purposes (Note: Links have been removed),

With antibiotic-resistant infections on the rise and a continually morphing pandemic virus, it’s easy to see why researchers want to be able to design engineered nanoparticles that can shut down these infections.

A new machine learning model that predicts interactions between nanoparticles and proteins, developed at the University of Michigan, brings us a step closer to that reality.

A May 16, 2022 University of Michigan news release by Kate McAlpine, which originated the news item, delves further into the work (Note: Links have been removed),

“We have reimagined nanoparticles to be more than mere drug delivery vehicles. We consider them to be active drugs in and of themselves,” said J. Scott VanEpps, an assistant professor of emergency medicine and an author of the study in Nature Computational Science.

Discovering drugs is a slow and unpredictable process, which is why so many antibiotics are variations on a previous drug. Drug developers would like to design medicines that can attack bacteria and viruses in ways that they choose, taking advantage of the “lock-and-key” mechanisms that dominate interactions between biological molecules. But it was unclear how to transition from the abstract idea of using nanoparticles to disrupt infections to practical implementation of the concept. 

“By applying mathematical methods to protein-protein interactions, we have streamlined the design of nanoparticles that mimic one of the proteins in these pairs,” said Nicholas Kotov, the Irving Langmuir Distinguished University Professor of Chemical Sciences and Engineering and corresponding author of the study. 

“Nanoparticles are more stable than biomolecules and can lead to entirely new classes of antibacterial and antiviral agents.”

The new machine learning algorithm compares nanoparticles to proteins using three different ways to describe them. While the first was a conventional chemical description, the two that concerned structure turned out to be most important for making predictions about whether a nanoparticle would be a lock-and-key match with a specific protein.

Between them, these two structural descriptions captured the protein’s complex surface and how it might reconfigure itself to enable lock-and-key fits. This includes pockets that a nanoparticle could fit into, along with the size such a nanoparticle would need to be. The descriptions also included chirality, a clockwise or counterclockwise twist that is important for predicting how a protein and nanoparticle will lock in.
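
The news release doesn’t spell out the model’s architecture, but the workflow it describes, three descriptor families feeding a classifier that predicts lock-and-key matches, can be sketched roughly as follows. Everything here is an illustrative assumption: the feature names and sizes, the random-forest choice, and the toy labels.

# Hypothetical sketch of descriptor-based nanoparticle-protein
# binding prediction; not the paper's actual model or data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
n_pairs = 500   # hypothetical nanoparticle-protein pairs

chemical = rng.random((n_pairs, 8))     # conventional chemical descriptors
structure = rng.random((n_pairs, 12))   # surface/pocket geometry descriptors
chirality = rng.random((n_pairs, 4))    # chirality (handedness) descriptors

X = np.hstack([chemical, structure, chirality])
# Toy labels that lean on the structural terms, echoing the finding
# that the two structural descriptions mattered most.
y = (structure.mean(axis=1) + chirality.mean(axis=1) > 0.95).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)
importances = model.feature_importances_
# A crude check on which descriptor family the model leans on.
print("chemical:", importances[:8].sum().round(2),
      "structural:", importances[8:].sum().round(2))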

“There are many proteins outside and inside bacteria that we can target. We can use this model as a first screening to discover which nanoparticles will bind with which proteins,” said Emine Sumeyra Turali Emre, a postdoctoral researcher in chemical engineering and co-first author of the paper, along with Minjeong Cha, a PhD student in materials science and engineering.

Emre and Cha explained that researchers could follow up on matches identified by their algorithm with more detailed simulations and experiments. One such match could stop the spread of MRSA, a common antibiotic-resistant strain, using zinc oxide nanopyramids that block metabolic enzymes in the bacteria.  

“Machine learning algorithms like ours will provide a design tool for nanoparticles that can be used in many biological processes. Inhibition of the virus that causes COVID-19 is one good example,” said Cha. “We can use this algorithm to efficiently design nanoparticles that have broad-spectrum antiviral activity against all variants.”

This breakthrough was enabled by the Blue Sky Initiative at the University of Michigan College of Engineering. It provided $1.5 million to support the interdisciplinary team carrying out the fundamental exploration of whether a machine learning approach could be effective when data on the biological activity of nanoparticles is so sparse.

“The core of the Blue Sky idea is exactly what this work covers: finding a way to represent proteins and nanoparticles in a unified approach to understand and design new classes of drugs that have multiple ways of working against bacteria,” said Angela Violi, an Arthur F. Thurnau Professor, a professor of mechanical engineering and leader of the nanobiotics Blue Sky project.

Emre led the building of a database of interactions between proteins that could help to predict nanoparticle and protein interaction. Cha then identified structural descriptors that would serve equally well for nanoparticles and proteins, working with collaborators at the University of Southern California, Los Angeles to develop a machine learning algorithm that combed through the database and used the patterns it found to predict how proteins and nanoparticles would interact with one another. Finally, the team compared these predictions for lock-and-key matches with the results from experiments and detailed simulations, finding that they closely matched.

Additional collaborators on the project include Ji-Young Kim, a postdoctoral researcher in chemical engineering at U-M, who helped calculate chirality in the proteins and nanoparticles. Paul Bogdan and Xiongye Xiao, a professor and PhD student, respectively, in electrical and computer engineering at USC [University of Southern California] contributed to the graph theory descriptors. Cha then worked with them to design and train the neural network, comparing different machine learning models. All authors helped analyze the data.

Here are links to and a citation for the research briefing and paper, respectively,

Universal descriptors to predict interactions of inorganic nanoparticles with proteins. Nature Computational Science (2022) [Research briefing] DOI: https://doi.org/10.1038/s43588-022-00230-3 Published: 28 April 2022

This paper is behind a paywall.

Unifying structural descriptors for biological and bioinspired nanoscale complexes by Minjeong Cha, Emine Sumeyra Turali Emre, Xiongye Xiao, Ji-Young Kim, Paul Bogdan, J. Scott VanEpps, Angela Violi & Nicholas A. Kotov. Nature Computational Science volume 2, pages 243–252 (2022) Issue Date: April 2022 DOI: https://doi.org/10.1038/s43588-022-00229-w Published: 28 April 2022

This paper appears to be open access.

Antikythera: a new Berggruen Institute program and a 2,000-year-old computer

Starting with the new Antikythera program at the Berggruen Institute before moving on to the Antikythera itself, one of my favourite scientific mysteries.

Antikythera program at the Berggruen Institute

An October 5, 2022 Berggruen Institute news release (also received via email) announces a program exploring the impact of planetary-scale computation and invites applications for the program’s first ‘studio’,

Antikythera is convening over 75 philosophers, technologists, designers, and scientists in seminars, design research studios, and global salons to create new models that shift computation toward more viable long-term futures: https://antikythera.xyz/

Applications are now open for researchers to join Antikythera’s fully-funded five month Studio in 2023, launching at the Berggruen Institute in Los Angeles: https://antikythera.xyz/apply/

Today [October 5, 2022] the Berggruen Institute announced that it will incubate Antikythera, an initiative focused on understanding and shaping the impact of computation on philosophy, global society, and planetary systems. Antikythera will engage a wide range of thinkers at the intersections of software, speculative thought, governance, and design to explore computation’s ultimate pitfalls and potentials. Research will range from the significance of machine intelligence and the geopolitics of AI to new economic models and the long-term project of composing a healthy planetary society.

“Against a background of rising geopolitical tensions and an accelerating climate crisis, technology has outpaced our theory. As such, we are less interested in applying philosophy to the topic of computation than generating new ideas from a direct encounter with it,” said Benjamin Bratton, Professor at the University of California, San Diego, and director of the new program. “The purpose of Antikythera is to reorient the question ‘what is computation for?’ and to model what it may become. That is a project that is not only technological but also philosophical, political, and ecological.”

Antikythera will begin this exploration with its Studio program, applications for which are now open at antikythera.xyz/apply/. The Studio program will take place over five months in spring 2023 and bring together researchers from across the world to work in multidisciplinary teams. These teams will work on speculative design proposals, and join 75+ Affiliate Researchers for workshops, talks, and design sprints that inform thinking and propositions around Antikythera’s core research topics. Affiliate Researchers will include philosophers, technologists, designers, scientists, and other thinkers and practitioners. Applications for the program are due November 11, 2022.

Program project outcomes will include new combinations of theory, cinema, software, and policy. The five initial research themes animating this work are:

Synthetic Intelligence: the longer-term implications of machine intelligence, particularly as seen through the lens of artificial language

Hemispherical Stacks: the multipolar geopolitics of planetary computation

Recursive Simulations: the emergence of simulation as an epistemological technology, from scientific simulation to VR/AR

Synthetic Catallaxy: the ongoing organization of computational economics, pricing, and planning

Planetary Sapience: the evolutionary emergence of natural/artificial intelligence, and its role in composing a viable planetary condition

The program is named after the Antikythera Mechanism, the world’s first known computer, used more than 2,000 years ago to predict the movements of constellations and eclipses decades in advance. As an origin point for computation, it combined calculation, orientation and cosmology, dimensions of practice whose synergies may be crucial in setting our planetary future on a better course than it is on today.

Bratton continues, “The evolution of planetary intelligence has also meant centuries of destruction; its future must be radically different. We must ask, what future would make this past worth it? Taking the question seriously demands a different sort of speculative and practical philosophy and a corresponding sort of computation.”

Bratton is a philosopher of technology and Professor at the University of California, San Diego, and author of many books including The Stack: On Software and Sovereignty (MIT Press). His most recent book is The Revenge of the Real: Politics for a Post-Pandemic World (Verso Books), exploring the implications for political philosophy of COVID-19. Associate directors are Ben Cerveny, technologist, speculative designer, and director of the Amsterdam-based Foundation for Public Code, and Stephanie Sherman, strategist, writer, and director of the MA Narrative Environments program at Central St. Martins, London. The Studio is directed by architect and creative director Nicolay Boyadjiev.

In addition to the Studio, program activities will include a series of invitation-only planning salons inviting philosophers, designers, technologists, strategists, and others to discuss how to best interpret and intervene in the future of planetary-scale computation, and the historic philosophical and geopolitical force that it represents. These salons began in London in October 2022 and will continue in locations across the world including in Berlin; Amsterdam; Los Angeles; San Francisco; New York; Mexico City; Seoul; and Venice.

The announcement of Antikythera at the Berggruen Institute follows the recent spinoff of the Transformations of the Human school, successfully incubated at the Institute from 2017-2021.

“Computational technology covering the planet represents one of the largest and most urgent philosophical opportunities of our time,” said Nicolas Berggruen, Chairman and Co-Founder of the Berggruen Institute. “It is with great pleasure that we invite Antikythera to join our work at the Institute. Together, we can develop new ways of thinking to support planetary flourishing in the years to come.”

Web: Antikythera.xyz
Social: Antikythera_xyz on Twitter, Instagram, and Linkedin.
Email: contact@antikythera.xyz

Applications opened on October 4, 2022; the deadline is November 11, 2022, followed by interviews. Participants will be confirmed by December 11, 2022. Here are a few more details from the application portal,

Who should apply to the Studio?

Antikythera hopes to bring together a diverse cohort of researchers from different backgrounds, disciplines, perspectives, and levels of experience. The Antikythera research themes engage with global challenges that necessitate harnessing a diversity of thought and expertise. Anyone who is passionate about the research themes of the Antikythera program is strongly encouraged to apply. We accept applications from every discipline and background, from established to emerging researchers. Applicants do not need to meet any specific set of educational or professional requirements.

Is the program free?

Yes, the program is free. You will be supported to cover the cost of housing, living expenses, and all program-related fieldwork travel along with a monthly stipend. Any other associated program costs will also be covered by the program.

Is the program in person and full-time?

Yes, the Studio program requires a full-time commitment (PhD students must also be on leave to participate). There is no part-time participation option. Though we understand this commitment may be challenging logistically for some individuals, we believe it is important for the Studio’s success. We will do our best to enable an environment that is comfortable and safe for participants from all backgrounds. Please do not hesitate to contact us if you may require any accommodations or have questions regarding the full-time, in-person nature of the program.

Do I need a Visa?

The Studio is a traveling program with time spent between the USA, Mexico, and South Korea. Applicable visa requirements set by these countries will apply and will vary depending on your nationality. We are aware that current visa appointment wait times may preclude some individuals who would require a brand new visa from being able to enter the US by January, and we are working to ensure access to the program for all (if not for January 2023, then for future Studio cohorts). We will therefore ask you to identify your country of origin and passport/visa status in the application form so we can work to enable your participation. Anyone who is passionate about the research themes of the Antikythera program is strongly encouraged to apply.

For those who like to put a face to a name, you can find out more about the program and the people behind it on this page.

Antikythera, a 2,000-year-old computer & 100-year-old mystery

As noted in the Berggruen Institute news release, the Antikythera Mechanism is considered the world’s first computer (as far as we know). The image below is one of the best known illustrations of the device as visualized by researchers,

Exploded model of the Cosmos gearing of the Antikythera Mechanism. ©2020 Tony Freeth.

Briefly, the Antikythera mechanism was discovered at the turn of the twentieth century in 1901 by sponge divers off the coast of Greece. Philip Chrysopoulos’s September 21, 2022 article for The Greek Reporter gives more details in an exuberant style (Note: Links have been removed),

… now—more than 120 years later—the astounding machine has been recreated once again, using 3-D imagery, by a brilliant group of researchers from University College London (UCL).

Not only is the recreation a thing of great beauty and amazing genius, but it has also made possible a new understanding of how it worked.

Since only eighty-two fragments of the original mechanism are extant—comprising only one-third of the entire calculator—this left researchers stymied as to its full capabilities.

Until this moment [in 2020 according to the copyright for the image], the front of the mechanism, containing most of the gears, has been a bit of a Holy Grail for marine archeologists and astronomers.

Professor Tony Freeth says in an article published in the periodical Scientific Reports: “Ours is the first model that conforms to all the physical evidence and matches the descriptions in the scientific inscriptions engraved on the mechanism itself.”

“The sun, moon and planets are displayed in an impressive tour de force of ancient Greek brilliance,” Freeth said.

The largest surviving piece of the mechanism, referred to by researchers as “Fragment A,” has bearings, pillars, and a block. Another piece, known as “Fragment D,” has a mysterious disk along with an extraordinarily intricate 63-toothed gear and a plate.

The inscriptions on the back cover of the mechanism—only recently discovered by researchers—include a description of the cosmos, with the planets, shown by beads of various colors, moving on rings set around the inscriptions.

By employing the information gleaned from recent x-rays of the computer and their knowledge of ancient Greek mathematics, the UCL researchers have now shown that they can demonstrate how the mechanism determined the cycles of the planets Venus and Saturn.

Evaggelos Vallianatos, author of many books on the Antikythera Mechanism, writing at Greek Reporter, said that it was much more than a mere mechanism. It was a sophisticated, mind-bogglingly complex astronomical computer, he said, “and Greeks made it.”

They employed advanced astronomy, mathematics, metallurgy, and engineering to do so, constructing the astronomical device 2,200 years ago. These scientific facts of the computer’s age and its flawless high-tech nature profoundly disturbed some of the scientists who studied it.

A few Western scientists of the twentieth century were shocked by the Antikythera Mechanism, Vallianatos said. They called it an astrolabe for several decades and refused to call it a computer. The astrolabe, a Greek invention, is a useful instrument for calculating the position of the Sun and other prominent stars. Yet, its technology is rudimentary compared to that of the Antikythera device.

In 2015, Kyriakos Efstathiou, a professor of mechanical engineering at the Aristotle University of Thessaloniki and head of the group which studied the Antikythera Mechanism said: “All of our research has shown that our ancestors used their deep knowledge of astronomy and technology to construct such mechanisms, and based only on this conclusion, the history of technology should be re-written because it sets its start many centuries back.”

The professor further explained that the Antikythera Mechanism is undoubtedly the first machine of antiquity which can be classified by the scientific term “computer,” because “it is a machine with an entry where we can import data, and this machine can bring and create results based on a scientific mathematical scale.”

In 2016, yet another astounding discovery was made when an inscription on the device was revealed—something like a label or a user’s manual for the device.

It included a discussion of the colors of eclipses, details used at the time in the making of astrological predictions, including the ability to see exact times of eclipses of the moon and the sun, as well as the correct movements of celestial bodies.

Inscribed numbers 76, 19 and 223 show maker “was a Pythagorean”

On one side of the device lies a handle that begins the movement of the whole system. By turning the handle and rotating the gauges in the front and rear of the mechanism, the user could set a date that would reveal the astronomical phenomena that would potentially occur around the Earth.

Physicist Yiannis Bitsakis has said that today the NASA [US National Aeronautics and Space Administration] website can detail all the eclipses of the past and those that are to occur in the future. However, “what we do with computers today, was done with the Antikythera Mechanism about 2000 years ago,” he said.

The stars and night heavens have been important to peoples around the world. (This September 18, 2020 posting highlights millennia old astronomy as practiced by indigenous peoples in North America, Australia, and elsewhere. There’s also this March 17, 2022 article “How did ancient civilizations make sense of the cosmos, and what did they get right?” by Susan Bell of University of Southern California on phys.org.)

I have covered the Antikythera in three previous postings (March 17, 2021, August 3, 2016, and October 2, 2012) with the 2021 posting being the most comprehensive and the one featuring Professor Tony Freeth’s latest breakthrough.

However, 2022 has blessed us with more as this April 11, 2022 article by Jennifer Ouellette for Ars Technica reveals (Note: Links have been removed),

The mysterious Antikythera mechanism—an ancient device believed to have been used for tracking the heavens—has fascinated scientists and the public alike since it was first recovered from a shipwreck over a century ago. Much progress has been made in recent years to reconstruct the surviving fragments and learn more about how the mechanism might have been used. And now, members of a team of Greek researchers believe they have pinpointed the start date for the Antikythera mechanism, according to a preprint posted to the physics arXiv repository. Knowing that “day zero” is critical to ensuring the accuracy of the device.

“Any measuring system, from a thermometer to the Antikythera mechanism, needs a calibration in order to [perform] its calculations correctly,” co-author Aristeidis Voulgaris of the Thessaloniki Directorate of Culture and Tourism in Greece told New Scientist. “Of course it wouldn’t have been perfect—it’s not a digital computer, it’s gears—but it would have been very good at predicting solar and lunar eclipses.”

Last year, an interdisciplinary team at University College London (UCL) led by mechanical engineer Tony Freeth made global headlines with their computational model, revealing a dazzling display of the ancient Greek cosmos. The team is currently building a replica mechanism, moving gears and all, using modern machinery. The display is described in the inscriptions on the mechanism’s back cover, featuring planets moving on concentric rings with marker beads as indicators. X-rays of the front cover accurately represent the cycles of Venus and Saturn—462 and 442 years, respectively. 
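
Those two numbers are whole-number period relations, which is what makes the cycles buildable with gears: 462 years contain almost exactly 289 synodic periods of Venus, and 442 years almost exactly 427 synodic periods of Saturn. A quick check with modern mean values (the ancient astronomers derived the ratios differently):

# Verify the period relations behind the 462- and 442-year planet
# cycles, using modern mean synodic periods in days.
YEAR = 365.25
for planet, synodic_days, years, count in [("Venus", 583.92, 462, 289),
                                           ("Saturn", 378.09, 442, 427)]:
    periods = years * YEAR / synodic_days
    print(f"{planet}: {years} years = {periods:.2f} synodic periods "
          f"(gearing assumes {count})")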

The Antikythera mechanism was likely built sometime between 200 BCE and 60 BCE. However, in February 2022, Freeth suggested that the famous Greek mathematician and inventor Archimedes (sometimes referred to as the Leonardo da Vinci of antiquity) may have actually designed the mechanism, even if he didn’t personally build it. (Archimedes died in 212 BCE at the hands of a Roman soldier during the siege of Syracuse.) There are references in the writings of Cicero (106-43 BCE) to a device built by Archimedes for tracking the movement of the Sun, Moon, and five planets; it was a prized possession of the Roman general Marcus Claudius Marcellus. According to Freeth, that description is remarkably similar to the Antikythera mechanism, suggesting it was not a one-of-a-kind device.

Voulgaris and his co-authors based their new analysis on a 223-month cycle called a Saros, represented by a spiral inset on the back of the device. The cycle covers the time it takes for the Sun, Moon, and Earth to return to their same positions and includes associated solar and lunar eclipses. Given our current knowledge about how the device likely functioned, as well as the inscriptions, the team believed the start date would coincide with an annular solar eclipse.
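
For context, the Saros arithmetic is simple. Assuming the modern mean synodic month (new moon to new moon) of about 29.53059 days, 223 months come to roughly 18 years and 11 days; the related 19-year Metonic cycle, one of the numbers inscribed on the device along with 76 and 223, is 235 months:

# Length of the eclipse-predicting Saros cycle and the related
# Metonic cycle, from the mean synodic month of ~29.53059 days.
SYNODIC_MONTH = 29.53059
saros_days = 223 * SYNODIC_MONTH
print(saros_days, saros_days / 365.25)     # ~6585.3 days, ~18.03 years

metonic_days = 235 * SYNODIC_MONTH
print(metonic_days / 365.25)               # ~19.0 years (the Metonic cycle)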

“This is a very specific and unique date [December 22, 178 BCE],” Voulgaris said. “In one day, there occurred too many astronomical events for it to be coincidence. This date was a new moon, the new moon was at apogee, there was a solar eclipse, the Sun entered into the constellation Capricorn, it was the winter solstice.”

Others have made independent calculations and arrived at a different conclusion: the calibration date would more likely fall sometime in the summer of 204 BCE, although Voulgaris countered that this doesn’t explain why the winter solstice is engraved so prominently on the device.

“The eclipse predictions on the [device’s back] contain enough astronomical information to demonstrate conclusively that the 18-year series of lunar and solar eclipse predictions started in 204 BCE,” Alexander Jones of New York University told New Scientist, adding that there have been four independent calculations of this. “The reason such a dating is possible is because the Saros period is not a highly accurate equation of lunar and solar periodicities, so every time you push forward by 223 lunar months… the quality of the prediction degrades.”

Read Ouellette’s April 11, 2022 article for a pretty accessible description of the work involved in establishing the date. Here’s a link to and a citation for the latest attempt to date the Antikythera,

The Initial Calibration Date of the Antikythera Mechanism after the Saros spiral mechanical Apokatastasis by Aristeidis Voulgaris, Christophoros Mouratidis, Andreas Vossinakis. arXiv:2203.15045 [physics] Submitted: 28 Mar 2022.

It’s open access. The calculations are beyond me; otherwise, it’s quite readable.

Getting back to the Berggruen Institute and its Antikythera program/studio, good luck to all the applicants (the Antikythera application portal).

Mad, bad, and dangerous to know? Artificial Intelligence at the Vancouver (Canada) Art Gallery (1 of 2): The Objects

To my imaginary AI friend

Dear friend,

I thought you might be amused by these Roomba-like* paintbots at the Vancouver Art Gallery’s (VAG) latest exhibition, “The Imitation Game: Visual Culture in the Age of Artificial Intelligence” (March 5, 2022 – October 23, 2022).

Sougwen Chung, Omnia per Omnia, 2018, video (excerpt), Courtesy of the Artist

*A Roomba is a robot vacuum cleaner produced and sold by iRobot.

As far as I know, this is the Vancouver Art Gallery’s first art/science or art/technology exhibit, and it is an alternately fascinating, exciting, and frustrating take on artificial intelligence and its impact on the visual arts. Curated by Bruce Grenville, VAG Senior Curator, and Glenn Entis, Guest Curator, the show features 20 ‘objects’ designed both to introduce viewers to the ‘imitation game’ and to challenge them. From the VAG Imitation Game webpage,

The Imitation Game surveys the extraordinary uses (and abuses) of artificial intelligence (AI) in the production of modern and contemporary visual culture around the world. The exhibition follows a chronological narrative that first examines the development of artificial intelligence, from the 1950s to the present [emphasis mine], through a precise historical lens. Building on this foundation, it emphasizes the explosive growth of AI across disciplines, including animation, architecture, art, fashion, graphic design, urban design and video games, over the past decade. Revolving around the important roles of machine learning and computer vision in AI research and experimentation, The Imitation Game reveals the complex nature of this new tool and demonstrates its importance for cultural production.

And now …

As you’ve probably guessed, my friend, you’ll find a combination of background information and commentary on the show.

I’ve initially focused on two people (a scientist and a mathematician) who were seminal thinkers about machines, intelligence, creativity, and humanity. I’ve also provided some information about the curators, which hopefully gives you some insight into the show.

As for the show itself, you’ll find a few of the ‘objects’ highlighted, with one of them investigated at more length. The curators devoted some of the show to ethical and social justice issues; accordingly, the Vancouver Art Gallery hosted the University of British Columbia’s “Speculative Futures: Artificial Intelligence Symposium” on April 7, 2022,

Presented in conjunction with the exhibition The Imitation Game: Visual Culture in the Age of Artificial Intelligence, the Speculative Futures Symposium examines artificial intelligence and the specific uses of technology in its multifarious dimensions. Across four different panel conversations, leading thinkers of today will explore the ethical implications of technology and discuss how they are working to address these issues in cultural production.”

So, you’ll find more on these topics here too.

And for anyone else reading this (not you, my friend who is ‘strong’ AI and not similar to the ‘weak’ AI found in this show), there is a description of ‘weak’ and ‘strong’ AI on the avtsim.com/weak-ai-strong-ai webpage, Note: A link has been removed,

There are two types of AI: weak AI and strong AI.

Weak, sometimes called narrow, AI is less intelligent as it cannot work without human interaction and focuses on a more narrow, specific, or niched purpose. …

Strong AI on the other hand is in fact comparable to the fictitious AIs we see in media like the terminator. The theoretical Strong AI would be equivalent or greater to human intelligence.

….

My dear friend, I hope you will enjoy.

The Imitation Game and ‘mad, bad, and dangerous to know’

In some circles, it’s better known as ‘The Turing Test’; the Vancouver Art Gallery’s ‘Imitation Game’ hosts a copy of Alan Turing’s foundational paper for establishing whether artificial intelligence is possible (I thought this was pretty exciting).

Here’s more from The Turing Test essay by Graham Oppy and David Dowe for the Stanford Encyclopedia of Philosophy,

The phrase “The Turing Test” is most properly used to refer to a proposal made by Turing (1950) as a way of dealing with the question whether machines can think. According to Turing, the question whether machines can think is itself “too meaningless” to deserve discussion (442). However, if we consider the more precise—and somehow related—question whether a digital computer can do well in a certain kind of game that Turing describes (“The Imitation Game”), then—at least in Turing’s eyes—we do have a question that admits of precise discussion. Moreover, as we shall see, Turing himself thought that it would not be too long before we did have digital computers that could “do well” in the Imitation Game.

The phrase “The Turing Test” is sometimes used more generally to refer to some kinds of behavioural tests for the presence of mind, or thought, or intelligence in putatively minded entities. …

Next to the display holding Turing’s paper is another display with an excerpt from Turing explaining how he believed Ada Lovelace would have responded to the idea that machines could think, based on a copy of some of her writing (also on display). She proposed that creativity, not thinking, is what sets people apart from machines. (See the April 17, 2020 article “Thinking Machines? Has the Lovelace Test Been Passed?” on mindmatters.ai.)

It’s like a dialogue between two seminal thinkers who lived about 100 years apart: Lovelace (1815–1852) and Turing (1912–1954). Both have fascinating back stories (more about those later) and both played roles in how computers and artificial intelligence are viewed.

Adding some interest to this walk down memory lane is a 3rd display, an illustration of the ‘Mechanical Turk’, a chess-playing machine that made the rounds in Europe from 1770 until it was destroyed in 1854. A hoax that fooled people for quite a while, it is a reminder that we’ve been interested in intelligent machines for centuries. (Friend, Turing and Lovelace and the Mechanical Turk are found in Pod 1.)

Back story: Turing and the apple

Turing is credited with being instrumental in breaking the German ENIGMA code during World War II and helping to end the war. I find it odd that he ended up at the University of Manchester in the post-war years. One would expect him to have been at Oxford or Cambridge. At any rate, he died in 1954 of cyanide poisoning two years after he was arrested for being homosexual and convicted of indecency. Given the choice of incarceration or chemical castration, he chose the latter. There is, to this day, debate about whether or not it was suicide. Here’s how his death is described in this Wikipedia entry (Note: Links have been removed),

On 8 June 1954, at his house at 43 Adlington Road, Wilmslow,[150] Turing’s housekeeper found him dead. He had died the previous day at the age of 41. Cyanide poisoning was established as the cause of death.[151] When his body was discovered, an apple lay half-eaten beside his bed, and although the apple was not tested for cyanide,[152] it was speculated that this was the means by which Turing had consumed a fatal dose. An inquest determined that he had committed suicide. Andrew Hodges and another biographer, David Leavitt, have both speculated that Turing was re-enacting a scene from the Walt Disney film Snow White and the Seven Dwarfs (1937), his favourite fairy tale. Both men noted that (in Leavitt’s words) he took “an especially keen pleasure in the scene where the Wicked Queen immerses her apple in the poisonous brew”.[153] Turing’s remains were cremated at Woking Crematorium on 12 June 1954,[154] and his ashes were scattered in the gardens of the crematorium, just as his father’s had been.[155]

Philosopher Jack Copeland has questioned various aspects of the coroner’s historical verdict. He suggested an alternative explanation for the cause of Turing’s death: the accidental inhalation of cyanide fumes from an apparatus used to electroplate gold onto spoons. The potassium cyanide was used to dissolve the gold. Turing had such an apparatus set up in his tiny spare room. Copeland noted that the autopsy findings were more consistent with inhalation than with ingestion of the poison. Turing also habitually ate an apple before going to bed, and it was not unusual for the apple to be discarded half-eaten.[156] Furthermore, Turing had reportedly borne his legal setbacks and hormone treatment (which had been discontinued a year previously) “with good humour” and had shown no sign of despondency prior to his death. He even set down a list of tasks that he intended to complete upon returning to his office after the holiday weekend.[156] Turing’s mother believed that the ingestion was accidental, resulting from her son’s careless storage of laboratory chemicals.[157] Biographer Andrew Hodges theorised that Turing arranged the delivery of the equipment to deliberately allow his mother plausible deniability with regard to any suicide claims.[158]

The US Central Intelligence Agency (CIA) also has an entry for Alan Turing dated April 10, 2015; it’s titled “The Enigma of Alan Turing.”

Back story: Ada Byron Lovelace, the 2nd generation of ‘mad, bad, and dangerous to know’

A mathematician and genius in her own right, Ada Lovelace’s father George Gordon Byron, better known as the poet Lord Byron, was notoriously described as ‘mad, bad, and dangerous to know’.

Lovelace too could have been ‘mad, bad, …’ but she is described less memorably as “… manipulative and aggressive, a drug addict, a gambler and an adulteress, …” as mentioned in my October 13, 2015 posting. It marked the 200th anniversary of her birth, which was celebrated with a British Broadcasting Corporation (BBC) documentary and an exhibit at the Science Museum in London, UK.

She belongs in the Vancouver Art Gallery’s show along with Alan Turing due to her prediction that computers could be made to create music. She also published the first computer program. Her feat is astonishing when you know only one working model (1/7th of the proposed final size) of a computer was ever produced. (The machine invented by Charles Babbage was known as a difference engine. You can find out more about the Difference engine on Wikipedia and about Babbage’s proposed second invention, the Analytical engine.)
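
As an aside on how the Difference Engine earned its name: it tabulated polynomial values using nothing but repeated addition of finite differences, exactly the kind of operation gears and carry levers can perform. Here is the principle in a few lines of modern code (my sketch of the method, obviously not Babbage’s mechanism):

# The Difference Engine's principle: tabulate a polynomial using only
# addition, by maintaining a column of finite differences.
def tabulate(diffs, steps):
    # diffs = [f(0), first difference, second difference, ...]
    values, d = [], list(diffs)
    for _ in range(steps):
        values.append(d[0])
        for i in range(len(d) - 1):   # each difference absorbs the next
            d[i] += d[i + 1]
    return values

# f(x) = x*x has f(0) = 0, first difference 1, constant second difference 2.
print(tabulate([0, 1, 2], 8))   # [0, 1, 4, 9, 16, 25, 36, 49]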

(Byron had almost nothing to do with his daughter although his reputation seems to have dogged her. You can find out more about Lord Byron here.)

AI and visual culture at the VAG: the curators

As mentioned earlier, the VAG’s “The Imitation Game: Visual Culture in the Age of Artificial Intelligence” show runs from March 5, 2022 – October 23, 2022. Twice now, I have been to this weirdly exciting and frustrating show.

Bruce Grenville, VAG Chief/Senior Curator, seems to specialize in pulling together diverse materials to illustrate ‘big’ topics. His profile for Emily Carr University of Art + Design (where Grenville teaches) mentions these shows,

… He has organized many thematic group exhibitions including, MashUp: The Birth of Modern Culture [emphasis mine], a massive survey documenting the emergence of a mode of creativity that materialized in the late 1800s and has grown to become the dominant model of cultural production in the 21st century; KRAZY! The Delirious World [emphasis mine] of Anime + Manga + Video Games + Art, a timely and important survey of modern and contemporary visual culture from around the world; Home and Away: Crossing Cultures on the Pacific Rim [emphasis mine] a look at the work of six artists from Vancouver, Beijing, Ho Chi Minh City, Seoul and Los Angeles, who share a history of emigration and diaspora. …

Glenn Entis, Guest Curator and founding faculty member of Vancouver’s Centre for Digital Media (CDM), is Grenville’s co-curator. From Entis’ CDM profile,

“… an Academy Award-winning animation pioneer and games industry veteran. The former CEO of Dreamworks Interactive, Glenn worked with Steven Spielberg and Jeffrey Katzenberg on a number of video games …,”

Steve Newton in his March 4, 2022 preview does a good job of describing the show although I strongly disagree with the title of his article which proclaims “The Vancouver Art Gallery takes a deep dive into artificial intelligence with The Imitation Game.” I think it’s more of a shallow dive meant to cover more distance than depth,

… The exhibition kicks off with an interactive introduction inviting visitors to actively identify diverse areas of cultural production influenced by AI.

“That was actually one of the pieces that we produced in collaboration with the Centre for Digital Media,” Grenville notes, “so we worked with some graduate-student teams that had actually helped us to design that software. It was the beginning of COVID when we started to design this, so we actually wanted a no-touch interactive. So, really, the idea was to say, ‘Okay, this is the very entrance to the exhibition, and artificial intelligence, this is something I’ve heard about, but I’m not really sure how it’s utilized in ways. But maybe I know something about architecture; maybe I know something about video games; maybe I know something about the history of film.

“So you point to these 10 categories of visual culture [emphasis mine]–video games, architecture, fashion design, graphic design, industrial design, urban design–so you point to one of those, and you might point to ‘film’, and then when you point at it that opens up into five different examples of what’s in the show, so it could be 2001: A Space Odyssey, or Bladerunner, or World on a Wire.”

After the exhibition’s introduction—which Grenville equates to “opening the door to your curiosity” about artificial intelligence–visitors encounter one of its main categories, Objects of Wonder, which speaks to the history of AI and the critical advances the technology has made over the years.

“So there are 20 Objects of Wonder [emphasis mine],” Grenville says, “which go from 1949 to 2022, and they kind of plot out the history of artificial intelligence over that period of time, focusing on a specific object. Like [mathematician and philosopher] Norbert Wiener made this cybernetic creature, he called it a ‘Moth’, in 1949. So there’s a section that looks at this idea of kind of using animals–well, machine animals–and thinking about cybernetics, this idea of communication as feedback, early thinking around neuroscience and how neuroscience starts to imagine this idea of a thinking machine.

And there’s this from Newton’s March 4, 2022 preview,

“It’s interesting,” Grenville ponders, “artificial intelligence is virtually unregulated. [emphasis mine] You know, if you think about the regulatory bodies that govern TV or radio or all the types of telecommunications, there’s no equivalent for artificial intelligence, which really doesn’t make any sense. And so what happens is, sometimes with the best intentions [emphasis mine]—sometimes not with the best intentions—choices are made about how artificial intelligence develops. So one of the big ones is facial-recognition software [emphasis mine], and any body-detection software that’s being utilized.

In addition to being the best overview of the show I’ve seen so far, it is the only one where you get a little insight into what the curators were thinking as they developed it.

A deep dive into AI?

It was only while searching for a little information before the show that I realized I don’t have any definitions for artificial intelligence! What is AI? Sadly, there are no definitions of AI in the exhibit.

It seems even experts don’t have a good definition. Take a look at this,

The definition of AI is fluid [emphasis mine] and reflects a constantly shifting landscape marked by technological advancements and growing areas of application. Indeed, it has frequently been observed that once AI becomes capable of solving a particular problem or accomplishing a certain task, it is often no longer considered to be “real” intelligence [emphasis mine] (Haenlein & Kaplan, 2019). A firm definition was not applied for this report [emphasis mine], given the variety of implementations described above. However, for the purposes of deliberation, the Panel chose to interpret AI as a collection of statistical and software techniques, as well as the associated data and the social context in which they evolve — this allows for a broader and more inclusive interpretation of AI technologies and forms of agency. The Panel uses the term AI interchangeably to describe various implementations of machine-assisted design and discovery, including those based on machine learning, deep learning, and reinforcement learning, except for specific examples where the choice of implementation is salient. [p. 6 print version; p. 34 PDF version]

The above is from the Leaps and Boundaries report released May 10, 2022 by the Council of Canadian Academies’ Expert Panel on Artificial Intelligence for Science and Engineering.

Sometimes a show will take you in an unexpected direction. I feel a lot better ‘not knowing’. Still, I wish the curators had acknowledged somewhere in the show that artificial intelligence is a slippery concept. Especially when you add in robots and automatons. (more about them later)

21st century technology in a 19th/20th century building

Void stairs inside the building. Completed in 1906, the building was later designated as a National Historic Site in 1980 [downloaded from https://en.wikipedia.org/wiki/Vancouver_Art_Gallery#cite_note-canen-7]

Just barely making it into the 20th century, the building where the Vancouver Art Gallery currently resides was for many years the provincial courthouse (1911 – 1978). In some ways, it’s a disconcerting setting for this show.

They’ve done their best to make the upstairs where the exhibit is displayed look like today’s galleries with their ‘white cube aesthetic’ and strong resemblance to the scientific laboratories seen in movies.

(For more about the dominance, since the 1930s, of the ‘white cube aesthetic’ in art galleries around the world, see my July 26, 2021 posting; scroll down about 50% of the way.)

It makes for an interesting tension, the contrast between the grand staircase, the cupola, and other architectural elements and the sterile, ‘laboratory’ environment of the modern art gallery.

20 Objects of Wonder and the flow of the show

It was flummoxing. Where are the 20 objects? Why does it feel like a maze in a laboratory? Loved the bees, but why? Eeeek Creepers! What is visual culture anyway? Where am I?

The objects of the show

It turns out that the curators have a more refined concept of ‘object’ than I do. There weren’t 20 material objects; there were 20 numbered ‘pods’, each with perhaps a screen, a couple of screens, or a screen and a material object or two illustrating the pod’s topic.

Looking up a definition for the word (accessed via a June 9, 2022 duckduckgo.com search) yielded this (the second sense seems à propos),

object (ŏb′jĭkt, -jĕkt″)

noun

1. Something perceptible by one or more of the senses, especially by vision or touch; a material thing.

2. A focus of attention, feeling, thought, or action.

3. A limiting factor that must be considered.

The American Heritage® Dictionary of the English Language, 5th Edition.

Each pod = a focus of attention.

The show’s flow is a maze. Am I a rat?

The pods are defined by a number and by temporary walls. So if you look up, you’ll see a number and a space partly enclosed by a temporary wall or two.

It’s a very choppy experience. For example, one minute you can be in pod 1 and, when you turn the corner, you’re in pod 4 or 5 or ? There are pods I’ve not seen, despite my two visits, because I kept losing my way. This led to an existential crisis on my second visit. “Had I missed the greater meaning of this show? Was there some sort of logic to how it was organized? Was there meaning to my life? Was I a rat being nudged around in a maze?” I didn’t know.

Thankfully, I have since recovered. But, I will return to my existential crisis later, with a special mention for “Creepers.”

The fascinating

My friend, you know I appreciated the history. In addition to Alan Turing, Ada Lovelace, and the Mechanical Turk at the beginning of the show, the curators included a reference to Ovid (or Pūblius Ovidius Nāsō), a Roman poet who lived from 43 BCE to 17/18 CE, in one of the double-digit pods (17? or 10? or …) featuring a robot on screen. As to why Ovid might be included, this excerpt from a February 12, 2018 posting on the cosmolocal.org website provides a clue (Note: Links have been removed),

The University of King’s College [Halifax, Nova Scotia] presents Automatons! From Ovid to AI, a nine-lecture series examining the history, issues and relationships between humans, robots, and artificial intelligence [emphasis mine]. The series runs from January 10 to April 4 [2018], and features leading scholars, performers and critics from Canada, the US and Britain.

“Drawing from theatre, literature, art, science and philosophy, our 2018 King’s College Lecture Series features leading international authorities exploring our intimate relationships with machines,” says Dr. Gordon McOuat, professor in the King’s History of Science and Technology (HOST) and Contemporary Studies Programs.

“From the myths of Ovid [emphasis mine] and the automatons [emphasis mine] of the early modern period to the rise of robots, cyborgs, AI and artificial living things in the modern world, the 2018 King’s College Lecture Series examines the historical, cultural, scientific and philosophical place of automatons in our lives—and our future,” adds McOuat.

I loved the way the curators managed to integrate the historical roots of artificial intelligence and, by extension, the world of automatons, robots, cyborgs, and androids. Yes, starting the show with Alan Turing and Ada Lovelace could be expected, but Norbert Wiener’s Moth (1949) acts as a sort of preview for Sougwen Chung’s “Omnia per Omnia, 2018” (GIF seen at the beginning of this post). Take a look for yourself (from the cyberneticzoo.com September 19, 2009 posting by cyberne1). Do you see the similarity or am I the only one?

[sourced from Google images, Source: Life & downloaded from https://cyberneticzoo.com/cyberneticanimals/1949-wieners-moth-wiener-wiesner-singleton/]

Sculpture

This is the first time I’ve come across an AI/sculpture project. The VAG show features Scott Eaton’s sculptures on screens in a room devoted to his work.

Scott Eaton: Entangled II, 2019 4k video (still) Courtesy of the Artist [downloaded from https://www.vanartgallery.bc.ca/exhibitions/the-imitation-game]

This looks like an image of a piece of ginger root, and it’s fascinating to watch the process as the AI agent ‘evolves’ Eaton’s drawings into onscreen sculptures. It would have enhanced the experience if at least one of Eaton’s ‘evolved’ and physically realized sculptures had been present in the room, but perhaps there were financial and/or logistical reasons for the absence.

Both Chung and Eaton are collaborating with an AI agent. In Chung’s case, the AI is integrated into the paintbots she interacts with and paints alongside; in Eaton’s case, the collaboration happens via a computer screen. In both cases, the work is mildly hypnotizing in a way that reminds me of lava lamps.

One last note about Chung and her work. She was one of the artists invited to present new work at an invite-only April 22, 2022 Embodied Futures workshop, part of the “What will life become?” event held by the Berggruen Institute and the University of Southern California (USC),

Embodied Futures invites participants to imagine novel forms of life, mind, and being through artistic and intellectual provocations on April 22 [2022].

Beginning at 1 p.m., together we will experience the launch of five artworks commissioned by the Berggruen Institute. We asked these artists: How does your work inflect how we think about “the human” in relation to alternative “embodiments” such as machines, AIs, plants, animals, the planet, and possible alien life forms in the cosmos? [emphases mine]  Later in the afternoon, we will take provocations generated by the morning’s panels and the art premieres in small breakout groups that will sketch futures worlds, and lively entities that might dwell there, in 2049.

This leads to (and my friend, while I too am taking a shallow dive, for this bit I’m going a little deeper):

Bees and architecture

Neri Oxman’s contribution (Golden Bee Cube, Synthetic Apiary II [2020]) is an exhibit of three honeycomb structures and a video of the bees in her synthetic apiary.

Neri Oxman and the MIT Mediated Matter Group, Golden Bee Cube, Synthetic Apiary II, 2020, beeswax, acrylic, gold particles, gold powder Courtesy of Neri Oxman and the MIT Mediated Matter Group

Neri Oxman (then a faculty member of the Mediated Matter Group at the Massachusetts Institute of Technology) described the basis for the first and all subsequent iterations of her synthetic apiary in Patrick Lynch’s October 5, 2016 article for ‘ArchDaily; Broadcasting Architecture Worldwide’ (Note: Links have been removed),

Designer and architect Neri Oxman and the Mediated Matter group have announced their latest design project: the Synthetic Apiary. Aimed at combating the massive bee colony losses that have occurred in recent years, the Synthetic Apiary explores the possibility of constructing controlled, indoor environments that would allow honeybee populations to thrive year-round.

“It is time that the inclusion of apiaries—natural or synthetic—for this “keystone species” be considered a basic requirement of any sustainability program,” says Oxman.

In developing the Synthetic Apiary, Mediated Matter studied the habits and needs of honeybees, determining the precise amounts of light, humidity and temperature required to simulate a perpetual spring environment. [emphasis mine] They then engineered an undisturbed space where bees are provided with synthetic pollen and sugared water and could be evaluated regularly for health.

In the initial experiment, the honeybees’ natural cycle proved to adapt to the new environment, as the Queen was able to successfully lay eggs in the apiary. The bees showed the ability to function normally in the environment, suggesting that natural cultivation in artificial spaces may be possible across scales, “from organism- to building-scale.”

“At the core of this project is the creation of an entirely synthetic environment enabling controlled, large-scale investigations of hives,” explain the designers.

Mediated Matter chose to research into honeybees not just because of their recent loss of habitat, but also because of their ability to work together to create their own architecture, [emphasis mine] a topic the group has explored in their ongoing research on biologically augmented digital fabrication, including employing silkworms to create objects and environments at product, architectural, and possibly urban, scales.

“The Synthetic Apiary bridges the organism- and building-scale by exploring a “keystone species”: bees. Many insect communities present collective behavior known as “swarming,” prioritizing group over individual survival, while constantly working to achieve common goals. Often, groups of these eusocial organisms leverage collaborative behavior for relatively large-scale construction. For example, ants create extremely complex networks by tunneling, wasps generate intricate paper nests with materials sourced from local areas, and bees deposit wax to build intricate hive structures.”

This January 19, 2022 article by Crown Honey for its eponymous blog updates Oxman’s work (Note 1: All emphases are mine; Note 2: A link has been removed),

Synthetic Apiary II investigates co-fabrication between humans and honey bees through the use of designed environments in which Apis mellifera colonies construct comb. These designed environments serve as a means by which to convey information to the colony. The comb that the bees construct within these environments comprises their response to the input information, enabling a form of communication through which we can begin to understand the hive’s collective actions from their perspective.

Some environments are embedded with chemical cues created through a novel pheromone 3D-printing process, while others generate magnetic fields of varying strength and direction. Others still contain geometries of varying complexity or designs that alter their form over time.

When offered wax augmented with synthetic biomarkers, bees appear to readily incorporate it into their construction process, likely due to the high energy cost of producing fresh wax. This suggests that comb construction is a responsive and dynamic process involving complex adaptations to perturbations from environmental stimuli, not merely a set of predefined behaviors building toward specific constructed forms. Each environment therefore acts as a signal that can be sent to the colony to initiate a process of co-fabrication.

Characterization of constructed comb morphology generally involves visual observation and physical measurements of structural features—methods which are limited in scale of analysis and blind to internal architecture. In contrast, the wax structures built by the colonies in Synthetic Apiary II are analyzed through high-throughput X-ray computed tomography (CT) scans that enable a more holistic digital reconstruction of the hive’s structure.

Geometric analysis of these forms provides information about the hive’s design process, preferences, and limitations when tied to the inputs, and thereby yields insights into the invisible mediations between bees and their environment.

Developing computational tools to learn from bees can facilitate the very beginnings of a dialogue with them. Refined by evolution over hundreds of thousands of years, their comb-building behaviors and social organizations may reveal new forms and methods of formation that can be applied across our human endeavors in architecture, design, engineering, and culture.

Further, with a basic understanding and language established, methods of co-fabrication together with bees may be developed, enabling the use of new biocompatible materials and the creation of more efficient structural geometries that modern technology alone cannot achieve.

In this way, we also move our built environment toward a more synergistic embodiment, able to be more seamlessly integrated into natural environments through material and form, even providing habitats of benefit to both humans and nonhumans. It is essential to our mutual survival for us to not only protect but moreover to empower these critical pollinators – whose intrinsic behaviors and ecosystems we have altered through our industrial processes and practices of human-centric design – to thrive without human intervention once again.

In order to design our way out of the environmental crisis that we ourselves created, we must first learn to speak nature’s language. …

The three (natural, gold nanoparticle, and silver nanoparticle) honeycombs in the exhibit are among the few physical objects in the show (the others being the historical documents and the paintbots with their canvases), and they’re almost a relief after the parade of screens. It’s the accompanying video that’s eerie. Everything is in white, as befits a science laboratory, in this synthetic apiary where bees are fed sugar water and fooled into a spring that is eternal.

Courtesy: Massachusetts Institute of Technology Copyright: Mediated Matter [downloaded from https://www.media.mit.edu/projects/synthetic-apiary/overview/]

(You may want to check out Lynch’s October 5, 2016 article or Crown Honey’s January 19, 2022 article as both have embedded images and the Lynch article includes a Synthetic Apiary video. The image above is a still from the video.)

As I asked a friend, where are the flowers? Ron Miksha, a bee ecologist working at the University of Calgary, details some of the problems with Oxman’s Synthetic Apiary this way in his October 7, 2016 posting on his Bad Beekeeping Blog,

In a practical sense, the synthetic apiary fails on many fronts: Bees will survive a few months on concoctions of sugar syrup and substitute pollen, but they need a natural variety of amino acids and minerals to actually thrive. They need propolis and floral pollen. They need a ceiling 100 metres high and a 2-kilometre hallway if drone and queen will mate, or they’ll die after the old queen dies. They need an artificial sun that travels across the sky, otherwise, the bees will be attracted to artificial lights and won’t return to their hive. They need flowery meadows, fresh water, open skies. [emphasis mine] They need a better holodeck.

Dorothy Woodend’s March 10, 2022 review of the VAG show for The Tyee raises other issues with the bees and the honeycombs,

When AI messes about with other species, there is something even more unsettling about the process. American-Israeli artist Neri Oxman’s Golden Bee Cube, Synthetic Apiary II, 2020 uses real bees who are proffered silver and gold [nanoparticles] to create their comb structures. While the resulting hives are indeed beautiful, rendered in shades of burnished metal, there is a quality of unease imbued in them. Is the piece akin to apiary torture chambers? I wonder how the bees feel about this collaboration and whether they’d like to renegotiate the deal.

There’s no question the honeycombs are fascinating and disturbing, but I don’t understand how artificial intelligence was a key factor in either version of Oxman’s synthetic apiary. In the 2022 article by Crown Honey, there’s this: “Developing computational tools to learn from bees can facilitate the very beginnings of a dialogue with them [honeybees].” It’s probable that the computational tools being referenced include AI, and the Crown Honey article seems to suggest those tools are being used to analyze the bees’ behaviour after the fact.

Yes, I can imagine a future where ‘strong’ AI (such as you, my friend) is in ‘dialogue’ with the bees and making suggestions and running the experiments but it’s not clear that this is the case currently. The Oxman exhibit contribution would seem to be about the future and its possibilities whereas many of the other ‘objects’ concern the past and/or the present.

Friend, let’s take a break, shall we? Part 2 is coming up.

Art in the Age of Planetary Consciousness; an April 22, 2022 talk in Venice (Italy) and online (+ an April 21/22, 2022 art/sci event)

The Biennale Arte (also known as the Venice Biennale) 2022: The Milk of Dreams runs from April 23 to November 27, 2022, with pre-openings on April 20, 21, and 22.

As part of the Biennale’s pre-opening, ArtReview (an international contemporary art magazine) and the Berggruen Institute (a think tank headquartered in Los Angeles, California) are presenting a talk on April 22, 2022. From the Talk on Art in the Age of Planetary Consciousness page on the artreview.com website (Note: I cannot find an online portal so I’m guessing this is in person only),

Join the artists and ArtReview’s Mark Rappolt for this panel discussion – the first in a new series of talks in collaboration with Berggruen Arts – on 22 April 2022 at Casa dei Tre Oci, Venice

We live in an age in which we increasingly recognise and acknowledge that the human-made world and non-human worlds overlap and interact. In which actions cause reactions in a system that is increasingly planetary in scale while being susceptible to change by the actions of individual and collective agents. How does this change the way in which we think about art? And the ways in which we think about making art? Does it exist apart or as a part of this new consciousness and world view? Does art reflect such systems or participate within them? Or both?

This discussion between artists Shubigi Rao and Wu Tsang, who will both be showing new works at the 59th Venice Biennale, is the first in a new programme of events in which ArtReview is partnering with the Berggruen Institute to explore the intersections of philosophy, science and culture [emphasis mine] – as well as celebrating Casa dei Tre Oci in Venice as a gathering place for artists, curators, artlovers and thinkers. The conversation is chaired by ArtReview editor-in-chief Mark Rappolt.

Venue: Casa dei Tre Oci, Venice

Date: 22 April [2022]

Time: Entry from 4.30pm, talk to commence 5pm [Central European Summer Time, for PT subtract 9 hours]

Moderator: Mark Rappolt, Editor-in-Chief ArtReview & ArtReview Asia

Speakers: Shubigi Rao, Wu Tsang

RSVP: rsvp@artreview.com

About the artists:

Artist and writer Shubigi Rao’s interests include libraries, archival systems, histories and lies, literature and violence, ecologies, and natural history. Her art, texts, films, and photographs look at current and historical flashpoints as perspectival shifts to examining contemporary crises of displacement, whether of people, languages, cultures, or knowledge bodies. Her current decade-long project, Pulp: A Short Biography of the Banished Book is about the history of book destruction and the future of knowledge. In 2020, the second book from the project won the Singapore Literature Prize (non-fiction), while the first volume was shortlisted in 2018. Both books have won numerous awards, including AIGA (New York)’s 50 best books of 2016, and D&AD Pencil for design. The first exhibition of the project, Written in the Margins, won the APB Signature Prize 2018 Juror’s Choice Award. She is currently the Curator for the upcoming Kochi-Muziris Biennale. She will be representing Singapore at the 59th Venice Biennale.

Wu Tsang is an award-winning filmmaker and visual artist. Tsang’s work crosses genres and disciplines, from narrative and documentary films to live performance and video installations. Tsang is a MacArthur ‘Genius’ Fellow, and her projects have been presented at museums, biennials, and film festivals internationally. Awards include 2016 Guggenheim Fellow (Film/Video), 2018 Hugo Boss Prize Nominee, Creative Capital, Rockefeller Foundation, Louis Comfort Tiffany Foundation, and Warhol Foundation. Tsang received her BFA (2004) from the Art Institute of Chicago (SAIC) and an MFA (2010) from University of California Los Angeles (UCLA). Currently Tsang works in residence at Schauspielhaus Zurich, as a director of theatre with the collective Moved by the Motion. Her work is included in the 59th Venice Biennale’s central exhibition The Milk of Dreams, curated by Cecilia Alemani. On 20 April, TBA21–Academy in collaboration with The Hartwig Art Foundation presents the Italian premiere of Moby Dick; or, The Whale, the Wu Tsang-directed feature-length silent film with a live symphony orchestra, at Venice’s Teatro Goldoni.

I’m not sure how this talk will “explore the intersections of philosophy, science and culture.” I can make a case for philosophy and culture but not science. At any rate, it serves as an introduction to the Berggruen Institute’s new activities in Europe, from the Talk on Art in the Age of Planetary Consciousness page on the artreview.com website,

The Berggruen Institute – headquartered in Los Angeles – was established in 2010 to develop foundational ideas about how to reshape political and social institutions in a time of great global change. It recently acquired Casa dei Tre Oci in Venice as a new base for its European activities. The neo-gothic building, originally designed as a home and studio by the artist Mario de Maria, will serve as a space for global dialogue and new ideas, via a range of workshops, symposia and exhibitions in the visual arts and architecture.

In a further expansion of activity, the initiative Berggruen Arts & Culture has been launched with the acquisition of the historic Palazzo Diedo in Venice’s Cannaregio district. The site will host exhibitions as well as a residency programme (with Sterling Ruby named as the inaugural artist-in-residence). Curator Mario Codognato has been appointed artistic director of the initiative; the architect Silvio Fassi will oversee the palazzo’s renovation, which is scheduled to open in 2024.

Having been most interested in the Berggruen Institute (founded by Nicolas Berggruen) and its events, I’ve missed the arts and culture aspect of the Berggruen enterprise. Mark Westall’s March 15, 2022 article for FAD magazine gives some insight into Berggruen’s Venice arts and culture adventure,

In the most recent of his initiatives to encourage the work of today’s artists, deepen the connection between contemporary art and the past, and make art more widely accessible to the public, philanthropist Nicolas Berggruen today [March 15, 2022] announced the creation of Berggruen Arts & Culture and the acquisition of the historic Palazzo Diedo by the Nicolas Berggruen Charitable Trust in Venice’s Cannaregio district, which is being restored and renovated to serve as a base for this multi-faceted, international program and its activities in Venice and around the world.

At Palazzo Diedo, Berggruen Arts & Culture will host an array of exhibitions—some drawn from Nicolas Berggruen’s personal collection—as well as installations, symposia, and an artist-in-residence program that will foster the creation of art in Venice. To bring the palazzo to life during the renovation phase and make its new role visible to the public, Berggruen Arts & Culture has named Sterling Ruby as its inaugural artist-in-residence. Ruby will create A Project in Four Acts, a multi-year installation at Palazzo Diedo, with the first element debuting on April 20, 2022, and on view through the duration of the 59th Biennale Arte.

Internationally renowned contemporary art curator Mario Codognato, who has served as chief curator of MADRE in Naples and director of the Anish Kapoor Foundation in Venice [I have more on Anish Kapoor later], has been named the artistic director of Berggruen Arts & Culture. Venetian architect Silvio Fassi is overseeing the renovation of the palazzo, which will open officially in 2024, concurrent with the Biennale di Venezia.

Nicolas Berggruen’s initiatives in the visual arts and culture have spanned the traditional and the experimental. As a representative of a family that is legendary in the field of 20th-century European art, he has been instrumental in expanding the programming and curatorial autonomy of the Museum Berggruen, which has been a component of the Nationalgalerie in Berlin since 2000. As founder of the Berggruen Institute, he has spearheaded the expansion of the Institute with a presence in Los Angeles, Beijing, and Venice. He has supported Institute-led projects pairing leading contemporary artists including Anicka Yi, Ian Cheng, Rob Reynolds, Agnieszka Kurant, Pierre Huyghe, and Nancy Baker Cahill with researchers in artificial intelligence and biology, to create works exploring our changing ideas of what it means to be human.

Palazzo Diedo is the second historic building that the Nicolas Berggruen Charitable Trust has acquired in Venice, following the purchase of Casa dei Tre Oci on the Giudecca as the principal European base for the Berggruen Institute. In April and June 2022, Berggruen Arts & Culture will present a series of artist conversations in partnership with ArtReview at Casa dei Tre Oci. Berggruen Arts & Culture will also undertake activities such as exhibitions, discussions, lectures, and residencies at sites beyond Palazzo Diedo and Casa dei Tre Oci, such as Museum Berggruen in Berlin and the Berggruen Institute in Los Angeles.

For those of us not lucky enough to be in Venice for the opening of the 59th Biennale Arte, there’s this amusing story about Anish Kapoor and an artistic feud over the blackest black (a coating material made of carbon nanotubes) in my February 21, 2019 posting.

Art/sci and the Berggruen Institute

While the April 22, 2022 talk doesn’t directly address science issues vis-à-vis arts and culture, this upcoming Berggruen Institute/University of Southern California (USC) event does,

What Will Life Become?

Thursday, April 21 [2022] @ USC // Friday, April 22 [2022] @ Berggruen Institute // #WWLB

About

Biotechnologies that push the limits of life, artificial intelligences that can be trained to learn, and endeavors that envision life beyond Earth are among recent and anticipated technoscientific futures. Such projects unsettle theories and material realities of body, mind, species, and the planet. They prompt us to ask: How will we conjure positive human futures and future humans?

On Thursday, April 21 [2022] and Friday, April 22 [2022], the Berggruen Institute and the USC Dornsife Center on Science, Technology, and Public, together with philosophers, scientists, and artists, collaboratively and critically inquire:

What Will Life Become?

KEYNOTE CONVERSATION
“Speculative Worldbuilding”

PUBLIC FORUM
“What Will Life Become?”

PANELS
“Futures of Life”
“Futures of Mind”
“Futures in Outer Space”

WORKSHOP
“Embodied Futures”

VISION

The search for extraterrestrial biosignatures, human/machine cyborgian mashups, and dreams to facilitate reproduction beyond Earth are future-facing technologies. They complicate the purported thresholds, conditions, and boundaries of “the human,” “life,” and “the mind” — as if such categories have ever been stable. 

In concert with the Berggruen Institute’s newly launched Future Humans Program, What Will Life Become? invites philosophers, scientists, and artists to design and co-shape the human and more-than-human futures of life, the mind, and the planet.

Day 1 at USC Michelson Center for Convergent Bioscience 101 features a Keynote with director and speculative architect Liam Young who will discuss world-building through narrative and film with Nils Gilman; a Public Forum with leading scholars K Allado-McDowell, Neda Atanasoski, Lisa Ruth Rand, Tiffany Vora, moderated by Claire Isabel Webb, who will consider the question, “what will life become?” Reception to follow.

Day 2 at the Berggruen Institute features a three-part Salon: “Futures of Life,” “Futures of Mind,” and “Futures in Outer Space.” Conceptual artists Sougwen Chung*, Nancy Baker Cahill, REEPS100, Brian Cantrell, and ARSWAIN will unveil world premieres. “Embodied Futures” invites participants to imagine novel forms of life, mind, and being through artistic and intellectual provocations.

I have some details about how you can attend the programme in person or online,

DAY 1: USC

To participate in the Keynote Conversation and Public Forum on April 21, join us in person at USC Michelson Hall 101 or over YouTube beginning at 1:00 p.m. [PT]. We’ll also send you the findings of the Workshop. Please register here.

DAY 2: BERGGRUEN INSTITUTE

This invite-only [emphasis mine] workshop at the Berggruen Institute Headquarters features a day of creating Embodied Futures. A three-panel salon, followed by the world premieres of art commissioned by the Institute, will provide provocations for the Possible Worlds exercises. Participants will imagine and design Future Relics and write letters to 2049. WWLB [What Will Life Become?] findings will be available online following the workshop.

*I will have more about Sougwen Chung and her work when I post my commentary on the exhibition running from March 5 – October 23, 2022 at the Vancouver Art Gallery, “The Imitation Game: Visual Culture in the Age of Artificial Intelligence.”

Two approaches to memristors

Within one day of each other in October 2018, two different teams working on memristors with applications to neuroprosthetics and neuromorphic computing (brainlike computing) announced their results.

Russian team

An October 15, 2018 (?) Lobachevsky University press release (also published on October 15, 2018 on EurekAlert) describes a new approach to memristors,

Biological neurons are coupled unidirectionally through a special junction called a synapse. An electrical signal is transmitted along a neuron after some biochemical reactions initiate a chemical release to activate an adjacent neuron. These junctions are crucial for cognitive functions, such as perception, learning and memory.

A group of researchers from Lobachevsky University in Nizhny Novgorod investigates the dynamics of an individual memristive device when it receives a neuron-like signal as well as the dynamics of a network of analog electronic neurons connected by means of a memristive device. According to Svetlana Gerasimova, junior researcher at the Physics and Technology Research Institute and at the Neurotechnology Department of Lobachevsky University, this system simulates the interaction between synaptically coupled brain neurons while the memristive device imitates a neuron axon.

A memristive device is a physical model of Chua’s [Dr. Leon Chua, University of California at Berkeley; see my May 9, 2008 posting for a brief description of Dr. Chua’s theory] memristor, which is an electric circuit element capable of changing its resistance depending on the electric signal received at the input. The device based on a Au/ZrO2(Y)/TiN/Ti structure demonstrates reproducible bipolar switching between the low and high resistance states. Resistive switching is determined by the oxidation and reduction of segments of conducting channels (filaments) in the oxide film when voltage with different polarity is applied to it. In the context of the present work, the ability of a memristive device to change conductivity under the action of pulsed signals makes it an almost ideal electronic analog of a synapse.

Lobachevsky University scientists and engineers supported by the Russian Science Foundation (project No.16-19-00144) have experimentally implemented and theoretically described the synaptic connection of neuron-like generators using the memristive interface and investigated the characteristics of this connection.

“Each neuron is implemented in the form of a pulse signal generator based on the FitzHugh-Nagumo model. This model provides a qualitative description of the main neurons’ characteristics: the presence of the excitation threshold, the presence of excitable and self-oscillatory regimes with the possibility of a changeover. At the initial time moment, the master generator is in the self-oscillatory mode, the slave generator is in the excitable mode, and the memristive device is used as a synapse. The signal from the master generator is conveyed to the input of the memristive device, the signal from the output of the memristive device is transmitted to the input of the slave generator via the loading resistance. When the memristive device switches from a high resistance to a low resistance state, the connection between the two neuron-like generators is established. The master generator goes into the oscillatory mode and the signals of the generators are synchronized. Different signal modulation mode synchronizations were demonstrated for the Au/ZrO2(Y)/TiN/Ti memristive device,” – says Svetlana Gerasimova.

UNN researchers believe that the next important stage in the development of neuromorphic systems based on memristive devices is to apply such systems in neuroprosthetics. Memristive systems will provide a highly efficient imitation of synaptic connection due to the stochastic nature of the memristive phenomenon and can be used to increase the flexibility of the connections for neuroprosthetic purposes. Lobachevsky University scientists have vast experience in the development of neurohybrid systems. In particular, a series of experiments was performed with the aim of connecting the FitzHugh-Nagumo oscillator with a biological object, a rat brain hippocampal slice. The signal from the electronic neuron generator was transmitted through the optic fiber communication channel to the bipolar electrode which stimulated Schaffer collaterals (axons of pyramidal neurons in the CA3 field) in the hippocampal slices. “We are going to combine our efforts in the design of artificial neuromorphic systems and our experience of working with living cells to improve flexibility of prosthetics,” concludes S. Gerasimova.

The results of this research were presented at the 38th International Conference on Nonlinear Dynamics (Dynamics Days Europe) at Loughborough University (Great Britain).

This diagram illustrates an aspect of the work,

Caption: Schematic of electronic neurons coupling via a memristive device. Credit: Lobachevsky University
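
For readers who want to poke at the dynamics, here is a minimal sketch in Python of the kind of system described above: two FitzHugh-Nagumo pulse generators, a self-oscillating master and an excitable slave, coupled through a toy ‘memristive’ synapse whose conductance drifts upward with presynaptic activity. To be clear, this is not the Lobachevsky group’s actual model; the memristor rule and every parameter value below are invented for illustration.

```python
# Toy sketch (not the Lobachevsky group's model): two FitzHugh-Nagumo
# oscillators coupled through a crude memristive synapse. All values invented.
import numpy as np

def simulate(T=2000.0, dt=0.05):
    n = int(T / dt)
    v1, w1 = -1.0, -0.5      # master generator (biased into self-oscillation)
    v2, w2 = -1.2, -0.6      # slave generator (excitable, silent on its own)
    g, g_max = 0.0, 0.4      # memristive conductance and its saturation value
    eps, a, b = 0.08, 0.7, 0.8   # standard FitzHugh-Nagumo parameters
    I_bias = 0.5             # constant current that keeps the master oscillating
    trace = np.empty((n, 3))
    for i in range(n):
        # toy memristor: conductance creeps up with presynaptic activity,
        # relaxes slowly otherwise, and saturates at g_max
        g += dt * (0.02 * max(v1, 0.0) - 0.005 * g)
        g = min(max(g, 0.0), g_max)
        I_syn = g * (v1 - v2)          # current delivered to the slave
        dv1 = v1 - v1**3 / 3 - w1 + I_bias
        dw1 = eps * (v1 + a - b * w1)
        dv2 = v2 - v2**3 / 3 - w2 + I_syn
        dw2 = eps * (v2 + a - b * w2)
        v1, w1 = v1 + dt * dv1, w1 + dt * dw1
        v2, w2 = v2 + dt * dv2, w2 + dt * dw2
        trace[i] = (v1, v2, g)
    return trace

trace = simulate()
print(f"final conductance: {trace[-1, 2]:.3f}")
print("slave generator fired:", bool(trace[:, 1].max() > 1.0))
```

Run as written, the conductance saturates and the slave generator begins firing in step with the master, which is the qualitative synchronization behaviour the press release describes.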

US team

The American Institute of Physics (AIP) announced the publication of a ‘memristor paper’ by a team from the University of Southern California (USC) in an October 16, 2018 news item on phys.org,

Just like their biological counterparts, hardware that mimics the neural circuitry of the brain requires building blocks that can adjust how they synapse, with some connections strengthening at the expense of others. One such approach, called memristors, uses current resistance to store this information. New work looks to overcome reliability issues in these devices by scaling memristors to the atomic level.

An October 16, 2018 AIP news release (also on EurekAlert), which originated the news item, delves further into the particulars of this particular piece of memristor research,

A group of researchers demonstrated a new type of compound synapse that can achieve synaptic weight programming and conduct vector-matrix multiplication with significant advances over the current state of the art. Publishing its work in the Journal of Applied Physics, from AIP Publishing, the group’s compound synapse is constructed with atomically thin boron nitride memristors running in parallel to ensure efficiency and accuracy.

The article appears in a special topic section of the journal devoted to “New Physics and Materials for Neuromorphic Computation,” which highlights new developments in physical and materials science research that hold promise for developing the very large-scale, integrated “neuromorphic” systems of tomorrow that will carry computation beyond the limitations of current semiconductors today.

“There’s a lot of interest in using new types of materials for memristors,” said Ivan Sanchez Esqueda, an author on the paper. “What we’re showing is that filamentary devices can work well for neuromorphic computing applications, when constructed in new clever ways.”

Current memristor technology suffers from a wide variation in how signals are stored and read across devices, both for different types of memristors as well as different runs of the same memristor. To overcome this, the researchers ran several memristors in parallel. The combined output can achieve accuracies up to five times those of conventional devices, an advantage that compounds as devices become more complex.

The choice to go to the subnanometer level, Sanchez said, was born out of an interest to keep all of these parallel memristors energy-efficient. An array of the group’s memristors were found to be 10,000 times more energy-efficient than memristors currently available.

“It turns out if you start to increase the number of devices in parallel, you can see large benefits in accuracy while still conserving power,” Sanchez said. Sanchez said the team next looks to further showcase the potential of the compound synapses by demonstrating their use completing increasingly complex tasks, such as image and pattern recognition.

Here’s an image illustrating the parallel artificial synapses,

Caption: Hardware that mimics the neural circuitry of the brain requires building blocks that can adjust how they synapse. One such approach, called memristors, uses current resistance to store this information. New work looks to overcome reliability issues in these devices by scaling memristors to the atomic level. Researchers demonstrated a new type of compound synapse that can achieve synaptic weight programming and conduct vector-matrix multiplication with significant advances over the current state of the art. They discuss their work in this week’s Journal of Applied Physics. This image shows a conceptual schematic of the 3D implementation of compound synapses constructed with boron nitride oxide (BNOx) binary memristors, and the crossbar array with compound BNOx synapses for neuromorphic computing applications. Credit: Ivan Sanchez Esqueda
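
As an aside, the arithmetic behind the ‘compound synapse’ idea is easy to demonstrate. Here is a minimal sketch in Python of a crossbar performing vector-matrix multiplication (input voltages on the rows, weights stored as conductances, column currents summing per Kirchhoff’s current law), with each weight split across several parallel devices so that device-to-device programming errors average out. The noise model and all numbers are my assumptions, not data from the USC team’s paper.

```python
# Toy model of a memristor crossbar with compound (parallel-device) synapses.
# The noise model and all values are assumed for illustration only.
import numpy as np

rng = np.random.default_rng(0)
W = rng.uniform(0.1, 1.0, size=(4, 3))   # target weights, stored as conductances
V = rng.uniform(0.0, 0.5, size=4)        # input voltages applied to the rows

def crossbar_output(k, sigma=0.25):
    # Each synapse is k memristors in parallel, each programmed to W/k with
    # a relative programming error of spread sigma (an assumed noise model).
    devices = (W / k)[None, :, :] * (1 + sigma * rng.standard_normal((k,) + W.shape))
    G = devices.sum(axis=0)              # conductances in parallel add up
    return V @ G                         # column currents: I = V . G per column

ideal = V @ W
for k in (1, 4, 16):
    err = np.abs(crossbar_output(k) - ideal).mean()
    print(f"{k:2d} device(s) per synapse: mean output-current error = {err:.4f}")
```

With the assumed 25% per-device spread, the output error shrinks roughly as one over the square root of the number of parallel devices per synapse, which is the intuition behind the accuracy gains described above.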

Here’s a link to and a citation for the paper,

Efficient learning and crossbar operations with atomically-thin 2-D material compound synapses by Ivan Sanchez Esqueda, Huan Zhao and Han Wang. The article will appear in the Journal of Applied Physics Oct. 16, 2018 (DOI: 10.1063/1.5042468).

This paper is behind a paywall.

*Title corrected from ‘Two approaches to memristors featuring’ to ‘Two approaches to memristors’ on May 31, 2019 at 1455 hours PDT.

Book commentaries: The Science of Orphan Black: The Official Companion and Star Trek Treknology; The Science of Star Trek from Tricorders to Warp Drive

I got more than I expected from both books I’m going to discuss here (“The Science of Orphan Black: The Official Companion” by Casey Griffin and Nina Nesseth and “Star Trek Treknology; The Science of Star Trek from Tricorders to Warp Drive” by Ethan Siegel), once I changed my expectations.

The Science of Orphan Black: The Official Companion

I had expected a book about the making of the series with a few insider stories about the production along with some science. Instead, I was treated to a season-by-season breakdown of the major scientific and related ethical issues in the fields of cloning and genetics. I don’t follow those areas exhaustively but, from my inexpert perspective, the authors covered everything I could have hoped for (e.g., CRISPR/CAS9, Henrietta Lacks, etc.) in an accessible but demanding writing style. In other words, it’s a good read but it’s not a light read.

There are many, many pictures of Tatiana Maslany as one of her various clone identities in the book. Unfortunately, the images do not boast good reproduction values. This was disconcerting as it can lead a reader (yes, that was me) to false expectations (e.g., this is a picture book) concerning the contents. The boxed snippets from the scripts and explanatory notes inset into the text helped to break up some of the more heavy going material while providing additional historical/scripting/etc. perspectives. One small niggle, the script snippets weren’t always as relevant to the discussion at hand as the authors no doubt hoped.

I suggest reading both the Foreword by Cosima Herter, the series science consultant, and (although it could have done with a little editing) The Conversation between Cosima Herter and Graeme Manson (one of the producers). That’s where you’ll find that the series seems to have been incubated in Vancouver, Canada. It’s also where you’ll find out how much of Cosima Herter’s real life story is included in the Cosima clone’s life story.

The Introduction tells you how the authors met (as members of ‘the clone club’) and started working together as recappers for the series. (For anyone unfamiliar with the phenomenon or terminology, episodes of popular series are recapitulated [recapped] on one or more popular websites. These may or may not be commercial, i.e., some are fan sites.)

One of the authors, Casey Griffin, is a PhD candidate at the University of Southern California (USC) studying in the field of developmental and stem cell biology. I was not able to get much more information but did find her LinkedIn profile. The other author also has a science background. Nina Nesseth is described as a science communicator on the back cover of the book, but elsewhere she’s identified as a staff scientist for Science North, a science centre located in Sudbury, Ontario, Canada. Her LinkedIn profile lists an honours Bachelor of Science (Biological and Medical Sciences) from Laurentian University, also located in Sudbury, Ontario.

It’s no surprise, given the authors’ educational background, that a bibliography (selected) has been included. This is something I very much appreciated. Oddly, given that Nesseth lists a graduate certificate in publishing as one of her credentials (on LinkedIn), there is no index (!?!). Unusually, the copyright page is at the back of the book instead of the front and boasts a fairly harsh copyright notice (summary: don’t copy anything, ever … unless you get written permission from ECW Press and the other copyright owners; Note: Herter is the copyright owner of her Foreword while the authors own the rest).

There are logos on the copyright page—more than I’m accustomed to seeing. Interestingly, two of them are government logos. It seems that taxpayers contributed to the publication of this book. The copyright notice seems a little facey to me since taxpayers (at least partially) subsidized the book; as well, Canadian copyright law has a concept called fair dealing (in the US, there’s something similar: fair use). In other words, if I chose, I could copy portions of the text without asking for permission, provided there’s no intent to profit and I give attributions.

How, for example, could anyone profit from this?

In fact, in January 2017, Jun Wu and colleagues published their success in creating pig-human hybrids. (description of real research on chimeras on p. 98)

Or this snippet of dialogue,

[Charlotte] You’re my big sister.

[Sarah] How old are you? (p. 101)

All the quoted text is from “The Science of Orphan Black: The Official Companion” by Casey Griffin and Nina Nesseth (paperback published August 22, 2017).

On the subject of chimeras, the Canadian Broadcasting Corporation (CBC) featured a January 26, 2017 article about the pig-human chimeras on its website along with a video.

Getting back to the book, copyright silliness aside, it’s a good book for anyone interested in some of the science and the issues associated with biotechnology, synthetic biology, genomes, gene editing technologies, chimeras, and more. I don’t think you need to have seen the series in order to appreciate the book.

Star Trek Treknology; The Science of Star Trek from Tricorders to Warp Drive

This looks and feels like a coffee table book. The images in this book are of a much higher quality than those in the ‘Orphan Black’ book. With thicker paper and extensive ink coverage lending it a glossy, attractive look, it’s a physically heavy book. The unusually heavy use of black ink would seem to be in service of conveying the feeling that you are exploring the far reaches of outer space.

It’s clear that “Star Trek Treknology; The Science of Star Trek from Tricorders to Warp Drive’s” author, Ethan Siegel, PhD, is a serious Star Trek and space travel fan. All of the series and movies are referenced at one time or another in the book in relation to technology (treknology).

Unlike Siegel, I have never been personally interested in space travel, much as I love science fiction and Star Trek. Regardless, Siegel drew me in with his impressive ability to describe and explain physics-related ideas. Unfortunately, his final chapter on medical and biological ‘treknology’ is not as good. He covers a wide range of topics but no one is an expert on everything.

Siegel has a Wikipedia entry, which notes this (Note: Links have been removed),

Ethan R. Siegel (August 3, 1978, Bronx)[1] is an American theoretical astrophysicist and science writer, who studies Big Bang theory. He is a professor at Lewis & Clark College and he blogs at Starts With a Bang, on ScienceBlogs and also on Forbes.com since 2016.

By contrast with the ‘Orphan Black’ book, the tone is upbeat. It’s one of the reasons Siegel appreciates Star Trek in its various iterations,

As we look at the real-life science and technology behind the greatest advances anticipated by Star Trek, it’s worth remembering that the greatest legacy of the show is its message of hope. The future can be brighter and better than our past or present has ever been. It’s our continuing mission to make it so. (p. 6)

All the quoted text is from “Star Trek Treknology; The Science of Star Trek from Tricorders to Warp Drive” by Ethan Siegel (hard cover published October 15, 2017).

This book too has one of those copyright notices that fail to note you don’t need permission when it’s fair dealing to copy part of the text. While it does have an index, it’s on the anemic side and, damningly, there is neither a bibliography nor reference notes of any sort. If Siegel hadn’t done such a good writing job, I might not have been so distressed.

For example, it’s frustrating for someone like me who’s been trying to get information on cortical/neural implants and finds this heretofore unknown and intriguing tidbit in Siegel’s text,

In 2016, the very first successful cortical implant into a patient with ALS [amyotrophic lateral sclerosis] was completed, marking the very first fully implanted brain-computer interface in a human being. (p. 180)

Are we talking about the Australia team, which announced human clinical trials for their neural/cortical implant (my February 15, 2016 posting) or was it preliminary work by a team in Ohio (US) which later (?) announced a successful implant for a quadriplegic (also known as tetraplegic) patient who was then able to move hands and fingers (see my April 19, 2016 posting)? Or is it an entirely different team?

One other thing: I was a bit surprised to see no mention of quantum or neuromorphic computing in the chapter on computing. I don’t believe either was part of the Star Trek universe but they (neuromorphic and quantum computing) are important developments, and Siegel makes a point, on at least a few occasions, of contrasting present-day research with what was and wasn’t ‘predicted’ by Star Trek.

As for the ‘predictions’, there’s a longstanding interplay between storytellers and science and sometimes it can be a little hard to figure out which came first. I think Siegel might have emphasized that give and take a bit more.

Regardless of my nitpicking, Siegel is a good writer and managed to put an astonishing amount of ‘educational’ material into a lively and engaging book. That is not easy.

Final thoughts

I enjoyed both books and am very excited to see grounded science being presented along with the fictional stories of both universes (Star Trek and Orphan Black).

Yes, both books have their shortcomings (harsh copyright notices, no index, no bibliography, no reference notes, etc.) but in the main they offer adults who are sufficiently motivated a wealth of current scientific and technical information along with some elucidation of ethical issues.

Alberta adds a newish quantum nanotechnology research hub to Canada’s quantum computing research scene

One of the winners in Canada’s 2017 federal budget announcement of the Pan-Canadian Artificial Intelligence Strategy was Edmonton, Alberta. It’s a fact which sometimes goes unnoticed while Canadians marvel at the wonderfulness found in Toronto and Montréal where it seems new initiatives and monies are being announced on a weekly basis (I exaggerate) for their AI (artificial intelligence) efforts.

Alberta’s quantum nanotechnology hub (graduate programme)

Intriguingly, it seems that Edmonton has higher aims than (an almost unnoticed) leadership in AI. In a Nov. 27, 2017 article by Juris Graney for the Edmonton Journal, physicists at the University of Alberta announced their hopes of being just as successful as their AI brethren,

Physicists at the University of Alberta [U of A] are hoping to emulate the success of their artificial intelligence studying counterparts in establishing the city and the province as the nucleus of quantum nanotechnology research in Canada and North America.

Google’s artificial intelligence research division DeepMind announced in July [2017] it had chosen Edmonton as its first international AI research lab, based on a long-running partnership with the U of A’s 10-person AI lab.

Retaining the brightest minds in the AI and machine-learning fields while enticing a global tech leader to Alberta was heralded as a coup for the province and the university.

It is something U of A physics professor John Davis believes the university’s new graduate program, Quanta, can help achieve in the world of quantum nanotechnology.

The field of quantum mechanics had long been a realm of theoretical science based on the theory that atomic and subatomic material like photons or electrons behave both as particles and waves.

“When you get right down to it, everything has both behaviours (particle and wave) and we can pick and choose certain scenarios which one of those properties we want to use,” he said.

But, Davis said, physicists and scientists are “now at the point where we understand quantum physics and are developing quantum technology to take to the marketplace.”

“Quantum computing used to be realm of science fiction, but now we’ve figured it out, it’s now a matter of engineering,” he said.

Quantum computing labs are being bought by large tech companies such as Google, IBM and Microsoft because they realize they are only a few years away from having this power, he said.

Those making the groundbreaking developments may want to commercialize their finds and take the technology to market and that is where Quanta comes in.

East vs. West—Again?

Ivan Semeniuk, in his article ‘Quantum Supremacy’, ignores any quantum research effort not located in either Waterloo, Ontario, or metro Vancouver, British Columbia, to describe a struggle between the East and the West (a standard Canadian trope). From Semeniuk’s Oct. 17, 2017 quantum article [link follows the excerpts] for the Globe and Mail’s October 2017 issue of the Report on Business (ROB),

 Lazaridis [Mike], of course, has experienced lost advantage first-hand. As co-founder and former co-CEO of Research in Motion (RIM, now called Blackberry), he made the smartphone an indispensable feature of the modern world, only to watch rivals such as Apple and Samsung wrest away Blackberry’s dominance. Now, at 56, he is engaged in a high-stakes race that will determine who will lead the next technology revolution. In the rolling heartland of southwestern Ontario, he is laying the foundation for what he envisions as a new Silicon Valley—a commercial hub based on the promise of quantum technology.

Semeniuk skips over the story of how Blackberry lost its advantage. I came onto that story late in the game when Blackberry was already in serious trouble due to a failure to recognize that the field it helped to create was moving in a new direction. If memory serves, the company was trying to keep its technology wholly proprietary, which meant that developers couldn’t easily create apps to extend the phone’s features. Blackberry also fought a legal battle in the US with a patent troll, draining company resources and energy in what proved to be a futile effort.

Since then Lazaridis has invested heavily in quantum research. He gave the University of Waterloo a serious chunk of money, and the university named its Quantum Nano Centre (QNC) after him and his wife, Ophelia (you can read all about it in my Sept. 25, 2012 posting about the then new centre). The best details for Lazaridis’ investments in Canada’s quantum technology are to be found on the Quantum Valley Investments, About QVI, History webpage,

History has repeatedly demonstrated the power of research in physics to transform society.  As a student of history and a believer in the power of physics, Mike Lazaridis set out in 2000 to make real his bold vision to establish the Region of Waterloo as a world leading centre for physics research.  That is, a place where the best researchers in the world would come to do cutting-edge research and to collaborate with each other and in so doing, achieve transformative discoveries that would lead to the commercialization of breakthrough  technologies.

Establishing a World Class Centre in Quantum Research:

The first step in this regard was the establishment of the Perimeter Institute for Theoretical Physics.  Perimeter was established in 2000 as an independent theoretical physics research institute.  Mike started Perimeter with an initial pledge of $100 million (which at the time was approximately one third of his net worth).  Since that time, Mike and his family have donated a total of more than $170 million to the Perimeter Institute.  In addition to this unprecedented monetary support, Mike also devotes his time and influence to help lead and support the organization in everything from the raising of funds with government and private donors to helping to attract the top researchers from around the globe to it.  Mike’s efforts helped Perimeter achieve and grow its position as one of a handful of leading centres globally for theoretical research in fundamental physics.

Perimeter is located in a Governor-General award winning designed building in Waterloo.  Success in recruiting and resulting space requirements led to an expansion of the Perimeter facility.  A uniquely designed addition, which has been described as space-ship-like, was opened in 2011 as the Stephen Hawking Centre in recognition of one of the most famous physicists alive today who holds the position of Distinguished Visiting Research Chair at Perimeter and is a strong friend and supporter of the organization.

Recognizing the need for collaboration between theorists and experimentalists, in 2002, Mike applied his passion and his financial resources toward the establishment of The Institute for Quantum Computing at the University of Waterloo.  IQC was established as an experimental research institute focusing on quantum information.  Mike established IQC with an initial donation of $33.3 million.  Since that time, Mike and his family have donated a total of more than $120 million to the University of Waterloo for IQC and other related science initiatives.  As in the case of the Perimeter Institute, Mike devotes considerable time and influence to help lead and support IQC in fundraising and recruiting efforts.  Mike’s efforts have helped IQC become one of the top experimental physics research institutes in the world.

Mike and Doug Fregin have been close friends since grade 5.  They are also co-founders of BlackBerry (formerly Research In Motion Limited).  Doug shares Mike’s passion for physics and supported Mike’s efforts at the Perimeter Institute with an initial gift of $10 million.  Since that time Doug has donated a total of $30 million to Perimeter Institute.  Separately, Doug helped establish the Waterloo Institute for Nanotechnology at the University of Waterloo with total gifts for $29 million.  As suggested by its name, WIN is devoted to research in the area of nanotechnology.  It has established as an area of primary focus the intersection of nanotechnology and quantum physics.

With a donation of $50 million from Mike, which was matched by both the Government of Canada and the province of Ontario, as well as a donation of $10 million from Doug, the University of Waterloo built the Mike & Ophelia Lazaridis Quantum-Nano Centre, a state-of-the-art laboratory located on the main campus of the University of Waterloo that rivals the best facilities in the world. QNC was opened in September 2012 and houses researchers from both IQC and WIN.

Leading the Establishment of Commercialization Culture for Quantum Technologies in Canada:

For many years, theorists have been able to demonstrate the transformative powers of quantum mechanics on paper. That said, converting these theories into experimentally demonstrable discoveries has, to put it mildly, been a challenge. Many naysayers have suggested that achieving these discoveries was not possible, and even the believers suggested it could take decades. Recently, a buzz has been developing globally as experimentalists have achieved demonstrable success with Quantum Information-based discoveries. Local experimentalists are very much playing a leading role in this regard. Many believe that breakthrough discoveries leading to commercialization opportunities may be achieved in the next few years, and certainly within the next decade.

Recognizing the unique challenges of commercializing quantum technologies (including the risk associated with uncertainty of success, the complexity of the underlying science, and high capital/equipment costs), Mike and Doug have chosen to once again lead by example. The Quantum Valley Investment Fund will provide commercialization funding, expertise and support for researchers who develop breakthroughs in Quantum Information Science that can reasonably lead to new commercializable technologies and applications. Their goal in establishing this Fund is to lead the development of a commercialization infrastructure and culture for Quantum discoveries in Canada and thereby enable such discoveries to remain here.

Semeniuk goes on to set the stage for Waterloo/Lazaridis vs. Vancouver (from Semeniuk’s 2017 ROB article),

… as happened with BlackBerry, the world is once again catching up. While Canada's funding of quantum technology ranks among the top five in the world, the European Union, China, and the US are all accelerating their investments in the field. Tech giants such as Google [also known as Alphabet], Microsoft and IBM are ramping up programs to develop computers and other technologies based on quantum principles. Meanwhile, even as Lazaridis works to establish Waterloo as the country's quantum hub, a Vancouver-area company has emerged to challenge that claim. The two camps—one methodically focused on the long game, the other keen to stake an early commercial lead—have sparked an East-West rivalry that many observers of the Canadian quantum scene are at a loss to explain.

Is it possible that some of the rivalry might be due to an influential individual who has invested heavily in a ‘quantum valley’ and has a history of trying to ‘own’ a technology?

Getting back to D-Wave Systems, the Vancouver company, I have written about them a number of times (particularly in 2015; for the full list: input D-Wave into the blog search engine). This June 26, 2015 posting includes a reference to an article in The Economist magazine about D-Wave’s commercial opportunities while the bulk of the posting is focused on a technical breakthrough.

Semeniuk offers an overview of the D-Wave Systems story,

D-Wave was born in 1999, the same year Lazaridis began to fund quantum science in Waterloo. From the start, D-Wave had a more immediate goal: to develop a new computer technology to bring to market. “We didn’t have money or facilities,” says Geordie Rose, a physics PhD who co-founded the company and served in various executive roles. …

The group soon concluded that the kind of machine most scientists were pursuing, based on so-called gate-model architecture, was decades away from being realized—if ever. …

Instead, D-Wave pursued another idea, based on a principle dubbed “quantum annealing.” This approach seemed more likely to produce a working system, even if the applications that would run on it were more limited. “The only thing we cared about was building the machine,” says Rose. “Nobody else was trying to solve the same problem.”

D-Wave debuted its first prototype at an event in California in February 2007, running it through a few basic problems such as solving a Sudoku puzzle and finding the optimal seating plan for a wedding reception. … “They just assumed we were hucksters,” says Hilton [Jeremy Hilton, D-Wave senior vice-president of systems]. Federico Spedalieri, a computer scientist at the University of Southern California’s [USC] Information Sciences Institute who has worked with D-Wave’s system, says the limited information the company provided about the machine’s operation provoked outright hostility. “I think that played against them a lot in the following years,” he says.

It seems Lazaridis is not the only one who likes to hold company information tightly.
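An aside for readers wondering what “quantum annealing” actually involves: the idea is to encode a problem as a set of binary variables with an energy (cost) function, and let the machine settle into a low-energy, i.e., near-optimal, configuration. As a loose classical analogue only (D-Wave’s hardware relies on quantum effects such as tunnelling, not software loops), here’s a minimal simulated annealing sketch in Python over a toy QUBO (quadratic unconstrained binary optimization) objective; all names and numbers are illustrative,

```python
# A loose classical analogue of annealing: minimize a QUBO objective by
# gradually "cooling" random bit flips. Purely illustrative; real quantum
# annealing exploits quantum tunnelling in hardware, not thermal flips.
import math
import random

def qubo_energy(x, Q):
    """Energy of bit vector x under QUBO matrix Q: sum of Q[i][j]*x[i]*x[j]."""
    n = len(x)
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

def anneal(Q, steps=5000, t_start=2.0, t_end=0.01):
    n = len(Q)
    x = [random.randint(0, 1) for _ in range(n)]  # random starting bit string
    energy = qubo_energy(x, Q)
    for step in range(steps):
        t = t_start * (t_end / t_start) ** (step / steps)  # cooling schedule
        i = random.randrange(n)  # propose flipping one bit
        x[i] ^= 1
        new_energy = qubo_energy(x, Q)
        # Keep downhill moves always; keep uphill moves with Boltzmann
        # probability, so the search can escape local minima early on
        if new_energy <= energy or random.random() < math.exp((energy - new_energy) / t):
            energy = new_energy
        else:
            x[i] ^= 1  # reject: undo the flip
    return x, energy

# Toy objective: reward turning on x0 or x1 alone, penalize choosing both
Q = [[-1.0, 2.0],
     [0.0, -1.0]]
print(anneal(Q))  # typically ([1, 0], -1.0) or ([0, 1], -1.0)
```

The “temperature” lets the search accept occasional uphill moves early on so it doesn’t get stuck in a local minimum, which is roughly the role quantum tunnelling plays in the hardware version.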

Back to Semeniuk and D-Wave,

Today [October 2017], the Los Alamos National Laboratory owns a D-Wave machine, which costs about $15 million. Others pay to access D-Wave systems remotely. This year, for example, Volkswagen fed data from thousands of Beijing taxis into a machine located in Burnaby [one of the municipalities that make up metro Vancouver] to study ways to optimize traffic flow.

But the application for which D-Wave has the highest hopes is artificial intelligence. Any AI program hinges on the “training” through which a computer acquires automated competence, and the 2000Q [a D-Wave computer] appears well suited to this task. …

Yet, for all the buzz D-Wave has generated, with several research teams outside Canada investigating its quantum annealing approach, the company has elicited little interest from the Waterloo hub. As a result, what might seem like a natural development—the Institute for Quantum Computing acquiring access to a D-Wave machine to explore and potentially improve its value—has not occurred. …

I am particularly interested in this comment as it concerns public funding (from Semeniuk’s article),

Vern Brownell, a former Goldman Sachs executive who became CEO of D-Wave in 2009, calls the lack of collaboration with Waterloo’s research community “ridiculous,” adding that his company’s efforts to establish closer ties have proven futile. “I’ll be blunt: I don’t think our relationship is good enough,” he says. Brownell also points out that, while hundreds of millions in public funds have flowed into Waterloo’s ecosystem, little funding is available for Canadian scientists wishing to make the most of D-Wave’s hardware—despite the fact that it remains unclear which core quantum technology will prove the most profitable.

There’s a lot more to Semeniuk’s article but this is the last excerpt,

The world isn’t waiting for Canada’s quantum rivals to forge a united front. Google, Microsoft, IBM, and Intel are racing to develop a gate-model quantum computer—the sector’s ultimate goal. (Google’s researchers have said they will unveil a significant development early next year.) With the U.K., Australia and Japan pouring money into quantum, Canada, an early leader, is under pressure to keep up. The federal government is currently developing a strategy for supporting the country’s evolving quantum sector and, ultimately, getting a return on its approximately $1-billion investment over the past decade [emphasis mine].

I wonder where the “approximately $1-billion … ” figure came from. I ask because some years ago MP Peter Julian asked the government for information about how much Canadian federal money had been invested in nanotechnology. The government replied with sheets of paper (a pile approximately 2 inches high) that had funding disbursements from various ministries. Each ministry had its own method with different categories for listing disbursements and the titles for the research projects were not necessarily informative for anyone outside a narrow specialty. (Peter Julian’s assistant had kindly sent me a copy of the response they had received.) The bottom line is that it would have been close to impossible to determine the amount of federal funding devoted to nanotechnology using that data. So, where did the $1-billion figure come from?

In any event, it will be interesting to see how the Council of Canadian Academies assesses the ‘quantum’ situation in its more academically inclined, “The State of Science and Technology and Industrial Research and Development in Canada,” when it’s released later this year (2018).

Finally, you can find Semeniuk’s October 2017 article here but be aware it’s behind a paywall.

Whither we goest?

Despite any doubts one might have about Lazaridis’ approach to research and technology, his tremendous investment and support cannot be denied. Without him, Canada’s quantum research efforts would be substantially less significant. As for the ‘cowboys’ in Vancouver, it takes a certain temperament to found a start-up company and it seems the D-Wave folks have more in common with Lazaridis than they might like to admit. As for the Quanta graduate programme, it’s early days yet and no one should ever count out Alberta.

Meanwhile, one can continue to hope that a more thoughtful approach to regional collaboration will be adopted so Canada can continue to blaze trails in the field of quantum research.

Emerging technology and the law

I have three news bits about legal issues that are arising as a consequence of emerging technologies.

Deep neural networks, art, and copyright

Caption: The rise of automated art opens new creative avenues, coupled with new problems for copyright protection. Credit: Alexander Mordvintsev, Christopher Olah and Mike Tyka

Presumably this artwork is a demonstration of automated art, although the news item/news release never really explains how. An April 26, 2017 news item on ScienceDaily announces research into copyright and the latest in using neural networks to create art,

In 1968, sociologist Jean Baudrillard wrote on automatism that “contained within it is the dream of a dominated world […] that serves an inert and dreamy humanity.”

With the growing popularity of Deep Neural Networks (DNNs), this dream is fast becoming a reality.

Dr. Jean-Marc Deltorn, researcher at the Centre d’études internationales de la propriété intellectuelle in Strasbourg, argues that we must remain a responsive and responsible force in this process of automation — not inert dominators. As he demonstrates in a recent Frontiers in Digital Humanities paper, the dream of automation demands a careful study of the legal problems linked to copyright.

An April 26, 2017 Frontiers (publishing) news release on EurekAlert, which originated the news item, describes the research in more detail,

For more than half a century, artists have looked to computational processes as a way of expanding their vision. DNNs are the culmination of this cross-pollination: by learning to identify complex patterns, they can generate new creations.

These systems are made up of complex algorithms modeled on the transmission of signals between neurons in the brain.

DNN creations rely in equal measure on human inputs and the non-human algorithmic networks that process them.

Inputs are fed into the system, which is layered. Each layer provides an opportunity for a more refined knowledge of the inputs (shape, color, lines). Neural networks compare actual outputs to expected ones, and correct the predictive error through repetition and optimization. They train their own pattern recognition, thereby optimizing their learning curve and producing increasingly accurate outputs.

The deeper the layers are, the higher the level of abstraction. The highest layers are able to identify the contents of a given input with reasonable accuracy, after extended periods of training.
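The loop described in those last two paragraphs (compare actual outputs to expected ones, then correct the predictive error) is, at bottom, gradient descent. For the curious, here is a minimal, purely illustrative sketch in Python/NumPy of a toy two-layer network learning the XOR pattern; it is emphatically not the researchers’ code, just the generic training recipe the news release is describing,

```python
# Minimal sketch of the training loop described above: a tiny two-layer
# network learns XOR by repeatedly comparing its outputs to the expected
# ones and nudging its weights to shrink the predictive error.
import numpy as np

rng = np.random.default_rng(0)

# Training data: XOR, a classic pattern a single layer cannot capture
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # expected outputs

W1 = rng.normal(size=(2, 8)); b1 = np.zeros((1, 8))  # layer 1: inputs -> 8 hidden features
W2 = rng.normal(size=(8, 1)); b2 = np.zeros((1, 1))  # layer 2: features -> prediction
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(5000):
    # Forward pass: each layer produces a more refined representation of the input
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Compare actual outputs to expected ones
    err = out - y
    # Backward pass: correct the predictive error by moving every weight downhill
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(np.round(out, 2))  # after training, typically close to [[0], [1], [1], [0]]
```

Each pass refines the hidden representation a little, which is the “increasingly accurate outputs” the release refers to.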

Creation thus becomes increasingly automated through what Deltorn calls “the arcane traceries of deep architecture”. The results are sufficiently abstracted from their sources to produce original creations that have been exhibited in galleries, sold at auction and performed at concerts.

The originality of DNNs is a combined product of technological automation on the one hand, and human inputs and decisions on the other.

DNNs are gaining popularity. Various platforms (such as DeepDream) now allow internet users to generate their very own new creations. This popularization of the automation process calls for a comprehensive legal framework that ensures a creator’s economic and moral rights with regard to their work – copyright protection.

Form, originality and attribution are the three requirements for copyright. And while DNN creations satisfy the first of these three, the claim to originality and attribution will depend largely on a given country’s legislation and on the traceability of the human creator.

Legislation usually sets a low threshold for originality. As DNNs could in theory create an endless number of riffs on source materials, the uncurbed creation of original works could inflate the existing number of copyright protections.

Additionally, a small number of national copyright laws confer attribution on what UK legislation defines loosely as “the person by whom the arrangements necessary for the creation of the work are undertaken.” In the case of DNNs, this could mean anybody from the programmer to the user of a DNN interface.

Combined with an overly supple take on originality, this view on attribution would further increase the number of copyrightable works.

The risk, in both cases, is that artists will be less willing to publish their own works, for fear of infringement of DNN copyright protections.

In order to promote creativity – one seminal aim of copyright protection – the issue must be limited to creations that manifest a personal voice “and not just the electric glint of a computational engine,” to quote Deltorn. A delicate act of discernment.

DNNs promise new avenues of creative expression for artists – with potential caveats. Copyright protection – a “catalyst to creativity” – must be contained. Many of us gently bask in the glow of an increasingly automated form of technology. But if we want to safeguard the ineffable quality that defines much art, it might be a good idea to home in more closely on the differences between the electric and the creative spark.

This research is and will be part of a broader Frontiers Research Topic collection of articles on Deep Learning and Digital Humanities.

Here’s a link to and a citation for the paper,

Deep Creations: Intellectual Property and the Automata by Jean-Marc Deltorn. Front. Digit. Humanit., 01 February 2017 | https://doi.org/10.3389/fdigh.2017.00003

This paper is open access.

Conference on governance of emerging technologies

I received an April 17, 2017 notice via email about this upcoming conference. Here’s more from the Fifth Annual Conference on Governance of Emerging Technologies: Law, Policy and Ethics webpage,

The Fifth Annual Conference on Governance of Emerging Technologies: Law, Policy and Ethics, held at the new Beus Center for Law & Society in Phoenix, AZ, May 17-19, 2017!

Call for Abstracts – Now Closed

The conference will consist of plenary and session presentations and discussions on regulatory, governance, legal, policy, social and ethical aspects of emerging technologies, including (but not limited to) nanotechnology, synthetic biology, gene editing, biotechnology, genomics, personalized medicine, human enhancement technologies, telecommunications, information technologies, surveillance technologies, geoengineering, neuroscience, artificial intelligence, and robotics. The conference is premised on the belief that there is much to be learned and shared from and across the governance experience and proposals for these various emerging technologies.

Keynote Speakers:

Gillian Hadfield, Richard L. and Antoinette Schamoi Kirtland Professor of Law and Professor of Economics, USC [University of Southern California] Gould School of Law

Shobita Parthasarathy, Associate Professor of Public Policy and Women’s Studies, Director, Science, Technology, and Public Policy Program, University of Michigan

Stuart Russell, Professor at [University of California] Berkeley, a computer scientist known for his contributions to artificial intelligence

Craig Shank, Vice President for Corporate Standards Group in Microsoft’s Corporate, External and Legal Affairs (CELA)

Plenary Panels:

Innovation – Responsible and/or Permissionless

Ellen-Marie Forsberg, Senior Researcher/Research Manager at Oslo and Akershus University College of Applied Sciences

Adam Thierer, Senior Research Fellow with the Technology Policy Program at the Mercatus Center at George Mason University

Wendell Wallach, Consultant, ethicist, and scholar at Yale University’s Interdisciplinary Center for Bioethics

Gene Drives, Trade and International Regulations

Greg Kaebnick, Director, Editorial Department; Editor, Hastings Center Report; Research Scholar, Hastings Center

Jennifer Kuzma, Goodnight-North Carolina GlaxoSmithKline Foundation Distinguished Professor in Social Sciences in the School of Public and International Affairs (SPIA) and co-director of the Genetic Engineering and Society (GES) Center at North Carolina State University

Andrew Maynard, Senior Sustainability Scholar, Julie Ann Wrigley Global Institute of Sustainability; Director, Risk Innovation Lab, School for the Future of Innovation in Society; Professor, School for the Future of Innovation in Society, Arizona State University

Gary Marchant, Regents’ Professor of Law; Faculty Director and Faculty Fellow, Center for Law, Science & Innovation, Arizona State University

Marc Saner, Inaugural Director of the Institute for Science, Society and Policy, and Associate Professor, University of Ottawa Department of Geography

Big Data

Anupam Chander, Martin Luther King, Jr. Professor of Law and Director, California International Law Center, UC Davis School of Law

Pilar Ossorio, Professor of Law and Bioethics, University of Wisconsin, School of Law and School of Medicine and Public Health; Morgridge Institute for Research, Ethics Scholar-in-Residence

George Poste, Chief Scientist, Complex Adaptive Systems Initiative (CASI) (http://www.casi.asu.edu/), Regents’ Professor and Del E. Webb Chair in Health Innovation, Arizona State University

Emily Shuckburgh, climate scientist and deputy head of the Polar Oceans Team at the British Antarctic Survey, University of Cambridge

Responsible Development of AI

Spring Berman, Ira A. Fulton Schools of Engineering, Arizona State University

John Havens, The IEEE [Institute of Electrical and Electronics Engineers] Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems

Subbarao Kambhampati, Senior Sustainability Scientist, Julie Ann Wrigley Global Institute of Sustainability, Professor, School of Computing, Informatics and Decision Systems Engineering, Ira A. Fulton Schools of Engineering, Arizona State University

Wendell Wallach, Consultant, Ethicist, and Scholar at Yale University’s Interdisciplinary Center for Bioethics

Existential and Catastrophic Ricks [sic]

Tony Barrett, Co-Founder and Director of Research of the Global Catastrophic Risk Institute

Haydn Belfield, Academic Project Administrator, Centre for the Study of Existential Risk at the University of Cambridge

Margaret E. Kosal, Associate Director, Sam Nunn School of International Affairs, Georgia Institute of Technology

Catherine Rhodes, Academic Project Manager, Centre for the Study of Existential Risk (CSER), University of Cambridge

These are the panels of interest to me; there are others on the homepage.

Here’s some information from the Conference registration webpage,

Early Bird Registration – $50 off until May 1! Enter discount code: earlybirdGETs50

New: Group Discount – Register 2+ attendees together and receive an additional 20% off for all group members!

Conference registration fees are as follows:

  • General (non-CLE) Registration: $150.00
  • CLE Registration: $350.00
  • *Current Student / ASU Law Alumni Registration: $50.00
  • ^Cybersecurity sessions only (May 19): $100 CLE / $50 General / Free for students (registration info coming soon)

There you have it.

Neuro-techno future laws

I’m pretty sure this isn’t the first exploration of potential legal issues arising from research into neuroscience, although it’s the first one I’ve stumbled across. From an April 25, 2017 news item on phys.org,

New human rights laws to prepare for advances in neurotechnology that put the ‘freedom of the mind’ at risk have been proposed today in the open access journal Life Sciences, Society and Policy.

The authors of the study suggest four new human rights laws could emerge in the near future to protect against exploitation and loss of privacy. The four laws are: the right to cognitive liberty, the right to mental privacy, the right to mental integrity and the right to psychological continuity.

An April 25, 2017 Biomed Central news release on EurekAlert, which originated the news item, describes the work in more detail,

Marcello Ienca, lead author and PhD student at the Institute for Biomedical Ethics at the University of Basel, said: “The mind is considered to be the last refuge of personal freedom and self-determination, but advances in neural engineering, brain imaging and neurotechnology put the freedom of the mind at risk. Our proposed laws would give people the right to refuse coercive and invasive neurotechnology, protect the privacy of data collected by neurotechnology, and protect the physical and psychological aspects of the mind from damage by the misuse of neurotechnology.”

Advances in neurotechnology, such as sophisticated brain imaging and the development of brain-computer interfaces, have led to these technologies moving away from a clinical setting and into the consumer domain. While these advances may be beneficial for individuals and society, there is a risk that the technology could be misused and create unprecedented threats to personal freedom.

Professor Roberto Andorno, co-author of the research, explained: “Brain imaging technology has already reached a point where there is discussion over its legitimacy in criminal court, for example as a tool for assessing criminal responsibility or even the risk of reoffending. Consumer companies are using brain imaging for ‘neuromarketing’, to understand consumer behaviour and elicit desired responses from customers. There are also tools such as ‘brain decoders’ which can turn brain imaging data into images, text or sound. All of these could pose a threat to personal freedom which we sought to address with the development of four new human rights laws.”

The authors explain that as neurotechnology improves and becomes commonplace, there is a risk that the technology could be hacked, allowing a third-party to ‘eavesdrop’ on someone’s mind. In the future, a brain-computer interface used to control consumer technology could put the user at risk of physical and psychological damage caused by a third-party attack on the technology. There are also ethical and legal concerns over the protection of data generated by these devices that need to be considered.

International human rights laws make no specific mention of neuroscience, although advances in biomedicine have become intertwined with laws, such as those concerning human genetic data. Similar to the historical trajectory of the genetic revolution, the authors state that the ongoing neurorevolution will force a reconceptualization of human rights laws and even the creation of new ones.

Marcello Ienca added: “Science-fiction can teach us a lot about the potential threat of technology. Neurotechnology featured in famous stories has in some cases already become a reality, while others are inching ever closer, or exist as military and commercial prototypes. We need to be prepared to deal with the impact these technologies will have on our personal freedom.”

Here’s a link to and a citation for the paper,

Towards new human rights in the age of neuroscience and neurotechnology by Marcello Ienca and Roberto Andorno. Life Sciences, Society and Policy 2017 13:5 DOI: 10.1186/s40504-017-0050-1 Published: 26 April 2017

© The Author(s). 2017

This paper is open access.