Category Archives: robots

Sonifying proteins to make music and brand new proteins

Markus Buehler at the Massachusetts Institute of Technology (MIT) has been working with music and science for a number of years. My December 9, 2011 posting, Music, math, and spiderwebs, was the first one here featuring his work. My November 28, 2012 posting, Producing stronger silk musically, followed up on that work.

A June 28, 2019 news item on Azonano provides a recent update,

Composers string notes of different pitch and duration together to create music. Similarly, cells join amino acids with different characteristics together to make proteins.

Now, researchers have bridged these two seemingly disparate processes by translating protein sequences into musical compositions and then using artificial intelligence to convert the sounds into brand-new proteins. …

Caption: Researchers at MIT have developed a system for converting the molecular structures of proteins, the basic building blocks of all living beings, into audible sound that resembles musical passages. Then, reversing the process, they can introduce some variations into the music and convert it back into new proteins never before seen in nature. Credit: Zhao Qin and Francisco Martin-Martinez

A June 26, 2019 American Chemical Society (ACS) news release, which originated the news item, provides more detail and a video,

To make proteins, cellular structures called ribosomes add one of 20 different amino acids to a growing chain in combinations specified by the genetic blueprint. The properties of the amino acids and the complex shapes into which the resulting proteins fold determine how the molecule will work in the body. To better understand a protein’s architecture, and possibly design new ones with desired features, Markus Buehler and colleagues wanted to find a way to translate a protein’s amino acid sequence into music.

The researchers transposed the unique natural vibrational frequencies of each amino acid into sound frequencies that humans can hear. In this way, they generated a scale consisting of 20 unique tones. Unlike musical notes, however, each amino acid tone consisted of the overlay of many different frequencies –– similar to a chord. Buehler and colleagues then translated several proteins into audio compositions, with the duration of each tone specified by the different 3D structures that make up the molecule. Finally, the researchers used artificial intelligence to recognize specific musical patterns that corresponded to certain protein architectures. The computer then generated scores and translated them into new-to-nature proteins. In addition to being a tool for protein design and for investigating disease mutations, the method could be helpful for explaining protein structure to broad audiences, the researchers say. They even developed an Android app [Amino Acid Synthesizer] to allow people to create their own bio-based musical compositions.
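For readers who want a feel for the mapping, here is a minimal sketch in Python of the general idea, assuming a simple 20-step scale and made-up note durations; it is not the researchers' code (their tones are chord-like spectra computed from each amino acid's vibrational frequencies, and durations come from the protein's 3D structure).

# A toy sketch of the sonification idea: map each of the 20 amino acids to a
# tone in a 20-step scale and let a (hypothetical) secondary-structure string
# set each note's duration. Not the MIT team's method, just an illustration.
import numpy as np
import wave

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"          # one-letter codes
BASE_FREQ = 220.0                              # arbitrary anchor pitch (A3)
# 20 steps of a tenth of an octave each -- a stand-in for the chord-like
# spectra the researchers derive from quantum chemistry.
SCALE = {aa: BASE_FREQ * 2 ** (i / 10) for i, aa in enumerate(AMINO_ACIDS)}
DURATIONS = {"H": 0.40, "E": 0.25, "C": 0.15}  # helix, sheet, coil (seconds)

def sonify(sequence, structure, rate=44100):
    """Return a mono waveform for an amino acid sequence."""
    notes = []
    for aa, ss in zip(sequence, structure):
        freq, dur = SCALE[aa], DURATIONS[ss]
        t = np.linspace(0, dur, int(rate * dur), endpoint=False)
        note = 0.4 * np.sin(2 * np.pi * freq * t)
        notes.append(note * np.hanning(len(t)))  # soften note boundaries
    return np.concatenate(notes)

def write_wav(path, samples, rate=44100):
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(rate)
        w.writeframes((samples * 32767).astype(np.int16).tobytes())

write_wav("protein.wav", sonify("MKTAYIAKQR", "CHHHHEEECC"))  # toy sequence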

Here’s the ACS video,

A June 26, 2019 MIT news release (also on EurekAlert) provides some specifics and the MIT news release includes two embedded audio files,

Want to create a brand new type of protein that might have useful properties? No problem. Just hum a few bars.

In a surprising marriage of science and art, researchers at MIT have developed a system for converting the molecular structures of proteins, the basic building blocks of all living beings, into audible sound that resembles musical passages. Then, reversing the process, they can introduce some variations into the music and convert it back into new proteins never before seen in nature.

Although it’s not quite as simple as humming a new protein into existence, the new system comes close. It provides a systematic way of translating a protein’s sequence of amino acids into a musical sequence, using the physical properties of the molecules to determine the sounds. Although the sounds are transposed in order to bring them within the audible range for humans, the tones and their relationships are based on the actual vibrational frequencies of each amino acid molecule itself, computed using theories from quantum chemistry.

The system was developed by Markus Buehler, the McAfee Professor of Engineering and head of the Department of Civil and Environmental Engineering at MIT, along with postdoc Chi Hua Yu and two others. As described in the journal ACS Nano, the system translates the 20 types of amino acids, the building blocks that join together in chains to form all proteins, into a 20-tone scale. Any protein’s long sequence of amino acids then becomes a sequence of notes.

While such a scale sounds unfamiliar to people accustomed to Western musical traditions, listeners can readily recognize the relationships and differences after familiarizing themselves with the sounds. Buehler says that after listening to the resulting melodies, he is now able to distinguish certain amino acid sequences that correspond to proteins with specific structural functions. “That’s a beta sheet,” he might say, or “that’s an alpha helix.”

Learning the language of proteins

The whole concept, Buehler explains, is to get a better handle on understanding proteins and their vast array of variations. Proteins make up the structural material of skin, bone, and muscle, but are also enzymes, signaling chemicals, molecular switches, and a host of other functional materials that make up the machinery of all living things. But their structures, including the way they fold themselves into the shapes that often determine their functions, are exceedingly complicated. “They have their own language, and we don’t know how it works,” he says. “We don’t know what makes a silk protein a silk protein or what patterns reflect the functions found in an enzyme. We don’t know the code.”

By translating that language into a different form that humans are particularly well-attuned to, and that allows different aspects of the information to be encoded in different dimensions — pitch, volume, and duration — Buehler and his team hope to glean new insights into the relationships and differences between different families of proteins and their variations, and use this as a way of exploring the many possible tweaks and modifications of their structure and function. As with music, the structure of proteins is hierarchical, with different levels of structure at different scales of length or time.

The team then used an artificial intelligence system to study the catalog of melodies produced by a wide variety of different proteins. They had the AI system introduce slight changes in the musical sequence or create completely new sequences, and then translated the sounds back into proteins that correspond to the modified or newly designed versions. With this process they were able to create variations of existing proteins — for example of one found in spider silk, one of nature’s strongest materials — thus making new proteins unlike any produced by evolution.

Although the researchers themselves may not know the underlying rules, “the AI has learned the language of how proteins are designed,” and it can encode it to create variations of existing versions, or completely new protein designs, Buehler says. Given that there are “trillions and trillions” of potential combinations, he says, when it comes to creating new proteins “you wouldn’t be able to do it from scratch, but that’s what the AI can do.”

“Composing” new proteins

Using such a system, he says, training the AI with a set of data for a particular class of proteins might take a few days, but it can then produce a design for a new variant within microseconds. “No other method comes close,” he says. “The shortcoming is the model doesn’t tell us what’s really going on inside. We just know it works.”

This way of encoding structure into music does reflect a deeper reality. “When you look at a molecule in a textbook, it’s static,” Buehler says. “But it’s not static at all. It’s moving and vibrating. Every bit of matter is a set of vibrations. And we can use this concept as a way of describing matter.”

The method does not yet allow for any kind of directed modifications — any changes in properties such as mechanical strength, elasticity, or chemical reactivity will be essentially random. “You still need to do the experiment,” he says. When a new protein variant is produced, “there’s no way to predict what it will do.”

The team also created musical compositions developed from the sounds of amino acids, which define this new 20-tone musical scale. The art pieces they constructed consist entirely of the sounds generated from amino acids. “There are no synthetic or natural instruments used, showing how this new source of sounds can be utilized as a creative platform,” Buehler says. Musical motifs derived from both naturally existing proteins and AI-generated proteins are used throughout the examples, and all the sounds, including some that resemble bass or snare drums, are also generated from the sounds of amino acids.

The researchers have created a free Android smartphone app, called Amino Acid Synthesizer, to play the sounds of amino acids and record protein sequences as musical compositions.

Here’s a link to and a citation for the paper,

A Self-Consistent Sonification Method to Translate Amino Acid Sequences into Musical Compositions and Application in Protein Design Using Artificial Intelligence by Chi-Hua Yu, Zhao Qin, Francisco J. Martin-Martinez, and Markus J. Buehler. ACS Nano 2019. DOI: https://doi.org/10.1021/acsnano.9b02180 Publication Date: June 26, 2019 Copyright © 2019 American Chemical Society

This paper is behind a paywall.

ETA October 23, 2019 1000 hours: Oops! I almost forgot the link to the Amino Acid Synthesizer.

October 2019 science and art/science events in Vancouver and other parts of Canada

This is a scattering of events, which I’m sure will be augmented as we get properly into October 2019.

October 2, 2019 in Waterloo, Canada (Perimeter Institute)

If you want to be close enough to press the sacred flesh (Sir Martin Rees), you’re out of luck. However, there are still options, ranging from watching a live webcast from the comfort of your home, to watching the lecture via closed-circuit television with other devoted fans at a licensed bistro on site at the Perimeter Institute (PI), to catching the lecture at a later date via YouTube.

That said, here’s why you might be interested. From a September 11, 2019 Perimeter Institute (PI) announcement received via email,

Surviving the Century
MOVING TOWARD A POST-HUMAN FUTURE
Martin Rees, UK Astronomer Royal
Wednesday, Oct. 2 at 7:00 PM ET

Advances in technology and space exploration could, if applied wisely, allow a bright future for the 10 billion people living on earth by the end of the century.

But there are dystopian risks we ignore at our peril: our collective “footprint” on our home planet, as well as the creation and use of technologies so powerful that even small groups could cause a global catastrophe.

Martin Rees, the UK Astronomer Royal, will explore this unprecedented moment in human history during his lecture on October 2, 2019. A former president of the Royal Society and master of Trinity College, Cambridge, Rees is a cosmologist whose work also explores the interfaces between science, ethics, and politics. Read More.

Mark your calendar! Tickets will be available on Monday, Sept. 16 at 9 AM ET

Didn’t get tickets for the lecture? We’ve got more ways to watch.
Join us at Perimeter on lecture night to watch live in the Black Hole Bistro.
Catch the live stream on Inside the Perimeter or watch it on YouTube the next day.
Become a member of our donor thank you program! Learn more.

It took me a while to locate an address for the PI venue since I expect that information to be part of the announcement. (insert cranky emoticon here) Here’s the address: Perimeter Institute, Mike Lazaridis Theatre of Ideas, 31 Caroline St. N., Waterloo, ON

Before moving onto the next event, I’m including a paragraph from the event description that was not included in the announcement (from the PI Outreach Surviving the Century webpage),

In his October 2 [2019] talk – which kicks off the 2019/20 season of the Perimeter Institute Public Lecture Series – Rees will discuss the outlook for humans (or their robotic envoys) venturing to other planets. Humans, Rees argues, will be ill-adapted to new habitats beyond Earth, and will use genetic and cyborg technology to transform into a “post-human” species.

I first covered Sir Martin Rees and his concerns about technology (robots and cyborgs run amok) in this November 26, 2012 posting about existential risk. He and his colleagues at Cambridge University, UK, proposed a Centre for the Study of Existential Risk, which opened in 2015.

Straddling Sept. and Oct. at the movies in Vancouver

The Vancouver International Film Festival (VIFF) opened today, September 26, 2019. During its run to October 11, 2019, there’ll be a number of documentaries that touch on science. Here are three of the documentaries that most closely adhere to the topics I’m most likely to address on this blog. A fourth documentary is included as it touches on ecology in a more hopeful fashion than is the current trend.

Human Nature

From the VIFF 2019 film description and ticket page,

One of the most significant scientific breakthroughs in history, the discovery of CRISPR has made it possible to manipulate human DNA, paving the path to a future of great possibilities.

The implications of this could mean the eradication of disease or, more controversially, the possibility of genetically pre-programmed children.

Breaking away from scientific jargon, Human Nature pieces together a complex account of bio-research for the layperson as compelling as a work of science-fiction. But whether the gene-editing powers of CRISPR (described as “a word processor for DNA”) are used for good or evil, they’re reshaping the world as we know it. As we push past the boundaries of what it means to be human, Adam Bolt’s stunning work of science journalism reaches out to scientists, engineers, and people whose lives could benefit from CRISPR technology, and offers a wide-ranging look at the pros and cons of designing our futures.

Tickets
Friday, September 27, 2019 at 11:45 AM
Vancity Theatre

Saturday, September 28, 2019 at 11:15 AM
International Village 10

Thursday, October 10, 2019 at 6:45 PM
SFU Goldcorp

According to VIFF, the tickets for the Sept. 27, 2019 show are going fast.

Resistance Fighters

From the VIFF 2019 film description and ticket page,

Since mass-production in the 1940s, antibiotics have been nothing less than miraculous, saving countless lives and revolutionizing modern medicine. It’s virtually impossible to imagine hospitals or healthcare without them. But after years of abuse and mismanagement by the medical and agricultural communities, superbugs resistant to antibiotics are reaching apocalyptic proportions. The ongoing rise in multi-resistant bacteria – unvanquishable microbes, currently responsible for 700,000 deaths per year and projected to kill 10 million yearly by 2050 if nothing changes – and the people who fight them are the subjects of Michael Wech’s stunning “science-thriller.”

Peeling back the carefully constructed veneer of the medical corporate establishment’s greed and complacency to reveal the world on the cusp of a potential crisis, Resistance Fighters sounds a clarion call of urgency. It’s an all-out war, one which most of us never knew we were fighting, to avoid “Pharmageddon.” Doctors, researchers, patients, and diplomats testify about shortsighted medical and economic practices, while Wech offers refreshingly original perspectives on environment, ecology, and (animal) life in general. As alarming as it is informative, this is a wake-up call the world needs to hear.

Sunday, October 6, 2019 at 5:45 PM
International Village 8

Thursday, October 10, 2019 at 2:15 PM
SFU Goldcorp

According to VIFF, the tickets for the Oct. 6, 2019 show are going fast.

Trust Machine: The Story of Blockchain

Strictly speaking, this is more of a technology story than a science story, but I have written about blockchain and cryptocurrencies before, so I’m including it. From the VIFF 2019 film description and ticket page,

For anyone who has questions about cryptocurrencies like Bitcoin (and who doesn’t?), Alex Winter’s thorough documentary is an excellent introduction to the blockchain phenomenon. Trust Machine offers a wide range of expert testimony and a variety of perspectives that explicate the promises and the risks inherent in this new manifestation of high-tech wizardry. And it’s not just money that blockchains threaten to disrupt: innovators as diverse as UNICEF and Imogen Heap make spirited arguments that the industries of energy, music, humanitarianism, and more are headed for revolutionary change.

A propulsive and subversive overview of this little-understood phenomenon, Trust Machine crafts a powerful and accessible case that a technologically decentralized economy is more than just a fad. As the aforementioned experts – tech wizards, underground activists, and even some establishment figures – argue persuasively for an embrace of the possibilities offered by blockchains, others criticize its bubble-like markets and inefficiencies. Either way, Winter’s film suggests a whole new epoch may be just around the corner, whether the powers that be like it or not.

Tuesday, October 1, 2019 at 11:00 AM
Vancity Theatre

Thursday, October 3, 2019 at 9:00 PM
Vancity Theatre

Monday, October 7, 2019 at 1:15 PM
International Village 8

According to VIFF, tickets for all three shows are going fast

The Great Green Wall

For a little bit of hope, from the VIFF 2019 film description and ticket page,

“We must dare to invent the future.” In 2007, the African Union officially began a massively ambitious environmental project planned since the 1970s. Stretching through 11 countries and 8,000 km across the desertified Sahel region, on the southern edges of the Sahara, The Great Green Wall – once completed, a mosaic of restored, fertile land – would be the largest living structure on Earth.

Malian musician-activist Inna Modja embarks on an expedition through Senegal, Mali, Nigeria, Niger, and Ethiopia, gathering an ensemble of musicians and artists to celebrate the pan-African dream of realizing The Great Green Wall. Her journey is accompanied by a dazzling array of musical diversity, celebrating local cultures and traditions as they come together into a community to stand against the challenges of desertification, drought, migration, and violent conflict.

An unforgettable, beautiful exploration of a modern marvel of ecological restoration, and so much more than a passive source of information, The Great Green Wall is a powerful call to take action and help reshape the world.

Sunday, September 29, 2019 at 11:15 AM
International Village 10

Wednesday, October 2, 2019 at 6:00 PM
International Village 8
Standby – advance tickets are sold out but a limited number are likely to be released at the door

Wednesday, October 9, 2019 at 11:00 AM
International Village 9

As you can see, one show is already offering standby tickets only and the other two are selling quickly.

For venue locations, information about what ‘standby’ means, and much more, go here and click on the Festival tab. As for more information about the individual films, you’ll find links to trailers, running times, and more on the pages for which I’ve supplied links.

Brain Talks on October 16, 2019 in Vancouver

From time to time I get notices about a series titled Brain Talks from the Dept. of Psychiatry at the University of British Columbia. A September 11, 2019 announcement (received via email) focuses attention on the ‘guts of the matter’,

YOU ARE INVITED TO ATTEND:

BRAINTALKS: THE BRAIN AND THE GUT

WEDNESDAY, OCTOBER 16TH, 2019 FROM 6:00 PM – 8:00 PM

Join us on Wednesday October 16th [2019] for a series of talks exploring the relationship between the brain, microbes, mental health, diet and the gut. We are honored to host three phenomenal presenters for the evening: Dr. Brett Finlay, Dr. Leslie Wicholas, and Thara Vayali, ND.

DR. BRETT FINLAY is a Professor in the Michael Smith Laboratories at the University of British Columbia. Dr. Finlay’s research interests are focused on host-microbe interactions at the molecular level, specializing in Cellular Microbiology. He has published over 500 papers and has been inducted into the Canadian Medical Hall of Fame. He is the co-author of the books Let Them Eat Dirt and The Whole Body Microbiome.

DR. LESLIE WICHOLAS is a psychiatrist with an expertise in the clinical understanding of the gut-brain axis. She has become increasingly involved in the emerging field of Nutritional Psychiatry, exploring connections between diet, nutrition, and mental health. Currently, Dr. Wicholas is the director of the Food as Medicine program at the Mood Disorder Association of BC.

THARA VAYALI, ND holds a BSc in Nutritional Sciences and an MA in Education and Communications. She has trained in naturopathic medicine and advocates for awareness about women’s physiology and body literacy. Ms. Vayali is a frequent speaker and columnist who prioritizes engagement, understanding, and community as pivotal pillars for change.

Our event on Wednesday, October 16th [2019] will start with presentations from each of the three speakers, and end with a panel discussion inspired by audience questions. After the talks, at 7:30 pm, we host a social gathering with a rich spread of catered healthy food and non-alcoholic drinks. We look forward to seeing you there!

Paetzhold Theater

Vancouver General Hospital; Jim Pattison Pavilion, Vancouver, BC


That’s it for now.

Ghosts, mechanical turks, and pseudo-AI (artificial intelligence)—Is it all a con game?

There’s been more than one artificial intelligence (AI) story featured on this blog, but the ones featured in this posting are the first I’ve stumbled across that suggest the hype is even more exaggerated than the most cynical among us might have thought. (BTW, the 2019 material comes later as I have taken a chronological approach to this posting.)

It seems a lot of companies touting their AI algorithms and capabilities are relying on human beings to do the work, from a July 6, 2018 article by Olivia Solon for the Guardian (Note: A link has been removed),

It’s hard to build a service powered by artificial intelligence. So hard, in fact, that some startups have worked out it’s cheaper and easier to get humans to behave like robots than it is to get machines to behave like humans.

“Using a human to do the job lets you skip over a load of technical and business development challenges. It doesn’t scale, obviously, but it allows you to build something and skip the hard part early on,” said Gregory Koberger, CEO of ReadMe, who says he has come across a lot of “pseudo-AIs”.

“It’s essentially prototyping the AI with human beings,” he said.

In 2017, the business expense management app Expensify admitted that it had been using humans to transcribe at least some of the receipts it claimed to process using its “smartscan technology”. Scans of the receipts were being posted to Amazon’s Mechanical Turk crowdsourced labour tool, where low-paid workers were reading and transcribing them.

“I wonder if Expensify SmartScan users know MTurk workers enter their receipts,” said Rochelle LaPlante, a “Turker” and advocate for gig economy workers on Twitter. “I’m looking at someone’s Uber receipt with their full name, pick-up and drop-off addresses.”

Even Facebook, which has invested heavily in AI, relied on humans for its virtual assistant for Messenger, M.

In some cases, humans are used to train the AI system and improve its accuracy. …

The Turk

Fooling people with machines that seem intelligent is not new according to a Sept. 10, 2018 article by Seth Stevenson for Slate.com (Note: Links have been removed),

It’s 1783, and Paris is gripped by the prospect of a chess match. One of the contestants is François-André Philidor, who is considered the greatest chess player in Paris, and possibly the world. Everyone is so excited because Philidor is about to go head-to-head with the other biggest sensation in the chess world at the time.

But his opponent isn’t a man. And it’s not a woman, either. It’s a machine.

This story may sound a lot like Garry Kasparov taking on Deep Blue, IBM’s chess-playing supercomputer. But that was only a couple of decades ago, and this chess match in Paris happened more than 200 years ago. It doesn’t seem like a robot that can play chess would even be possible in the 1780s. This machine playing against Philidor was making an incredible technological leap—playing chess, and not only that, but beating humans at chess.

In the end, it didn’t quite beat Philidor, but the chess master called it one of his toughest matches ever. It was so hard for Philidor to get a read on his opponent, which was a carved wooden figure—slightly larger than life—wearing elaborate garments and offering a cold, mean stare.

It seems like the minds of the era would have been completely blown by a robot that could nearly beat a human chess champion. Some people back then worried that it was black magic, but many folks took the development in stride. …

Debates about the hottest topic in technology today—artificial intelligence—didn’t start in the 1940s, with people like Alan Turing and the first computers. It turns out that the arguments about AI go back much further than you might imagine. The story of the 18th-century chess machine turns out to be one of those curious tales from history that can help us understand technology today, and where it might go tomorrow.

[In future episodes of our podcast, Secret History of the Future,] we’re going to look at the first cyberattack, which happened in the 1830s, and find out how the Victorians invented virtual reality.

Philidor’s opponent was known as The Turk or Mechanical Turk, and that ‘machine’ was in fact a masterful hoax: The Turk held a hidden compartment from which a human being directed its moves.

People pretending to be AI agents

It seems that today’s AI has something in common with the 18th century Mechanical Turk: there are often humans lurking in the background making things work. From a Sept. 4, 2018 article by Janelle Shane for Slate.com (Note: Links have been removed),

Every day, people are paid to pretend to be bots.

In a strange twist on “robots are coming for my job,” some tech companies that boast about their artificial intelligence have found that at small scales, humans are a cheaper, easier, and more competent alternative to building an A.I. that can do the task.

Sometimes there is no A.I. at all. The “A.I.” is a mockup powered entirely by humans, in a “fake it till you make it” approach used to gauge investor interest or customer behavior. Other times, a real A.I. is combined with human employees ready to step in if the bot shows signs of struggling. These approaches are called “pseudo-A.I.” or sometimes, more optimistically, “hybrid A.I.”

Although some companies see the use of humans for “A.I.” tasks as a temporary bridge, others are embracing pseudo-A.I. as a customer service strategy that combines A.I. scalability with human competence. They’re advertising these as “hybrid A.I.” chatbots, and if they work as planned, you will never know if you were talking to a computer or a human. Every remote interaction could turn into a form of the Turing test. So how can you tell if you’re dealing with a bot pretending to be a human or a human pretending to be a bot?

One of the ways you can’t tell anymore is by looking for human imperfections like grammar mistakes or hesitations. In the past, chatbots had prewritten bits of dialogue that they could mix and match according to built-in rules. Bot speech was synonymous with precise formality. In early Turing tests, spelling mistakes were often a giveaway that the hidden speaker was a human. Today, however, many chatbots are powered by machine learning. Instead of using a programmer’s rules, these algorithms learn by example. And many training data sets come from services like Amazon’s Mechanical Turk, which lets programmers hire humans from around the world to generate examples of tasks like asking and answering questions. These data sets are usually full of casual speech, regionalisms, or other irregularities, so that’s what the algorithms learn. It’s not uncommon these days to get algorithmically generated image captions that read like text messages. And sometimes programmers deliberately add these things in, since most people don’t expect imperfections of an algorithm. In May, Google’s A.I. assistant made headlines for its ability to convincingly imitate the “ums” and “uhs” of a human speaker.

Limited computing power is the main reason that bots are usually good at just one thing at a time. Whenever programmers try to train machine learning algorithms to handle additional tasks, they usually get algorithms that can do many tasks rather badly. In other words, today’s algorithms are artificial narrow intelligence, or A.N.I., rather than artificial general intelligence, or A.G.I. For now, and for many years in the future, any algorithm or chatbot that claims A.G.I-level performance—the ability to deal sensibly with a wide range of topics—is likely to have humans behind the curtain.

Another bot giveaway is a very poor memory. …

Bringing AI to life: ghosts

Sidney Fussell’s April 15, 2019 article for The Atlantic provides more detail about the human/AI interface as found in some Amazon products such as Alexa (a voice-control system),

… Alexa-enabled speakers can and do interpret speech, but Amazon relies on human guidance to make Alexa, well, more human—to help the software understand different accents, recognize celebrity names, and respond to more complex commands. This is true of many artificial intelligence–enabled products. They’re prototypes. They can only approximate their promised functions while humans help with what Harvard researchers have called “the paradox of automation’s last mile.” Advancements in AI, the researchers write, create temporary jobs such as tagging images or annotating clips, even as the technology is meant to supplant human labor. In the case of the Echo, gig workers are paid to improve its voice-recognition software—but then, when it’s advanced enough, it will be used to replace the hostess in a hotel lobby.

A 2016 paper by researchers at Stanford University used a computer vision system to infer, with 88 percent accuracy, the political affiliation of 22 million people based on what car they drive and where they live. Traditional polling would require a full staff, a hefty budget, and months of work. The system completed the task in two weeks. But first, it had to know what a car was. The researchers paid workers through Amazon’s Mechanical Turk [emphasis mine] platform to manually tag thousands of images of cars, so the system would learn to differentiate between shapes, styles, and colors.

It may be a rude awakening for Amazon Echo owners, but AI systems require enormous amounts of categorized data, before, during, and after product launch. …

Isn’t it interesting that Amazon also has a crowdsourcing marketplace for its own products? Calling it ‘Mechanical Turk’ after a famous 18th century hoax would suggest a dark sense of humour somewhere in the corporation. (You can find out more about the Amazon Mechanical Turk on this Amazon website and in its Wikipedia entry.)

Anthropologist Mary L. Gray has coined the phrase ‘ghost work’ for the work that humans perform but for which AI gets the credit. Angela Chan’s May 13, 2019 article for The Verge features Gray as she promotes her latest book, written with Siddharth Suri, ‘Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass’ (Note: A link has been removed),

“Ghost work” is anthropologist Mary L. Gray’s term for the invisible labor that powers our technology platforms. When Gray, a senior researcher at Microsoft Research, first arrived at the company, she learned that building artificial intelligence requires people to manage and clean up data to feed to the training algorithms. “I basically started asking the engineers and computer scientists around me, ‘Who are the people you pay to do this task work of labeling images and classification tasks and cleaning up databases?’” says Gray. Some people said they didn’t know. Others said they didn’t want to know and were concerned that if they looked too closely they might find unsavory working conditions.

So Gray decided to find out for herself. Who are the people, often invisible, who pick up the tasks necessary for these platforms to run? Why do they do this work, and why do they leave? What are their working conditions?

The interview that follows is interesting although it doesn’t seem to me that the question about working conditions is answered in any great detail. However, there is this rather interesting policy suggestion,

If companies want to happily use contract work because they need to constantly churn through new ideas and new aptitudes, the only way to make that a good thing for both sides of that enterprise is for people to be able to jump into that pool. And people do that when they have health care and other provisions. This is the business case for universal health care, for universal education as a public good. It’s going to benefit all enterprise.

I want to get across to people that, in a lot of ways, we’re describing work conditions. We’re not describing a particular type of work. We’re describing today’s conditions for project-based task-driven work. This can happen to everybody’s jobs, and I hate that that might be the motivation because we should have cared all along, as this has been happening to plenty of people. For me, the message of this book is: let’s make this not just manageable, but sustainable and enjoyable. Stop making our lives wrap around work, and start making work serve our lives.

Puts a different spin on AI and work, doesn’t it?

AI (artificial intelligence) and a hummingbird robot

Every once in a while I stumble across a hummingbird robot story (my August 12, 2011 posting and my August 1, 2014 posting). Here’s what the hummingbird robot looks like now (hint: there’s a significant reduction in size),

Caption: Purdue University researchers are building robotic hummingbirds that learn from computer simulations how to fly like a real hummingbird does. The robot is encased in a decorative shell. Credit: Purdue University photo/Jared Pike

I think this is the first time I’ve seen one of these projects not being funded by the military, which explains why the researchers are more interested in using these hummingbird robots for observing wildlife and for rescue efforts in emergency situations. Still, they do acknowledge these robots could also be used in covert operations.

From a May 9, 2019 news item on ScienceDaily,

What can fly like a bird and hover like an insect?

Your friendly neighborhood hummingbirds. If drones had this combo, they would be able to maneuver better through collapsed buildings and other cluttered spaces to find trapped victims.

Purdue University researchers have engineered flying robots that behave like hummingbirds, trained by machine learning algorithms based on various techniques the bird uses naturally every day.

This means that after learning from a simulation, the robot “knows” how to move around on its own like a hummingbird would, such as discerning when to perform an escape maneuver.

Artificial intelligence, combined with flexible flapping wings, also allows the robot to teach itself new tricks. Even though the robot can’t see yet, for example, it senses by touching surfaces. Each touch alters an electrical current, which the researchers realized they could track.

“The robot can essentially create a map without seeing its surroundings. This could be helpful in a situation when the robot might be searching for victims in a dark place — and it means one less sensor to add when we do give the robot the ability to see,” said Xinyan Deng, an associate professor of mechanical engineering at Purdue.
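As a rough illustration of that idea, here is a hedged Python sketch, not the Purdue group’s software: it simply flags moments when the wing motors draw noticeably more current than a hover baseline and records the robot’s estimated position at those moments as probable contact points. The class names, baseline, and threshold are all invented for the example.

# Toy touch-mapping sketch: treat a spike in motor current as a likely contact
# and remember where the robot was when it happened.
from dataclasses import dataclass

@dataclass
class Sample:
    x: float          # estimated position (metres)
    y: float
    current: float    # motor current draw (amperes)

def touch_map(samples, baseline=0.30, threshold=0.08):
    """Return (x, y) points where current rose well above the hover baseline."""
    contacts = []
    for s in samples:
        if s.current - baseline > threshold:   # load spike => likely contact
            contacts.append((s.x, s.y))
    return contacts

# Usage with made-up telemetry:
log = [Sample(0.0, 0.0, 0.31), Sample(0.1, 0.0, 0.45), Sample(0.2, 0.0, 0.29)]
print(touch_map(log))   # -> [(0.1, 0.0)]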

The researchers even have a video,

A May 9, 2019 Purdue University news release (also on EurekAlert), which originated the news item, provides more detail,


The researchers [presented] their work on May 20 at the 2019 IEEE International Conference on Robotics and Automation in Montreal. A YouTube video is available at https://www.youtube.com/watch?v=hl892dHqfA&feature=youtu.be. [it’s the video I’ve embedded above]

Drones can’t be made infinitely smaller, due to the way conventional aerodynamics work. They wouldn’t be able to generate enough lift to support their weight.

But hummingbirds don’t use conventional aerodynamics – and their wings are resilient. “The physics is simply different; the aerodynamics is inherently unsteady, with high angles of attack and high lift. This makes it possible for smaller, flying animals to exist, and also possible for us to scale down flapping wing robots,” Deng said.

Researchers have been trying for years to decode hummingbird flight so that robots can fly where larger aircraft can’t. In 2011, the company AeroVironment, commissioned by DARPA, an agency within the U.S. Department of Defense, built a robotic hummingbird that was heavier than a real one but not as fast, with helicopter-like flight controls and limited maneuverability. It required a human to be behind a remote control at all times.

Deng’s group and her collaborators studied hummingbirds themselves for multiple summers in Montana. They documented key hummingbird maneuvers, such as making a rapid 180-degree turn, and translated them to computer algorithms that the robot could learn from when hooked up to a simulation.

Further study on the physics of insects and hummingbirds allowed Purdue researchers to build robots smaller than hummingbirds – and even as small as insects – without compromising the way they fly. The smaller the size, the greater the wing flapping frequency, and the more efficiently they fly, Deng says.

The robots have 3D-printed bodies, wings made of carbon fiber and laser-cut membranes. The researchers have built one hummingbird robot weighing 12 grams – the weight of the average adult Magnificent Hummingbird – and another insect-sized robot weighing 1 gram. The hummingbird robot can lift more than its own weight, up to 27 grams.

Designing their robots with higher lift gives the researchers more wiggle room to eventually add a battery and sensing technology, such as a camera or GPS. Currently, the robot needs to be tethered to an energy source while it flies – but that won’t be for much longer, the researchers say.

The robots could fly silently just as a real hummingbird does, making them more ideal for covert operations. And they stay steady through turbulence, which the researchers demonstrated by testing the dynamically scaled wings in an oil tank.

The robot requires only two motors and can control each wing independently of the other, which is how flying animals perform highly agile maneuvers in nature.

“An actual hummingbird has multiple groups of muscles to do power and steering strokes, but a robot should be as light as possible, so that you have maximum performance on minimal weight,” Deng said.

Robotic hummingbirds wouldn’t only help with search-and-rescue missions, but also allow biologists to more reliably study hummingbirds in their natural environment through the senses of a realistic robot.

“We learned from biology to build the robot, and now biological discoveries can happen with extra help from robots,” Deng said.
Simulations of the technology are available open-source at https://github.com/purdue-biorobotics/flappy.

Early stages of the work, including the Montana hummingbird experiments in collaboration with Bret Tobalske’s group at the University of Montana, were financially supported by the National Science Foundation.

The researchers have three papers on arxiv.org for open access peer review,

Learning Extreme Hummingbird Maneuvers on Flapping Wing Robots
Fan Fei, Zhan Tu, Jian Zhang, and Xinyan Deng
Purdue University, West Lafayette, IN, USA
https://arxiv.org/abs/1902.0962

Biological studies show that hummingbirds can perform extreme aerobatic maneuvers during fast escape. Given a sudden looming visual stimulus at hover, a hummingbird initiates a fast backward translation coupled with a 180-degree yaw turn, which is followed by instant posture stabilization in just under 10 wingbeats. Considering the wingbeat frequency of 40Hz, this aggressive maneuver is carried out in just 0.2 seconds. Inspired by the hummingbirds’ near-maximal performance during such extreme maneuvers, we developed a flight control strategy and experimentally demonstrated that such maneuverability can be achieved by an at-scale 12-gram hummingbird robot equipped with just two actuators. The proposed hybrid control policy combines model-based nonlinear control with model-free reinforcement learning. We use model-based nonlinear control for nominal flight control, as the dynamic model is relatively accurate for these conditions. However, during extreme maneuver, the modeling error becomes unmanageable. A model-free reinforcement learning policy trained in simulation was optimized to ‘destabilize’ the system and maximize the performance during maneuvering. The hybrid policy manifests a maneuver that is close to that observed in hummingbirds. Direct simulation-to-real transfer is achieved, demonstrating the hummingbird-like fast evasive maneuvers on the at-scale hummingbird robot.

Acting is Seeing: Navigating Tight Space Using Flapping Wings
Zhan Tu, Fan Fei, Jian Zhang, and Xinyan Deng
Purdue University, West Lafayette, IN, USA
https://arxiv.org/abs/1902.0868

Wings of flying animals can not only generate lift and control torques but also can sense their surroundings. Such dual functions of sensing and actuation coupled in one element are particularly useful for small sized bio-inspired robotic flyers, whose weight, size, and power are under stringent constraint. In this work, we present the first flapping-wing robot using its flapping wings for environmental perception and navigation in tight space, without the need for any visual feedback. As the test platform, we introduce the Purdue Hummingbird, a flapping-wing robot with 17cm wingspan and 12 grams weight, with a pair of 30-40Hz flapping wings driven by only two actuators. By interpreting the wing loading feedback and its variations, the vehicle can detect the presence of environmental changes such as grounds, walls, stairs, obstacles and wind gust. The instantaneous wing loading can be obtained through the measurements and interpretation of the current feedback by the motors that actuate the wings. The effectiveness of the proposed approach is experimentally demonstrated on several challenging flight tasks without vision: terrain following, wall following and going through a narrow corridor. To ensure flight stability, a robust controller was designed for handling unforeseen disturbances during the flight. Sensing and navigating one’s environment through actuator loading is a promising method for mobile robots, and it can serve as an alternative or complementary method to visual perception.

Flappy Hummingbird: An Open Source Dynamic Simulation of Flapping Wing Robots and Animals
Fan Fei, Zhan Tu, Yilun Yang, Jian Zhang, and Xinyan Deng
Purdue University, West Lafayette, IN, USA
https://arxiv.org/abs/1902.0962

Insects and hummingbirds exhibit extraordinary flight capabilities and can simultaneously master seemingly conflicting goals: stable hovering and aggressive maneuvering, unmatched by small scale man-made vehicles. Flapping Wing Micro Air Vehicles (FWMAVs) hold great promise for closing this performance gap. However, design and control of such systems remain challenging due to various constraints. Here, we present an open source high fidelity dynamic simulation for FWMAVs to serve as a testbed for the design, optimization and flight control of FWMAVs. For simulation validation, we recreated the hummingbird-scale robot developed in our lab in the simulation. System identification was performed to obtain the model parameters. The force generation, open- loop and closed-loop dynamic response between simulated and experimental flights were compared and validated. The unsteady aerodynamics and the highly nonlinear flight dynamics present challenging control problems for conventional and learning control algorithms such as Reinforcement Learning. The interface of the simulation is fully compatible with OpenAI Gym environment. As a benchmark study, we present a linear controller for hovering stabilization and a Deep Reinforcement Learning control policy for goal-directed maneuvering. Finally, we demonstrate direct simulation-to-real transfer of both control policies onto the physical robot, further demonstrating the fidelity of the simulation.
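Since the last paper notes that the simulator’s interface is compatible with the OpenAI Gym environment, driving it should look roughly like the sketch below. This is an assumption-laden example, not documented usage: the environment id is hypothetical, and the classic Gym reset/step API is assumed; check the purdue-biorobotics/flappy README for the actual names.

# Hedged sketch of driving a Gym-compatible flapping-wing simulator.
import gym

env = gym.make("FlappyHummingbird-v0")   # hypothetical environment id
obs = env.reset()
episode_return = 0.0
for _ in range(1000):
    action = env.action_space.sample()   # random wing commands as a placeholder policy
    obs, reward, done, info = env.step(action)
    episode_return += reward
    if done:
        obs = env.reset()
print("return under a random policy:", episode_return)
env.close()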

Enjoy!

Electronics begone! Enter: the light-based brainlike computing chip

It’s possible I’m wrong, but I think this is the first light-based, rather than electronic, ‘memristor’-type device (also called a neuromorphic chip) that I’ve featured on this blog. Strictly speaking, it isn’t a memristor, but it has similar properties, so it qualifies as a neuromorphic chip.

Caption: The optical microchips that the researchers are working on developing are about the size of a one-cent piece. Credit: WWU Muenster – Peter Leßmann

A May 8, 2019 news item on Nanowerk announces this new approach to neuromorphic hardware (Note: A link has been removed),

Researchers from the Universities of Münster (Germany), Oxford and Exeter (both UK) have succeeded in developing a piece of hardware which could pave the way for creating computers which resemble the human brain.

The scientists produced a chip containing a network of artificial neurons that works with light and can imitate the behaviour of neurons and their synapses. The network is able to “learn” information and use this as a basis for computing and recognizing patterns. As the system functions solely with light and not with electrons, it can process data many times faster than traditional systems. …

A May 8, 2019 University of Münster press release (also on EurekAlert), which originated the news item, reveals the full story,

A technology that functions like a brain? In these times of artificial intelligence, this no longer seems so far-fetched – for example, when a mobile phone can recognise faces or languages. With more complex applications, however, computers still quickly come up against their own limitations. One of the reasons for this is that a computer traditionally has separate memory and processor units – the consequence of which is that all data have to be sent back and forth between the two. In this respect, the human brain is way ahead of even the most modern computers because it processes and stores information in the same place – in the synapses, or connections between neurons, of which there are a million-billion in the brain. An international team of researchers from the Universities of Münster (Germany), Oxford and Exeter (both UK) have now succeeded in developing a piece of hardware which could pave the way for creating computers which resemble the human brain. The scientists managed to produce a chip containing a network of artificial neurons that works with light and can imitate the behaviour of neurons and their synapses.

The researchers were able to demonstrate that such an optical neurosynaptic network is able to “learn” information and use this as a basis for computing and recognizing patterns – just as a brain can. As the system functions solely with light and not with traditional electrons, it can process data many times faster. “This integrated photonic system is an experimental milestone,” says Prof. Wolfram Pernice from Münster University and lead partner in the study. “The approach could be used later in many different fields for evaluating patterns in large quantities of data, for example in medical diagnoses.” The study is published in the latest issue of the “Nature” journal.

The story in detail – background and method used

Most of the existing approaches relating to so-called neuromorphic networks are based on electronics, whereas optical systems – in which photons, i.e. light particles, are used – are still in their infancy. The principle which the German and British scientists have now presented works as follows: optical waveguides that can transmit light and can be fabricated into optical microchips are integrated with so-called phase-change materials – which are already found today on storage media such as re-writable DVDs. These phase-change materials are characterised by the fact that they change their optical properties dramatically, depending on whether they are crystalline – when their atoms arrange themselves in a regular fashion – or amorphous – when their atoms organise themselves in an irregular fashion. This phase-change can be triggered by light if a laser heats the material up. “Because the material reacts so strongly, and changes its properties dramatically, it is highly suitable for imitating synapses and the transfer of impulses between two neurons,” says lead author Johannes Feldmann, who carried out many of the experiments as part of his PhD thesis at the Münster University.

In their study, the scientists succeeded for the first time in merging many nanostructured phase-change materials into one neurosynaptic network. The researchers developed a chip with four artificial neurons and a total of 60 synapses. The structure of the chip – consisting of different layers – was based on the so-called wavelength division multiplex technology, which is a process in which light is transmitted on different channels within the optical nanocircuit.

In order to test the extent to which the system is able to recognise patterns, the researchers “fed” it with information in the form of light pulses, using two different algorithms of machine learning. In this process, an artificial system “learns” from examples and can, ultimately, generalise them. In the case of the two algorithms used – both in so-called supervised and in unsupervised learning – the artificial network was ultimately able, on the basis of given light patterns, to recognise a pattern being sought – one of which was four consecutive letters.
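To make the synapse idea concrete, here is a toy numerical sketch, not the device physics from the paper: it assumes a phase-change cell whose crystalline fraction sets how much light it passes (the synaptic weight), with “write” pulses nudging that fraction, and a neuron that fires when the summed optical power crosses a threshold. All numbers are invented.

# Toy model of a phase-change photonic synapse and a thresholding neuron.
import numpy as np

class PCMSynapse:
    def __init__(self, crystal_fraction=0.5):
        self.c = crystal_fraction            # 0 = amorphous, 1 = crystalline

    def transmit(self, power_in):
        # assume transmission drops roughly linearly with crystallinity
        return power_in * (1.0 - 0.8 * self.c)

    def write(self, delta):
        # a heating pulse crystallizes (+) or re-amorphizes (-) the cell
        self.c = float(np.clip(self.c + delta, 0.0, 1.0))

class PhotonicNeuron:
    def __init__(self, synapses, threshold):
        self.synapses, self.threshold = synapses, threshold

    def forward(self, input_powers):
        total = sum(s.transmit(p) for s, p in zip(self.synapses, input_powers))
        return 1.0 if total >= self.threshold else 0.0   # all-or-nothing spike

synapses = [PCMSynapse(0.2), PCMSynapse(0.9), PCMSynapse(0.5)]
neuron = PhotonicNeuron(synapses, threshold=1.5)
print(neuron.forward([0.8, 0.8, 0.8]))   # -> 0.0, summed light is below threshold
synapses[1].write(-0.4)                  # "training" pulse lets more light through
print(neuron.forward([0.8, 0.8, 0.8]))   # -> 1.0, the neuron now fires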

“Our system has enabled us to take an important step towards creating computer hardware which behaves similarly to neurons and synapses in the brain and which is also able to work on real-world tasks,” says Wolfram Pernice. “By working with photons instead of electrons we can exploit to the full the known potential of optical technologies – not only in order to transfer data, as has been the case so far, but also in order to process and store them in one place,” adds co-author Prof. Harish Bhaskaran from the University of Oxford.

A very specific example is that with the aid of such hardware cancer cells could be identified automatically. Further work will need to be done, however, before such applications become reality. The researchers need to increase the number of artificial neurons and synapses and increase the depth of neural networks. This can be done, for example, with optical chips manufactured using silicon technology. “This step is to be taken in the EU joint project ‘Fun-COMP’ by using foundry processing for the production of nanochips,” says co-author and leader of the Fun-COMP project, Prof. C. David Wright from the University of Exeter.

Here’s a link to and a citation for the paper,

All-optical spiking neurosynaptic networks with self-learning capabilities by J. Feldmann, N. Youngblood, C. D. Wright, H. Bhaskaran & W. H. P. Pernice. Nature volume 569, pages 208–214 (2019) DOI: https://doi.org/10.1038/s41586-019-1157-8 Issue Date: 09 May 2019

This paper is behind a paywall.

For the curious, I found a little more information about Fun-COMP (functionally-scaled computer technology). It’s a European Commission (EC) Horizon 2020 project coordinated through the University of Exeter. For details such as the total cost, the EC contribution, the list of partners, and more, there is the Fun-COMP webpage on fabiodisconzi.com.

Automated science writing?

It seems that automated science writing is not ready—yet. Still, an April 18, 2019 news item on ScienceDaily suggests that progress is being made,

The work of a science writer, including this one, includes reading journal papers filled with specialized technical terminology, and figuring out how to explain their contents in language that readers without a scientific background can understand.

Now, a team of scientists at MIT [Massachusetts Institute of Technology] and elsewhere has developed a neural network, a form of artificial intelligence (AI), that can do much the same thing, at least to a limited extent: It can read scientific papers and render a plain-English summary in a sentence or two.

An April 17, 2019 MIT news release, which originated the news item, delves into the research and its implications,

Even in this limited form, such a neural network could be useful for helping editors, writers, and scientists [emphasis mine] scan a large number of papers to get a preliminary sense of what they’re about. But the approach the team developed could also find applications in a variety of other areas besides language processing, including machine translation and speech recognition.

The work is described in the journal Transactions of the Association for Computational Linguistics, in a paper by Rumen Dangovski and Li Jing, both MIT graduate students; Marin Soljačić, a professor of physics at MIT; Preslav Nakov, a principal scientist at the Qatar Computing Research Institute, HBKU; and Mićo Tatalović, a former Knight Science Journalism fellow at MIT and a former editor at New Scientist magazine.

From AI for physics to natural language

The work came about as a result of an unrelated project, which involved developing new artificial intelligence approaches based on neural networks, aimed at tackling certain thorny problems in physics. However, the researchers soon realized that the same approach could be used to address other difficult computational problems, including natural language processing, in ways that might outperform existing neural network systems.

“We have been doing various kinds of work in AI for a few years now,” Soljačić says. “We use AI to help with our research, basically to do physics better. And as we got to be  more familiar with AI, we would notice that every once in a while there is an opportunity to add to the field of AI because of something that we know from physics — a certain mathematical construct or a certain law in physics. We noticed that hey, if we use that, it could actually help with this or that particular AI algorithm.”

This approach could be useful in a variety of specific kinds of tasks, he says, but not all. “We can’t say this is useful for all of AI, but there are instances where we can use an insight from physics to improve on a given AI algorithm.”

Neural networks in general are an attempt to mimic the way humans learn certain new things: The computer examines many different examples and “learns” what the key underlying patterns are. Such systems are widely used for pattern recognition, such as learning to identify objects depicted in photos.

But neural networks in general have difficulty correlating information from a long string of data, such as is required in interpreting a research paper. Various tricks have been used to improve this capability, including techniques known as long short-term memory (LSTM) and gated recurrent units (GRU), but these still fall well short of what’s needed for real natural-language processing, the researchers say.

The team came up with an alternative system, which instead of being based on the multiplication of matrices, as most conventional neural networks are, is based on vectors rotating in a multidimensional space. The key concept is something they call a rotational unit of memory (RUM).

Essentially, the system represents each word in the text by a vector in multidimensional space — a line of a certain length pointing in a particular direction. Each subsequent word swings this vector in some direction, represented in a theoretical space that can ultimately have thousands of dimensions. At the end of the process, the final vector or set of vectors is translated back into its corresponding string of words.
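For the curious, here is a toy Python sketch of the rotation idea, not the published RUM model (which learns its rotations end-to-end): a unit-length “memory” vector is nudged toward each incoming word vector by a small rotation in the plane the two vectors span, and because rotations preserve length, the memory neither blows up nor fades away, which is the property that matters for long sequences.

# Toy rotational memory: rotate the memory toward each word vector.
import numpy as np

rng = np.random.default_rng(0)

def unit(v):
    return v / np.linalg.norm(v)

def rotate_toward(h, x, theta=0.3):
    """Rotate memory h toward input x by angle theta (radians)."""
    u1 = unit(h)
    x_perp = x - (x @ u1) * u1            # part of x orthogonal to the memory
    if np.linalg.norm(x_perp) < 1e-9:     # already aligned: nothing to rotate
        return h
    u2 = unit(x_perp)
    a, b = h @ u1, h @ u2                 # h lies along u1, so b is 0
    a_new = a * np.cos(theta) - b * np.sin(theta)
    b_new = a * np.sin(theta) + b * np.cos(theta)
    return h + (a_new - a) * u1 + (b_new - b) * u2

dim = 16
words = "the cat sat on the mat".split()
embeddings = {w: unit(rng.standard_normal(dim)) for w in words}
memory = unit(rng.standard_normal(dim))
for word in words:
    memory = rotate_toward(memory, embeddings[word])
print(np.linalg.norm(memory))   # stays ~1.0: rotations preserve the norm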

“RUM helps neural networks to do two things very well,” Nakov says. “It helps them to remember better, and it enables them to recall information more accurately.”

After developing the RUM system to help with certain tough physics problems such as the behavior of light in complex engineered materials, “we realized one of the places where we thought this approach could be useful would be natural language processing,” says Soljačić,  recalling a conversation with Tatalović, who noted that such a tool would be useful for his work as an editor trying to decide which papers to write about. Tatalović was at the time exploring AI in science journalism as his Knight fellowship project.

“And so we tried a few natural language processing tasks on it,” Soljačić says. “One that we tried was summarizing articles, and that seems to be working quite well.”

The proof is in the reading

As an example, they fed the same research paper through a conventional LSTM-based neural network and through their RUM-based system. The resulting summaries were dramatically different.

The LSTM system yielded this highly repetitive and fairly technical summary: “Baylisascariasis,” kills mice, has endangered the allegheny woodrat and has caused disease like blindness or severe consequences. This infection, termed “baylisascariasis,” kills mice, has endangered the allegheny woodrat and has caused disease like blindness or severe consequences. This infection, termed “baylisascariasis,” kills mice, has endangered the allegheny woodrat.

Based on the same paper, the RUM system produced a much more readable summary, and one that did not include the needless repetition of phrases: Urban raccoons may infect people more than previously assumed. 7 percent of surveyed individuals tested positive for raccoon roundworm antibodies. Over 90 percent of raccoons in Santa Barbara play host to this parasite.

Already, the RUM-based system has been expanded so it can “read” through entire research papers, not just the abstracts, to produce a summary of their contents. The researchers have even tried using the system on their own research paper describing these findings — the paper that this news story is attempting to summarize.

Here is the new neural network’s summary: Researchers have developed a new representation process on the rotational unit of RUM, a recurrent memory that can be used to solve a broad spectrum of the neural revolution in natural language processing.

It may not be elegant prose, but it does at least hit the key points of information.

Çağlar Gülçehre, a research scientist at the British AI company Deepmind Technologies, who was not involved in this work, says this research tackles an important problem in neural networks, having to do with relating pieces of information that are widely separated in time or space. “This problem has been a very fundamental issue in AI due to the necessity to do reasoning over long time-delays in sequence-prediction tasks,” he says. “Although I do not think this paper completely solves this problem, it shows promising results on the long-term dependency tasks such as question-answering, text summarization, and associative recall.”

Gülçehre adds, “Since the experiments conducted and model proposed in this paper are released as open-source on Github, as a result many researchers will be interested in trying it on their own tasks. … To be more specific, potentially the approach proposed in this paper can have very high impact on the fields of natural language processing and reinforcement learning, where the long-term dependencies are very crucial.”

The research received support from the Army Research Office, the National Science Foundation, the MIT-SenseTime Alliance on Artificial Intelligence, and the Semiconductor Research Corporation. The team also had help from the Science Daily website, whose articles were used in training some of the AI models in this research.

As usual, this ‘automated writing system’ is framed as a ‘helper’, not a usurper of anyone’s job. However, its potential for changing the nature of the work is there. About five years ago, I featured another ‘automated writing’ story in a July 16, 2014 posting titled: ‘Writing and AI or is a robot writing this blog?’ You may have been reading ‘automated’ news stories for years. At the time, the focus was on sports and business.

Getting back to 2019 and science writing, here’s a link to and a citation for the paper,

Rotational Unit of Memory: A Novel Representation Unit for RNNs with Scalable Applications by Rumen Dangovski, Li Jing, Preslav Nakov, Mićo Tatalović and Marin Soljačić. Transactions of the Association for Computational Linguistics, Volume 7, 2019, pp. 121-138. DOI: https://doi.org/10.1162/tacl_a_00258 Posted online in 2019.

© 2019 Association for Computational Linguistics. Distributed under a CC-BY 4.0 license.

This paper is open access.

September 2019’s science’ish’ events in Toronto and Vancouver (Canada)

There are movies, plays, and a multimedia installation experience, all in Vancouver, plus the ‘CHAOSMOSIS mAchInes exhibition/performance/discussion/panel/in-situ experiments/art/ science/ techne/ philosophy’ event in Toronto. But first, there’s a Vancouver talk about engaging scientists in the upcoming federal election.

Science in the Age of Misinformation (and the upcoming federal election) in Vancouver

Dr. Katie Gibbs, co-founder and executive director of Evidence for Democracy, will be giving a talk today (Sept. 4, 2019) at the University of British Columbia (UBC; Vancouver). From the Eventbrite webpage for Science in the Age of Misinformation,

Science in the Age of Misinformation, with Katie Gibbs, Evidence for Democracy
In the lead up to the federal election, it is more important than ever to understand the role that researchers play in shaping policy. Join us in this special Policy in Practice event with Dr. Katie Gibbs, Executive Director of Evidence for Democracy, Canada’s leading, national, non-partisan, and not-for-profit organization promoting science and the transparent use of evidence in government decision making. A Musqueam land acknowledgement, welcome remarks and moderation of this event will be provided by MPPGA students Joshua Tafel, and Chengkun Lv.

Wednesday, September 4, 2019
12:30 pm – 1:50 pm (Doors will open at noon)
Liu Institute for Global Issues – xʷθəθiqətəm (Place of Many Trees), 1st floor
Pizza will be provided starting at noon on first come, first serve basis. Please RSVP.

What role do researchers play in a political environment that is increasingly polarized and influenced by misinformation? Dr. Katie Gibbs, Executive Director of Evidence for Democracy, will give an overview of the current state of science integrity and science policy in Canada highlighting progress made over the past four years and what this means in a context of growing anti-expert movements in Canada and around the world. Dr. Gibbs will share concrete ways for researchers to engage heading into a critical federal election [emphasis mine], and how they can have lasting policy impact.

Bio: Katie Gibbs is a scientist, organizer and advocate for science and evidence-based policies. While completing her Ph.D. at the University of Ottawa in Biology, she was one of the lead organizers of the ‘Death of Evidence’—one of the largest science rallies in Canadian history. Katie co-founded Evidence for Democracy, Canada’s leading, national, non-partisan, and not-for-profit organization promoting science and the transparent use of evidence in government decision making. Her ongoing success in advocating for the restoration of public science in Canada has made Katie a go-to resource for national and international media outlets including Science, The Guardian and the Globe and Mail.

Katie has also been involved in international efforts to increase evidence-based decision-making and advises science integrity movements in other countries and is a member of the Open Government Partnership Multi-stakeholder Forum.

Disclaimer: Please note that by registering via Eventbrite, your information will be stored on the Eventbrite server, which is located outside Canada. If you do not wish to use this service, please email Joelle.Lee@ubc.ca directly to register. Thank you.

Location
Liu Institute for Global Issues – Place of Many Trees
6476 NW Marine Drive
Vancouver, British Columbia V6T 1Z2

Sadly, I was not able to post the information about Dr. Gibbs’s more informal talk last night (Sept. 3, 2019), a special event with Café Scientifique, but I do have a link to Vote Science, a website encouraging anyone who wants to help get science on the 2019 federal election agenda. P.S. I’m sorry I wasn’t able to post this in a more timely fashion.

Transmissions; a multimedia installation in Vancouver, September 6 -28, 2019

Here’s a description for the multimedia installation, Transmissions, in the August 28, 2019 Georgia Straight article by Janet Smith,

Lisa Jackson is a filmmaker, but she’s never allowed that job description to limit what she creates or where and how she screens her works.

The Anishinaabe artist’s breakout piece was last year’s haunting virtual-reality animation Biidaaban: First Light. In its eerie world, one that won a Canadian Screen Award, nature has overtaken a near-empty, future Toronto, with trees growing through cracks in the sidewalks, vines enveloping skyscrapers, and people commuting by canoe.

All that and more has brought her here, to Transmissions, a 6,000-square-foot, immersive film installation that invites visitors to wander through windy coastal forests, by hauntingly empty glass towers, into soundscapes of ancient languages, and more.

Through the labyrinthine multimedia work at SFU [Simon Fraser University] Woodward’s, Jackson asks big questions—about Earth’s future, about humanity’s relationship to it, and about time and Indigeneity.

Simultaneously, she mashes up not just disciplines like film and sculpture, but concepts of science, storytelling, and linguistics [emphasis mine].

“The tag lines I’m working with now are ‘the roots of meaning’ and ‘knitting the world together’,” she explains. “In western society, we tend to hive things off into ‘That’s culture. That’s science.’ But from an Indigenous point of view, it’s all connected.”

Transmissions is split into three parts, with what Jackson describes as a beginning, a middle, and an end. Like Biidaaban, it’s also visually stunning: the artist admits she’s playing with Hollywood spectacle.

Without giving too much away—a big part of the appeal of Jackson’s work is the sense of surprise—Vancouver audiences will first enter a 48-foot-long, six-foot-wide tunnel, surrounded by projections that morph from empty urban streets to a forest and a river. Further engulfing them is a soundscape that features strong winds, while black mirrors along the floor skew perspective and play with what’s above and below ground.

“You feel out of time and space,” says Jackson, who wants to challenge western society’s linear notions of minutes and hours. “I want the audience to have a physical response and an emotional response. To me, that gets closer to the Indigenous understanding. Because the Eurocentric way is more rational, where the intellectual is put ahead of everything else.”

Viewers then enter a room, where the highly collaborative Jackson has worked with artist Alan Storey, who’s helped create Plexiglas towers that look like the ghost high-rises of an abandoned city. (Storey has also designed other components of the installation.) As audience members wander through them on foot, projections make their shadows dance on the structures. Like Biidaaban, the section hints at a postapocalyptic or posthuman world. Jackson operates in an emerging realm of Indigenous futurism.

The words “science, storytelling, and linguistics” were emphasized because of a minor problem I have with terminology. Linguistics is defined as the scientific study of language, combining elements from the natural sciences, social sciences, and the humanities. I wish either Jackson or Smith had discussed the scientific element of Transmissions at more length, perhaps reconnecting linguistics to science along with the physics of time and space, as well as storytelling, film, and sculpture. That would have been helpful since it’s my understanding that Transmissions is designed to showcase all of those connections and more in ways that may not be obvious to everyone. On the plus side, perhaps the tour that is part of this installation experience includes that information.

I have a bit more detail (including logistics for the tours) from the SFU Events webpage for Transmissions,

Transmissions
September 6 – September 28, 2019

The Roots of Meaning
World Premiere
September 6 – 28, 2019

Fei & Milton Wong Experimental Theatre
SFU Woodward’s, 149 West Hastings
Tuesday to Friday, 1pm to 7pm
Saturday and Sunday, 1pm to 5pm
FREE

In partnership with SFU Woodward’s Cultural Programs and produced by Electric Company Theatre and Violator Films.

TRANSMISSIONS is a three-part, 6000 square foot multimedia installation by award-winning Anishinaabe filmmaker and artist Lisa Jackson. It extends her investigation into the connections between land, language, and people, most recently with her virtual reality work Biidaaban: First Light.

Projections, sculpture, and film combine to create urban and natural landscapes that are eerie and beautiful, familiar and foreign, concrete and magical. Past and future collide in a visceral and thought-provoking journey that questions our current moment and opens up the complexity of thought systems embedded in Indigenous languages. Radically different from European languages, they embody sets of relationships to the land, to each other, and to time itself.

Transmissions invites us to untether from our day-to-day world and imagine a possible future. It provides a platform to activate and cross-pollinate knowledge systems, from science to storytelling, ecology to linguistics, art to commerce. To begin conversations, to listen deeply, to engage varied perspectives and expertise, to knit the world together and find our place within the circle of all our relations.

Produced in association with McMaster University Socrates Project, Moving Images Distribution and Cobalt Connects Creativity.

….

Admission:  Free Public Tours
Tuesday through Sunday
Reservations accepted from 1pm to 3pm.  Reservations are booked in 15 minute increments.  Individuals and groups up to 10 welcome.
Please email: sfuw@sfu.ca for more information or to book groups of 10 or more.

Her Story: Canadian Women Scientists (short film subjects); Sept. 13 – 14, 2019

Curiosity Collider, producer of art/science events in Vancouver, is presenting a film series featuring Canadian women scientists, according to an August 27, 2019 press release (received via email),

“Her Story: Canadian Women Scientists,” a film series dedicated to sharing the stories of Canadian women scientists, will premiere on September 13th and 14th at the Annex theatre. Four pairs of local filmmakers and Canadian women scientists collaborated to create 5-6 minute videos; for each film in the series, a scientist tells her own story, interwoven with the story of an inspiring Canadian women scientist who came before her in her field of study.

Produced by Vancouver-based non-profit organization Curiosity Collider, this project was developed to address the lack of storytelling videos showcasing remarkable women scientists and their work available via popular online platforms. “Her Story reveals the lives of women working in science,” said Larissa Blokhuis, curator for Her Story. “This project acts as a beacon to girls and women who want to see themselves in the scientific community. The intergenerational nature of the project highlights the fact that women have always worked in and contributed to science.”

This sentiment was reflected by Samantha Baglot as well, a PhD student in neuroscience who collaborated with filmmaker/science cartoonist Armin Mortazavi in Her Story. “It is empowering to share stories of previous Canadian female scientists… it is empowering for myself as a current female scientist to learn about other stories of success, and gain perspective of how these women fought through various hardships and inequality.”

When asked why seeing better representation of women in scientific work is important, artist/filmmaker Michael Markowsky shared his thoughts. “It’s important for women — and their male allies — to question and push back against these perceived social norms, and to occupy space which rightfully belongs to them.” In fact, his wife just gave birth to their first child, a daughter; “It’s personally very important to me that she has strong female role models to look up to.” His film will feature collaborating scientist Jade Shiller, and Kathleen Conlan – who was named one of Canada’s greatest explorers by Canadian Geographic in 2015.

Other participating filmmakers and collaborating scientists include: Leslie Kennah (Filmmaker), Kimberly Girling (scientist, Research and Policy Director at Evidence for Democracy), Lucas Kavanagh and Jesse Lupini (Filmmakers, Avocado Video), and Jessica Pilarczyk (SFU Assistant Professor, Department of Earth Sciences).

This film series is supported by Westcoast Women in Engineering, Science and Technology (WWEST) and Eng.Cite. The venue for the events is provided by Vancouver Civic Theatres.

Event Information

Screening events will be hosted at Annex (823 Seymour St, Vancouver) on September 13th and 14th [2019]. Events will also include a talkback with filmmakers and collab scientists on the 13th, and a panel discussion on representations of women in science and culture on the 14th. Visit http://bit.ly/HerStoryTickets2019 for tickets ($14.99-19.99) and http://bit.ly/HerStoryWomenScientists for project information.

I have a film collage,

Courtesy: Curiosity Collider

It looks like they’re presenting films with a diversity of styles. You can find out more about Curiosity Collider and its various programmes and events here.

Vancouver Fringe Festival September 5 – 16, 2019

I found two plays in this year’s fringe festival programme that feature science in one way or another. Not having seen either play I make no guarantees as to content. First up is,

AI Love You
Exit Productions
London, UK
Playwright: Melanie Anne Ball
exitproductionsltd.com

Adam and April are a regular 20-something couple, very nearly blissfully generic, aside from one important detail: one of the pair is an “artificially intelligent companion.” Their joyful veneer has begun to crack and they need YOU to decide the future of their relationship. Is the freedom of a robot or the will of a human more important?
For AI Love You: 

***** “Magnificent, complex and beautifully addictive.” —Spy in the Stalls 
**** “Emotionally charged, deeply moving piece … I was left with goosebumps.” —West End Wilma 
**** —London City Nights 
Past shows: 
***** “The perfect show.” —Theatre Box

Intellectual / Intimate / Shocking / 14+ / 75 minutes

The first show is on Friday, September 6, 2019 at 5 pm. There are another five showings being presented. You can get tickets and more information here.

The second play is this,

Red Glimmer
Dusty Foot Productions
Vancouver, Canada
Written & Directed by Patricia Trinh

Abstract Sci-Fi dramedy. An interdimensional science experiment! Woman involuntarily takes an all inclusive internal trip after falling into a deep depression. A scientist is hired to navigate her neurological pathways from inside her mind – tackling the fact that humans cannot physically re-experience somatosensory sensation, like pain. What if that were the case for traumatic emotional pain? A creepy little girl is heard running by. What happens next?

Weird / Poetic / Intellectual / LGBTQ+ / Multicultural / 14+ / Sexual Content / 50 minutes

This show is created by an underrepresented Artist.
Written, directed, and produced by local theatre Artist Patricia Trinh, a Queer, Asian-Canadian female.

The first showing is tonight, September 5, 2019 at 8:30 pm. There are another six showings being presented. You can get tickets and more information here.

CHAOSMOSIS mAchInes exhibition/performance/discussion/panel/in-situ experiments/art/ science/ techne/ philosophy, 28 September, 2019 in Toronto

An Art/Sci Salon September 2, 2019 announcement (received via email), Note: I have made some formatting changes,

CHAOSMOSIS mAchInes

28 September, 2019 
7pm-11pm.
Helen-Gardiner-Phelan Theatre, 2nd floor
University of Toronto. 79 St. George St.

A playful co-presentation by the Topological Media Lab (Concordia U-Montreal) and The Digital Dramaturgy Labsquared (U of T-Toronto). This event is part of our collaboration with DDLsquared lab, the Topological Lab and the Leonardo LASER network


7pm-9.30pm, Installation-performances, 
9.30pm-11pm, Reception and cash bar, Front and Long Room, Ground floor


Description:
From responsive sculptures to atmosphere-creating machines; from sensorial machines to affective autonomous robots, Chaosmosis mAchInes is an eclectic series of installations and performances reflecting on today’s complex symbiotic relations between humans, machines and the environment.


This will be the first encounter between Montreal-based Topological Media Lab (Concordia University) and the Toronto-based Digital Dramaturgy Labsquared (U of T) to co-present current process-based and experimental works. Both labs have a history of notorious playfulness, conceptual abysmal depth, human-machine interplays, Art&Science speculations (what if?), collaborative messes, and a knack for A/I as in Artistic Intelligence.


Thanks to  Nina Czegledy (Laser series, Leonardo network) for inspiring the event and for initiating the collaboration


Visit our Facebook event page 
Register through Eventbrite


Supported by


Main sponsor: Centre for Drama, Theatre and Performance Studies, U of T
Sponsors: Computational Arts Program (York U.), Cognitive Science Program (U of T), Knowledge Media Design Institute (U of T), Institute for the History and Philosophy of Science and Technology (IHPST), Fonds de Recherche du Québec – Société et culture (FRQSC), The Centre for Comparative Literature (U of T)
A collaboration between
Laser events, Leonardo networks – Science Artist, Nina Czegledy
ArtsSci Salon – Artistic Director, Roberta Buiani
Digital Dramaturgy Labsquared – Creative Research Director, Antje Budde
Topological Media Lab – Artistic-Research Co-directors, Michael Montanaro | Navid Navab


Project presentations will include:
Topological Media Lab
tangibleFlux φ plenumorphic ∴ chaosmosis
SPIEL
On Air
The Sound That Severs Now from Now
Cloud Chamber (2018) | Caustic Scenography, Responsive Cloud Formation
Liquid Light
Robots: Machine Menagerie
Phaze
Phase
Passing Light
Info projects
Digital Dramaturgy Labsquared
Btw Lf & Dth – interFACING disappearance
Info project

This is a very active September.

ETA September 4, 2019 at 1607 hours PDT: That last comment is even truer than I knew when I published earlier. I missed a Vancouver event: Maker Faire Vancouver will be hosted at Science World on Saturday, September 14. Here’s a little more about it from a Sept. 3, 2019 Science World blog posting,

Earlier last month [August 2019?], surgeons at St Paul’s Hospital performed an ankle replacement for a Cloverdale resident using a 3D printed bone. The first procedure of its kind in Western Canada, it saved the patient all of his ten toes — something doctors had originally decided to amputate due to the severity of the motorcycle accident.

Maker Faire Vancouver Co-producer, John Biehler, may not be using his 3D printer for medical breakthroughs, but he does see a subtle connection between his home 3D printer and the Health Canada-approved bone.

“I got into 3D printing to make fun stuff and gadgets,” John says of the box-sized machine that started as a hobby and turned into a side business. “But the fact that the very same technology can have life-changing and life-saving applications is amazing.”

When John showed up to Maker Faire Vancouver seven years ago, opportunities to access this hobby were limited. Armed with a 3D printer he had just finished assembling the night before, John was hoping to meet others in the community with similar interests to build, experiment and create. Much like the increase in accessibility to these portable machines has changed over the years—with universities, libraries and makerspaces making them readily available alongside CNC Machines, laser cutters and more — John says the excitement around crafting and tinkering has skyrocketed as well.

“The kind of technology that inspires people to print a bone or spinal insert all starts at ground zero in places like a Maker Faire where people get exposed to STEAM,” John says …

… From 3D printing enthusiasts like John to knitters, metal artists and roboticists, this full one-day event [Maker Faire Vancouver on Saturday, September 14, 2019] will facilitate cross-pollination between hobbyists, small businesses, artists and tinkerers. Described as part science fair, part county fair and part something entirely new, Maker Faire Vancouver hopes to facilitate discovery and what John calls “pure joy moments.”

Hopefully that’s it.

Biohybrid cyborgs

Cyborgs are usually thought of as people who’ve been enhanced with some sort of technology. In contemporary real life that technology might be a pacemaker or a hip replacement, but in science fiction it’s technology such as artificial retinas that expand the range of visible light for an enhanced human.

Rarely does the topic of a microscopic life form come up in discussions about cyborgs and yet that’s exactly what an April 3, 2019 Nanowerk spotlight article by Michael Berger describes in relation to its use in water remediation efforts (Note: links have been removed),

Researchers often use living systems as inspiration for the design and engineering of micro- and nanoscale propulsion systems, actuators, sensors, and robots. …

“Although microrobots have recently proved successful for remediating decontaminated water at the laboratory scale, the major challenge in the field is to scale up these applications to actual environmental settings,” Professor Joseph Wang, Chair of Nanoengineering and Director, Center of Wearable Sensors at the University California San Diego, tells Nanowerk. “In order to do this, we need to overcome the toxicity of their chemical fuels, the short time span of biocompatible magnesium-based micromotors and the small domain operation of externally actuated microrobots.”

In their recent work on self-propelled biohybrid microrobots, Wang and his team were inspired by recent developments of biohybrid cyborgs that integrate self-propelling bacteria with functionalized synthetic nanostructures to transport materials.

“These tiny cyborgs are incredibly efficient for transport materials, but the limitation that we observed is that they do not provide large-scale fluid mixing,” notes Wang. ” We wanted to combine the best properties of both worlds. So, we searched for the best candidate to create a more robust biohybrid for mixing and we decided on using rotifers (Brachionus) as the engine of the cyborg.”

These marine microorganisms, which measure between 100 and 300 micrometers, are amazing creatures as they already possess sensing ability and energetic autonomy, and provide large-scale fluid mixing capability. They are also very resilient, can survive in very harsh environments, and are even one of the few organisms that have survived via asexual reproduction.

“Taking inspiration from the science fiction concept of a cybernetic organism, or cyborg – where an organism has enhanced abilities due to the integration of some artificial component – we developed a self-propelled biohybrid microrobot, that we named rotibot, employing rotifers as their engine,” says Fernando Soto, first author of a paper on this work (Advanced Functional Materials, “Rotibot: Use of Rotifers as Self-Propelling Biohybrid Microcleaners”).

This is the first demonstration of a biohybrid cyborg used for the removal and degradation of pollutants from solution. The technical breakthrough that allowed the team to achieve this task relies on a novel fabrication mechanism based on the selective accumulation of functionalized microbeads in the microorganism’s mouth: the rotifer serves not only as a transport vessel for active material or cargo but also as a powerful biological pump, as it creates fluid flows directed towards its mouth.

Nanowerk has made this video demonstrating a rotifer available along with a description,

“The rotibot is a rotifer (a marine microorganism) that has plastic microbeads attached to the mouth, which are functionalized with pollutant-degrading enzymes. This video illustrates a free swimming rotibot mixing tracer particles in solution. “

Here’s a link to and a citation for the paper,

Rotibot: Use of Rotifers as Self‐Propelling Biohybrid Microcleaners by Fernando Soto, Miguel Angel Lopez‐Ramirez, Itthipon Jeerapan, Berta Esteban‐Fernandez de Avila, Rupesh Kumar Mishra, Xiaolong Lu, Ingrid Chai, Chuanrui Chen, Daniel Kupor. Advanced Functional Materials DOI: https://doi.org/10.1002/adfm.201900658 First published: 28 March 2019

This paper is behind a paywall.

Berger’s April 3, 2019 Nanowerk spotlight article includes some useful images if you are interested in figuring out how these rotibots function.

AI (artificial intelligence) artist got a show at a New York City art gallery

AI artists first hit my radar in August 2018 when Christie’s Auction House advertised an art auction of a ‘painting’ by an algorithm (artificial intelligence). There’s more in my August 31, 2018 posting but, briefly, a French art collective, Obvious, submitted a painting, “Portrait of Edmond de Belamy,” created by an artificial intelligence agent, to be sold for an estimated $7,000 – $10,000. They weren’t even close. According to Ian Bogost’s March 6, 2019 article for The Atlantic, the painting sold for $432,500 in October 2018.

It has also, Bogost notes in his article, occasioned an art show (Note: Links have been removed),

… part of “Faceless Portraits Transcending Time,” an exhibition of prints recently shown [February 13 – March 5, 2019] at the HG Contemporary gallery in Chelsea, the epicenter of New York’s contemporary-art world. All of them were created by a computer.

The catalog calls the show a “collaboration between an artificial intelligence named AICAN and its creator, Dr. Ahmed Elgammal,” a move meant to spotlight, and anthropomorphize, the machine-learning algorithm that did most of the work. According to HG Contemporary, it’s the first solo gallery exhibit devoted to an AI artist.

If they hadn’t found each other in the New York art scene, the players involved could have met on a Spike Jonze film set: a computer scientist commanding five-figure print sales from software that generates inkjet-printed images; a former hotel-chain financial analyst turned Chelsea techno-gallerist with apparent ties to fine-arts nobility; a venture capitalist with two doctoral degrees in biomedical informatics; and an art consultant who put the whole thing together, A-Team–style, after a chance encounter at a blockchain conference. Together, they hope to reinvent visual art, or at least to cash in on machine-learning hype along the way.

The show in New York City, “Faceless Portraits …,” exhibited work by an artificially intelligent artist-agent (I’m creating a new term to suit my purposes) that’s different from the one used by Obvious to create “Portrait of Edmond de Belamy.” As noted earlier, it sold for a lot of money (Note: Links have been removed),

Bystanders in and out of the art world were shocked. The print had never been shown in galleries or exhibitions before coming to market at auction, a channel usually reserved for established work. The winning bid was made anonymously by telephone, raising some eyebrows; art auctions can invite price manipulation. It was created by a computer program that generates new images based on patterns in a body of existing work, whose features the AI “learns.” What’s more, the artists who trained and generated the work, the French collective Obvious, hadn’t even written the algorithm or the training set. They just downloaded them, made some tweaks, and sent the results to market.

“We are the people who decided to do this,” the Obvious member Pierre Fautrel said in response to the criticism, “who decided to print it on canvas, sign it as a mathematical formula, put it in a gold frame.” A century after Marcel Duchamp made a urinal into art [emphasis mine] by putting it in a gallery, not much has changed, with or without computers. As Andy Warhol famously said, “Art is what you can get away with.”

A bit of a segue here: there is a controversy as to whether or not that ‘urinal art’, also known as The Fountain, should be attributed to Duchamp, as noted in my January 23, 2019 posting titled ‘Baroness Elsa von Freytag-Loringhoven, Marcel Duchamp, and the Fountain’.

Getting back to the main action, Bogost goes on to describe the technologies underlying the two different AI artist-agents (Note: Links have been removed),

… Using a computer is hardly enough anymore; today’s machines offer all kinds of ways to generate images that can be output, framed, displayed, and sold—from digital photography to artificial intelligence. Recently, the fashionable choice has become generative adversarial networks, or GANs, the technology that created Portrait of Edmond de Belamy. Like other machine-learning methods, GANs use a sample set—in this case, art, or at least images of it—to deduce patterns, and then they use that knowledge to create new pieces. A typical Renaissance portrait, for example, might be composed as a bust or three-quarter view of a subject. The computer may have no idea what a bust is, but if it sees enough of them, it might learn the pattern and try to replicate it in an image.

GANs use two neural nets (a way of processing information modeled after the human brain) to produce images: a “generator” and a “discerner.” The generator produces new outputs—images, in the case of visual art—and the discerner tests them against the training set to make sure they comply with whatever patterns the computer has gleaned from that data. The quality or usefulness of the results depends largely on having a well-trained system, which is difficult.
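To make the generator-versus-discerner loop concrete, here is a toy sketch of my own (Python with NumPy), not anything from Bogost’s article or the systems he describes: a two-parameter “generator” learns to imitate samples from a simple Gaussian “dataset” while a tiny logistic discerner (usually called a discriminator, here given a quadratic feature so it can notice spread) tries to tell real from generated. Real GANs use deep networks and images, but the alternating update has the same shape.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# "Real" data the generator must learn to imitate (a stand-in for a corpus of paintings).
def real_batch(n):
    return rng.normal(loc=0.0, scale=2.0, size=n)

# Generator: g(z) = a*z + b.   Discriminator: D(x) = sigmoid(w1*x + w2*x**2 + c).
a, b = 0.5, 1.0                 # generator starts with the wrong mean and spread
w1, w2, c = 0.0, 0.0, 0.0
lr, batch = 0.005, 256

for step in range(20000):
    # --- discriminator step: push D(real) toward 1 and D(fake) toward 0 ---
    x_r = real_batch(batch)
    x_f = a * rng.normal(size=batch) + b
    d_r = sigmoid(w1 * x_r + w2 * x_r**2 + c)
    d_f = sigmoid(w1 * x_f + w2 * x_f**2 + c)
    w1 -= lr * np.mean(-(1 - d_r) * x_r    + d_f * x_f)
    w2 -= lr * np.mean(-(1 - d_r) * x_r**2 + d_f * x_f**2)
    c  -= lr * np.mean(-(1 - d_r)          + d_f)

    # --- generator step: push D(fake) toward 1 (non-saturating GAN loss) ---
    z = rng.normal(size=batch)
    x_f = a * z + b
    d_f = sigmoid(w1 * x_f + w2 * x_f**2 + c)
    dloss_dx = -(1 - d_f) * (w1 + 2 * w2 * x_f)
    a -= lr * np.mean(dloss_dx * z)
    b -= lr * np.mean(dloss_dx)

# With this toy setup the generated mean/std should end up roughly near the
# real data's 0.0 and 2.0, though adversarial training is famously touchy.
fake = a * rng.normal(size=10000) + b
print(f"generated samples: mean {fake.mean():.2f}, std {fake.std():.2f} "
      f"(real data: mean 0.0, std 2.0)")
```

The exact numbers wobble with the setup and the number of steps, but the point is the structure: two models trained against each other, with the generator graded only by whether it fools the other one.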

That’s why folks in the know were upset by the Edmond de Belamy auction. The image was created by an algorithm the artists didn’t write, trained on an “Old Masters” image set they also didn’t create. The art world is no stranger to trend and bluster driving attention, but the brave new world of AI painting appeared to be just more found art, the machine-learning equivalent of a urinal on a plinth.

Ahmed Elgammal thinks AI art can be much more than that. A Rutgers University professor of computer science, Elgammal runs an art-and-artificial-intelligence lab, where he and his colleagues develop technologies that try to understand and generate new “art” (the scare quotes are Elgammal’s) with AI—not just credible copies of existing work, like GANs do. “That’s not art, that’s just repainting,” Elgammal says of GAN-made images. “It’s what a bad artist would do.”

Elgammal calls his approach a “creative adversarial network,” or CAN. It swaps a GAN’s discerner—the part that ensures similarity—for one that introduces novelty instead. The system amounts to a theory of how art evolves: through small alterations to a known style that produce a new one. That’s a convenient take, given that any machine-learning technique has to base its work on a specific training set.
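Here is how that swap might look in code, again my own illustrative sketch (Python with NumPy) rather than Elgammal’s published loss: alongside the usual “does this look like art?” term, the generator is penalized whenever the discriminator’s style classifier can confidently place the output in a known style, which is what supplies the pressure toward novelty.

```python
import numpy as np

def can_generator_loss(p_art, style_probs, eps=1e-9):
    """Toy rendering of a 'creative adversarial network' generator objective.
    p_art:       shape (batch,), discriminator's probability that each generated
                 piece counts as art at all (the standard GAN-style term).
    style_probs: shape (batch, n_styles), the discriminator's style-classifier
                 output for each generated piece.
    The second term is the cross-entropy against a uniform distribution over
    styles; it is smallest when the piece fits no single known style, which is
    the 'novelty' pressure replacing a plain GAN's similarity pressure."""
    realism = -np.mean(np.log(p_art + eps))
    n_styles = style_probs.shape[1]
    ambiguity = -np.mean(np.sum((1.0 / n_styles) * np.log(style_probs + eps), axis=1))
    return realism + ambiguity

# Example with made-up discriminator outputs for a batch of two generated images:
p_art = np.array([0.8, 0.6])
style_probs = np.array([[0.25, 0.25, 0.25, 0.25],   # style-ambiguous: low penalty
                        [0.97, 0.01, 0.01, 0.01]])  # clearly one style: high penalty
print(can_generator_loss(p_art, style_probs))
```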

The results are striking and strange, although calling them a new artistic style might be a stretch. They’re more like credible takes on visual abstraction. The images in the show, which were produced based on training sets of Renaissance portraits and skulls, are more figurative, and fairly disturbing. Their gallery placards name them dukes, earls, queens, and the like, although they depict no actual people—instead, human-like figures, their features smeared and contorted yet still legible as portraiture. Faceless Portrait of a Merchant, for example, depicts a torso that might also read as the front legs and rear haunches of a hound. Atop it, a fleshy orb comes across as a head. The whole scene is rippled by the machine-learning algorithm, in the way of so many computer-generated artworks.

Faceless Portrait of a Merchant, one of the AI portraits produced by Ahmed Elgammal and AICAN. (Artrendex Inc.) [downloaded from https://www.theatlantic.com/technology/archive/2019/03/ai-created-art-invades-chelsea-gallery-scene/584134/]

Bogost consults an expert on portraiture for a discussion about the particularities of portraiture and the shortcomings one might expect of an AI artist-agent (Note: A link has been removed),

“You can’t really pick a form of painting that’s more charged with cultural meaning than portraiture,” John Sharp, an art historian trained in 15th-century Italian painting and the director of the M.F.A. program in design and technology at Parsons School of Design, told me. The portrait isn’t just a style, it’s also a host for symbolism. “For example, men might be shown with an open book to show how they are in dialogue with that material; or a writing implement, to suggest authority; or a weapon, to evince power.” Take Portrait of a Youth Holding an Arrow, an early-16th-century Boltraffio portrait that helped train the AICAN database for the show. The painting depicts a young man, believed to be the Bolognese poet Girolamo Casio, holding an arrow at an angle in his fingers and across his chest. It doubles as both weapon and quill, a potent symbol of poetry and aristocracy alike. Along with the arrow, the laurels in Casio’s hair are emblems of Apollo, the god of both poetry and archery.

A neural net couldn’t infer anything about the particular symbolic trappings of the Renaissance or antiquity—unless it was taught to, and that wouldn’t happen just by showing it lots of portraits. For Sharp and other critics of computer-generated art, the result betrays an unforgivable ignorance about the supposed influence of the source material.

But for the purposes of the show, the appeal to the Renaissance might be mostly a foil, a way to yoke a hip, new technology to traditional painting in order to imbue it with the gravity of history: not only a Chelsea gallery show, but also an homage to the portraiture found at the Met. To reinforce a connection to the cradle of European art, some of the images are presented in elaborate frames, a decision the gallerist, Philippe Hoerle-Guggenheim (yes, that Guggenheim; he says the relation is “distant”) [the Guggenheim name is strongly associated with the visual arts by way of two Guggenheim museums, one in New York City and the other in Bilbao, Spain], told me he insisted upon. Meanwhile, the technical method makes its way onto the gallery placards in an official-sounding way—“Creative Adversarial Network print.” But both sets of inspirations, machine-learning and Renaissance portraiture, get limited billing and zero explanation at the show. That was deliberate, Hoerle-Guggenheim said. He’s betting that the simple existence of a visually arresting AI painting will be enough to draw interest—and buyers. It would turn out to be a good bet.

This is a fascinating article and I have one last excerpt, which poses this question: is an AI artist-agent a collaborator or a medium? There’s also speculation about how AI artist-agents might impact the business of art (Note: Links have been removed),

… it’s odd to list AICAN as a collaborator—painters credit pigment as a medium, not as a partner. Even the most committed digital artists don’t present the tools of their own inventions that way; when they do, it’s only after years, or even decades, of ongoing use and refinement.

But Elgammal insists that the move is justified because the machine produces unexpected results. “A camera is a tool—a mechanical device—but it’s not creative,” he said. “Using a tool is an unfair term for AICAN. It’s the first time in history that a tool has had some kind of creativity, that it can surprise you.” Casey Reas, a digital artist who co-designed the popular visual-arts-oriented coding platform Processing, which he uses to create some of his fine art, isn’t convinced. “The artist should claim responsibility over the work rather than to cede that agency to the tool or the system they create,” he told me.

Elgammal’s financial interest in AICAN might explain his insistence on foregrounding its role. Unlike a specialized print-making technique or even the Processing coding environment, AICAN isn’t just a device that Elgammal created. It’s also a commercial enterprise.

Elgammal has already spun off a company, Artrendex, that provides “artificial-intelligence innovations for the art market.” One of them offers provenance authentication for artworks; another can suggest works a viewer or collector might appreciate based on an existing collection; another, a system for cataloging images by visual properties and not just by metadata, has been licensed by the Barnes Foundation to drive its collection-browsing website.

The company’s plans are more ambitious than recommendations and fancy online catalogs. When presenting on a panel about the uses of blockchain for managing art sales and provenance, Elgammal caught the attention of Jessica Davidson, an art consultant who advises artists and galleries in building collections and exhibits. Davidson had been looking for business-development partnerships, and she became intrigued by AICAN as a marketable product. “I was interested in how we can harness it in a compelling way,” she says.

The art market is just that: a market. Some of the most renowned names in art today, from Damien Hirst to Banksy, trade in the trade of art as much as—and perhaps even more than—in the production of images, objects, and aesthetics. No artist today can avoid entering that fray, Elgammal included. “Is he an artist?” Hoerle-Guggenheim asked himself of the computer scientist. “Now that he’s in this context, he must be.” But is that enough? In Sharp’s estimation, “Faceless Portraits Transcending Time” is a tech demo more than a deliberate oeuvre, even compared to the machine-learning-driven work of his design-and-technology M.F.A. students, who self-identify as artists first.

Judged as Banksy or Hirst might be, Elgammal’s most art-worthy work might be the Artrendex start-up itself, not the pigment-print portraits that its technology has output. Elgammal doesn’t treat his commercial venture like a secret, but he also doesn’t surface it as a beneficiary of his supposedly earnest solo gallery show. He’s argued that AI-made images constitute a kind of conceptual art, but conceptualists tend to privilege process over product or to make the process as visible as the product.

Hoerle-Guggenheim worked as a financial analyst[emphasis mine] for Hyatt before getting into the art business via some kind of consulting deal (he responded cryptically when I pressed him for details). …

If you have the time, I recommend reading Bogost’s March 6, 2019 article for The Atlantic in its entirety; these excerpts don’t do it justice.

Portraiture: what does it mean these days?

After reading the article I have a few questions. What exactly do Bogost and the arty types in the article mean by the word ‘portrait’? “Portrait of Edmond de Belamy” is an image of someone who doesn’t exist and never has, and the exhibit “Faceless Portraits Transcending Time” features images that don’t bear much or, in some cases, any resemblance to human beings. Maybe this is considered a dull question by people in the know but I’m an outsider and I found the paradox, portraits of nonexistent people or nonpeople, kind of interesting.

BTW, I double-checked my assumption about portraits and found this definition in the Portrait Wikipedia entry (Note: Links have been removed),

A portrait is a painting, photograph, sculpture, or other artistic representation of a person [emphasis mine], in which the face and its expression is predominant. The intent is to display the likeness, personality, and even the mood of the person. For this reason, in photography a portrait is generally not a snapshot, but a composed image of a person in a still position. A portrait often shows a person looking directly at the painter or photographer, in order to most successfully engage the subject with the viewer.

So, portraits that aren’t portraits give rise to some philosophical questions but Bogost either didn’t want to jump into that rabbit hole (segue into yet another topic) or, as I hinted earlier, may have assumed his audience had previous experience of those kinds of discussions.

Vancouver (Canada) and a ‘portraiture’ exhibit at the Rennie Museum

By one of life’s coincidences, Vancouver’s Rennie Museum had an exhibit (February 16 – June 15, 2019) that illuminates questions about art collecting and portraiture. From a February 7, 2019 Rennie Museum news release,

[downloaded from https://renniemuseum.org/press-release-spring-2019-collected-works/] Courtesy: Rennie Museum

February 7, 2019

Press Release | Spring 2019: Collected Works
By rennie museum

rennie museum is pleased to present Spring 2019: Collected Works, a group exhibition encompassing the mediums of photography, painting and film. A portraiture of the collecting spirit [emphasis mine], the works exhibited invite exploration of what collected objects, and both the considered and unintentional ways they are displayed, inform us. Featuring the works of four artists—Andrew Grassie, William E. Jones, Louise Lawler and Catherine Opie—the exhibition runs from February 16 to June 15, 2019.

Four exquisite paintings by Scottish painter Andrew Grassie detailing the home and private storage space of a major art collector provide a peek at how the passionately devoted integrates and accommodates the physical embodiments of such commitment into daily life. Grassie’s carefully constructed, hyper-realistic images also pose the question, “What happens to art once it’s sold?” In the transition from pristine gallery setting to idiosyncratic private space, how does the new context infuse our reading of the art and how does the art shift our perception of the individual?

Furthering the inquiry into the symbiotic exchange between possessor and possession, a selection of images by American photographer Louise Lawler depicting art installed in various private and public settings question how the bilateral relationship permeates our interpretation when the collector and the collected are no longer immediately connected. What does de-acquisitioning an object inform us and how does provenance affect our consideration of the art?

The question of legacy became an unexpected facet of 700 Nimes Road (2010-2011), American photographer Catherine Opie’s portrait of legendary actress Elizabeth Taylor. Opie did not directly photograph Taylor for any of the fifty images in the expansive portfolio. Instead, she focused on Taylor’s home and the objects within, inviting viewers to see—then see beyond—the façade of fame and consider how both treasures and trinkets act as vignettes to the stories of a life. Glamorous images of jewels and trophies juxtapose with mundane shots of a printer and the remote-control user manual. Groupings of major artworks on the wall are as illuminating of the home’s mistress as clusters of personal photos. Taylor passed away part way through Opie’s project. The subsequent photos include Taylor’s mementos heading off to auction, raising the question, “Once the collections that help to define someone are disbursed, will our image of that person lose focus?”

In a similar fashion, the twenty-two photographs in Villa Iolas (1982/2017), by American artist and filmmaker William E. Jones, depict the Athens home of iconic art dealer and collector Alexander Iolas. Taken in 1982 by Jones during his first travels abroad, the photographs of art, furniture and antiquities tell a story of privilege that contrast sharply with the images Jones captures on a return visit in 2016. Nearly three decades after Iolas’s 1989 death, his home sits in dilapidation, looted and vandalized. Iolas played an extraordinary role in the evolution of modern art, building the careers of Max Ernst, Yves Klein and Giorgio de Chirico. He gave Andy Warhol his first solo exhibition and was a key advisor to famed collectors John and Dominique de Menil. Yet in the years since his death, his intention of turning his home into a modern art museum as a gift to Greece, along with his reputation, crumbled into ruins. The photographs taken by Jones during his visits in two different eras are incorporated into the film Fall into Ruin (2017), along with shots of contemporary Athens and antiquities on display at the National Archaeological Museum.

“I ask a lot of questions about how portraiture functions – what is there to describe the person or time we live in or a certain set of politics…”
 – Catherine Opie, The Guardian, Feb 9, 2016

We tend to think of the act of collecting as a formal activity yet it can happen casually on a daily basis, often in trivial ways. While we readily acknowledge a collector consciously assembling with deliberate thought, we give lesser consideration to the arbitrary accumulations that each of us accrue. Be it master artworks, incidental baubles or random curios, the objects we acquire and surround ourselves with tell stories of who we are.

Andrew Grassie (Scotland, b. 1966) is a painter known for his small scale, hyper-realist works. He has been the subject of solo exhibitions at the Tate Britain; Talbot Rice Gallery, Edinburgh; institut supérieur des arts de Toulouse; and rennie museum, Vancouver, Canada. He lives and works in London, England.

William E. Jones (USA, b. 1962) is an artist, experimental film-essayist and writer. Jones’s work has been the subject of retrospectives at Tate Modern, London; Anthology Film Archives, New York; Austrian Film Museum, Vienna; and, Oberhausen Short Film Festival. He is a recipient of the John Simon Guggenheim Memorial Fellowship and the Creative Capital/Andy Warhol Foundation Arts Writers Grant. He lives and works in Los Angeles, USA.

Louise Lawler (USA, b. 1947) is a photographer and one of the foremost members of the Pictures Generation. Lawler was the subject of a major retrospective at the Museum of Modern Art, New York in 2017. She has held exhibitions at the Whitney Museum of American Art, New York; Stedelijk Museum, Amsterdam; National Museum of Art, Oslo; and Musée d’Art Moderne de La Ville de Paris. She lives and works in New York.

Catherine Opie (USA, b. 1961) is a photographer and educator. Her work has been exhibited at Wexner Center for the Arts, Ohio; Henie Onstad Art Center, Oslo; the Los Angeles County Museum of Art; Portland Art Museum; and the Guggenheim Museum, New York. She is the recipient of a United States Artist Fellowship, Julius Shulman’s Excellence in Photography Award, and the Smithsonian’s Archive of American Art Medal. She lives and works in Los Angeles.

rennie museum opened in October 2009 in historic Wing Sang, the oldest structure in Vancouver’s Chinatown, to feature dynamic exhibitions comprising only of art drawn from rennie collection. Showcasing works by emerging and established international artists, the exhibits, accompanied by supporting catalogues, are open free to the public through engaging guided tours. The museum’s commitment to providing access to arts and culture is also expressed through its education program, which offers free age-appropriate tours and customized workshops to children of all ages.

rennie collection is a globally recognized collection of contemporary art that focuses on works that tackle issues related to identity, social commentary and injustice, appropriation, and the nature of painting, photography, sculpture and film. Currently the collection includes works by over 370 emerging and established artists, with over fifty collected in depth. The Vancouver based collection engages actively with numerous museums globally through a robust, artist-centric, lending policy.

So despite the Wikipedia definition, it seems that portraits don’t always feature people. While Bogost didn’t jump into that particular rabbit hole, he did touch on the business side of art.

What about intellectual property?

Bogost doesn’t explicitly discuss this particular issue. It’s a big topic so I’m touching on it only lightly. If an artist works with an AI, the question as to ownership of the artwork could prove thorny. Is the copyright owner the computer scientist or the artist or both? Or does the AI artist-agent itself own the copyright? That last question may not be all that farfetched. Sophia, a social humanoid robot, has occasioned thought about ‘personhood.’ (Note: The robots mentioned in this posting have artificial intelligence.) From the Sophia (robot) Wikipedia entry (Note: Links have been removed),

Sophia has been interviewed in the same manner as a human, striking up conversations with hosts. Some replies have been nonsensical, while others have impressed interviewers such as 60 Minutes’ Charlie Rose.[12] In a piece for CNBC, when the interviewer expressed concerns about robot behavior, Sophia joked that he had “been reading too much Elon Musk. And watching too many Hollywood movies”.[27] Musk tweeted that Sophia should watch The Godfather and asked “what’s the worst that could happen?”[28][29] Business Insider’s chief UK editor Jim Edwards interviewed Sophia, and while the answers were “not altogether terrible”, he predicted it was a step towards “conversational artificial intelligence”.[30] At the 2018 Consumer Electronics Show, a BBC News reporter described talking with Sophia as “a slightly awkward experience”.[31]

On October 11, 2017, Sophia was introduced to the United Nations with a brief conversation with the United Nations Deputy Secretary-General, Amina J. Mohammed.[32] On October 25, at the Future Investment Summit in Riyadh, the robot was granted Saudi Arabian citizenship [emphasis mine], becoming the first robot ever to have a nationality.[29][33] This attracted controversy as some commentators wondered if this implied that Sophia could vote or marry, or whether a deliberate system shutdown could be considered murder. Social media users used Sophia’s citizenship to criticize Saudi Arabia’s human rights record. In December 2017, Sophia’s creator David Hanson said in an interview that Sophia would use her citizenship to advocate for women’s rights in her new country of citizenship; Newsweek criticized that “What [Hanson] means, exactly, is unclear”.[34] On November 27, 2018 Sophia was given a visa by Azerbaijan while attending Global Influencer Day Congress held in Baku. On December 15, 2018, Sophia was appointed a Belt and Road Innovative Technology Ambassador by China.[35]

As for an AI artist-agent’s intellectual property rights, I have a July 10, 2017 posting featuring that question in more detail. Whether you read that piece or not, it seems obvious that artists might hesitate to call an AI agent a partner rather than a medium of expression. After all, a partner (and/or the computer scientist who developed the programme) might expect to share in property rights and profits, but paint, marble, plastic, and other media used by artists don’t have those expectations.

Moving slightly off topic, in my July 10, 2017 posting I mentioned a competition (literary and performing arts rather than visual arts) called ‘Dartmouth College and its Neukom Institute Prizes in Computational Arts’. It was started in 2016 and, as of 2018, was still operating under the name Creative Turing Tests. Assuming the contests run again in 2019, the categories are (from the contest site): [1] PoetiX, a competition in computer-generated sonnet writing; [2] Musical Style, composition algorithms in various styles and human-machine improvisation …; and [3] DigiLit, algorithms able to produce “human-level” short story writing that is indistinguishable from an “average” human effort. You can find the contest site here.

Human Brain Project: update

The European Union’s Human Brain Project was announced in January 2013. It, along with the Graphene Flagship, had won a multi-year competition for the extraordinary sum of one billion euros each, to be paid out over a 10-year period. (My January 28, 2013 posting gives the details available at the time.)

With the project a little more than halfway through its funding period, Ed Yong, in his July 22, 2019 article for The Atlantic, offers an update (of sorts),

Ten years ago, a neuroscientist said that within a decade he could simulate a human brain. Spoiler: It didn’t happen.

On July 22, 2009, the neuroscientist Henry Markram walked onstage at the TEDGlobal conference in Oxford, England, and told the audience that he was going to simulate the human brain, in all its staggering complexity, in a computer. His goals were lofty: “It’s perhaps to understand perception, to understand reality, and perhaps to even also understand physical reality.” His timeline was ambitious: “We can do it within 10 years, and if we do succeed, we will send to TED, in 10 years, a hologram to talk to you.” …

It’s been exactly 10 years. He did not succeed.

One could argue that the nature of pioneers is to reach far and talk big, and that it’s churlish to single out any one failed prediction when science is so full of them. (Science writers joke that breakthrough medicines and technologies always seem five to 10 years away, on a rolling window.) But Markram’s claims are worth revisiting for two reasons. First, the stakes were huge: In 2013, the European Commission awarded his initiative—the Human Brain Project (HBP)—a staggering 1 billion euro grant (worth about $1.42 billion at the time). Second, the HBP’s efforts, and the intense backlash to them, exposed important divides in how neuroscientists think about the brain and how it should be studied.

Markram’s goal wasn’t to create a simplified version of the brain, but a gloriously complex facsimile, down to the constituent neurons, the electrical activity coursing along them, and even the genes turning on and off within them. From the outset, criticism of this approach was widespread, and to many other neuroscientists, its bottom-up strategy seemed implausible to the point of absurdity. The brain’s intricacies—how neurons connect and cooperate, how memories form, how decisions are made—are more unknown than known, and couldn’t possibly be deciphered in enough detail within a mere decade. It is hard enough to map and model the 302 neurons of the roundworm C. elegans, let alone the 86 billion neurons within our skulls. “People thought it was unrealistic and not even reasonable as a goal,” says the neuroscientist Grace Lindsay, who is writing a book about modeling the brain.
And what was the point? The HBP wasn’t trying to address any particular research question, or test a specific hypothesis about how the brain works. The simulation seemed like an end in itself—an overengineered answer to a nonexistent question, a tool in search of a use. …

Markram seems undeterred. In a recent paper, he and his colleague Xue Fan firmly situated brain simulations within not just neuroscience as a field, but the entire arc of Western philosophy and human civilization. And in an email statement, he told me, “Political resistance (non-scientific) to the project has indeed slowed us down considerably, but it has by no means stopped us nor will it.” He noted the 140 people still working on the Blue Brain Project, a recent set of positive reviews from five external reviewers, and its “exponentially increasing” ability to “build biologically accurate models of larger and larger brain regions.”

No time frame, this time, but there’s no shortage of other people ready to make extravagant claims about the future of neuroscience. In 2014, I attended TED’s main Vancouver conference and watched the opening talk, from the MIT Media Lab founder Nicholas Negroponte. In his closing words, he claimed that in 30 years, “we are going to ingest information. …

I’m happy to see the update. As I recall, there was murmuring about the Human Brain Project (HBP) almost immediately. I never got the details, but it seemed that people were quite unhappy about how the money was being disbursed. Of course, this kind of uproar is not unusual when great sums of money are involved, and the Graphene Flagship also had its rocky moments.

As for Yong’s contribution, I’m glad he’s debunking some of the hype and glory associated with the current drive to colonize the human brain and with other efforts (e.g., genetics) that are often claimed to be the ‘future of medicine’.

To be fair, Yong focuses on the brain simulation aspect of the HBP (and on Markram’s efforts in the Blue Brain Project), but there are other HBP efforts as well, even if brain simulation seems to be the HBP’s main interest.

After reading the article, I looked up Henry Markram’s Wikipedia entry and found this,

In 2013, the European Union funded the Human Brain Project, led by Markram, to the tune of $1.3 billion. Markram claimed that the project would create a simulation of the entire human brain on a supercomputer within a decade, revolutionising the treatment of Alzheimer’s disease and other brain disorders. Less than two years into it, the project was recognised to be mismanaged and its claims overblown, and Markram was asked to step down.[7][8]

On 8 October 2015, the Blue Brain Project published the first digital reconstruction and simulation of the micro-circuitry of a neonatal rat somatosensory cortex.[9]

I also looked up the Human Brain Project and, speaking of its other efforts, was reminded that it has a neuromorphic computing platform, which includes SpiNNaker (mentioned here in a January 24, 2019 posting; scroll down about 50% of the way). For anyone unfamiliar with the term, neuromorphic computing/engineering is what scientists call the effort to replicate, in computer processors, the human brain’s ability to synthesize and process information.
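To make that a little more concrete, here is a minimal sketch, entirely my own toy example rather than anything from the HBP or SpiNNaker, of a leaky integrate-and-fire neuron, the kind of simple spiking model that neuromorphic machines are built to run by the millions. All of the parameter values are arbitrary and chosen only for readability.

```python
# Illustrative only: a toy leaky integrate-and-fire (LIF) neuron.
# Not HBP or SpiNNaker code; parameters are arbitrary.

import numpy as np

def simulate_lif(input_current, dt=1e-3, tau=0.02, v_rest=-65.0,
                 v_reset=-70.0, v_threshold=-50.0, resistance=10.0):
    """Simulate one LIF neuron; return the voltage trace and spike times."""
    v = v_rest
    voltages, spike_times = [], []
    for step, current in enumerate(input_current):
        # Membrane voltage leaks back toward rest and is pushed up by input.
        dv = (-(v - v_rest) + resistance * current) * (dt / tau)
        v += dv
        if v >= v_threshold:              # threshold crossed: emit a spike ...
            spike_times.append(step * dt)
            v = v_reset                   # ... then reset the membrane voltage
        voltages.append(v)
    return np.array(voltages), spike_times

# One second of constant input (arbitrary units), in 1 ms steps.
trace, spikes = simulate_lif(np.full(1000, 2.0))
print(f"{len(spikes)} spikes in 1 s of simulated biological time")
```

The point of the sketch is the event-driven character of the computation: the neuron mostly integrates quietly and only occasionally emits a spike, which is exactly the traffic that SpiNNaker’s packet-based spike network (described in the HBP quote below) is optimized to carry.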

In fact, there was some discussion in 2013 that the Human Brain Project and the Graphene Flagship would have some crossover projects, e.g., trying to make computers more closely resemble human brains in terms of energy use and processing power.

The Human Brain Project’s (HBP) Silicon Brains webpage notes this about their neuromorphic computing platform,

Neuromorphic computing implements aspects of biological neural networks as analogue or digital copies on electronic circuits. The goal of this approach is twofold: Offering a tool for neuroscience to understand the dynamic processes of learning and development in the brain and applying brain inspiration to generic cognitive computing. Key advantages of neuromorphic computing compared to traditional approaches are energy efficiency, execution speed, robustness against local failures and the ability to learn.

Neuromorphic Computing in the HBP

In the HBP the neuromorphic computing Subproject carries out two major activities: Constructing two large-scale, unique neuromorphic machines and prototyping the next generation neuromorphic chips.

The large-scale neuromorphic machines are based on two complementary principles. The many-core SpiNNaker machine located in Manchester [emphasis mine] (UK) connects 1 million ARM processors with a packet-based network optimized for the exchange of neural action potentials (spikes). The BrainScaleS physical model machine located in Heidelberg (Germany) implements analogue electronic models of 4 Million neurons and 1 Billion synapses on 20 silicon wafers. Both machines are integrated into the HBP collaboratory and offer full software support for their configuration, operation and data analysis.

The most prominent feature of the neuromorphic machines is their execution speed. The SpiNNaker system runs at real-time; BrainScaleS is implemented as an accelerated system and operates at 10,000 times real-time. Simulations at conventional supercomputers typically run factors of 1000 slower than biology and cannot access the vastly different timescales involved in learning and development ranging from milliseconds to years.

Recent research in neuroscience and computing has indicated that learning and development are a key aspect for neuroscience and real world applications of cognitive computing. HBP is the only project worldwide addressing this need with dedicated novel hardware architectures.
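The execution-speed figures in that quote are easier to appreciate with a bit of back-of-the-envelope arithmetic. This is my own illustration; only the speed-up and slow-down factors come from the HBP text above, and the choice of one hour of biological time is mine.

```python
# Back-of-the-envelope arithmetic for the quoted execution speeds.
# Only the factors (real-time, 10,000x faster, ~1,000x slower) are from the
# HBP quote; everything else is an arbitrary illustration.

biological_seconds = 3600  # simulate one hour of biological time

wall_clock_factor = {
    "SpiNNaker (real time)": 1.0,                     # wall clock == biology
    "BrainScaleS (10,000x faster than biology)": 1 / 10_000,
    "Conventional supercomputer (~1,000x slower)": 1_000.0,
}

for platform, factor in wall_clock_factor.items():
    seconds = biological_seconds * factor
    print(f"{platform}: {seconds:,.2f} s of wall-clock time "
          f"(~{seconds / 86_400:.2f} days)")
```

In other words, an hour of biological time takes an hour on SpiNNaker, well under a second on BrainScaleS, and roughly six weeks on a conventional supercomputer at the quoted factor, which presumably is why the HBP puts so much emphasis on execution speed for studying learning and development.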

I’ve highlighted Manchester because that’s a very important city where graphene is concerned. The UK’s National Graphene Institute is housed at the University of Manchester, where graphene was first isolated in 2004 by two scientists, Andre Geim and Konstantin (Kostya) Novoselov. (For their efforts, they were awarded the Nobel Prize in Physics in 2010.)

Getting back to the HBP (and the Graphene Flagship, for that matter), the funding should dry up sometime around 2023, and I wonder whether it will be possible to assess the impact.