Tag Archives: artificial intelligence (AI)

SFU’s Philippe Pasquier speaks at “The rise of Creative AI and its ethics” online event on Tuesday, January 11, 2022 at 6 am PST

Simon Fraser University’s (SFU) Metacreation Lab for Creative AI (artificial intelligence) in Vancouver, Canada, has just sent me (via email) a January 2022 newsletter, which you can find here. There are two items I found of special interest.

Max Planck Centre for Humans and Machines Seminars

From the January 2022 newsletter,

Max Planck Institute Seminar – The rise of Creative AI & its ethics
January 11, 2022 at 15:00 pm [sic] CET | 6:00 am PST

Next Monday [sic], Philippe Pasquier, director of the Metacreation Lab, will
be providing a seminar titled “The rise of Creative AI & its ethics”
[Tuesday, January 11, 2022] at the Max Planck Institute’s Centre for Humans and
Machine [sic].

The Centre for Humans and Machines invites interested attendees to
our public seminars, which feature scientists from our institute and
experts from all over the world. Their seminars usually take 1 hour and
provide an opportunity to meet the speaker afterwards.

The seminar is openly accessible to the public via Webex Access, and
will be a great opportunity to connect with colleagues and friends of
the Lab on European and East Coast time. For more information and the
link, head to the Centre for Humans and Machines’ Seminars page linked
below.

Max Planck Institute – Upcoming Events

The Centre’s seminar description offers an abstract for the talk and a profile of Philippe Pasquier,

Creative AI is the subfield of artificial intelligence concerned with the partial or complete automation of creative tasks. In turn, creative tasks are those for which the notion of optimality is ill-defined. Unlike car driving, chess moves, jeopardy answers or literal translations, creative tasks are more subjective in nature. Creative AI approaches have been proposed and evaluated in virtually every creative domain: design, visual art, music, poetry, cooking, … These algorithms most often perform at human-competitive or superhuman levels for their precise task. Two main uses of these algorithms have emerged that have implications for workflows reminiscent of the industrial revolution:

– Augmentation (a.k.a, computer-assisted creativity or co-creativity): a human operator interacts with the algorithm, often in the context of already existing creative software.

– Automation (computational creativity): the creative task is performed entirely by the algorithms without human intervention in the generation process.

Both usages will have deep implications for education and work in creative fields. Away from the fear of strong – sentient – AI, taking over the world: What are the implications of these ongoing developments for students, educators and professionals? How will Creative AI transform the way we create, as well as what we create?

Philippe Pasquier is a professor at Simon Fraser University’s School for Interactive Arts and Technology, where he has directed the Metacreation Lab for Creative AI since 2008. Philippe leads a research-creation program centred around generative systems for creative tasks. As such, he is a scientist specialized in artificial intelligence, a multidisciplinary media artist, an educator, and a community builder. His contributions range across theoretical research on generative systems, computational creativity, multi-agent systems, machine learning, affective computing, and evaluation methodologies. This work is applied in the creative software industry as well as through artistic practice in computer music, interactive and generative art.

Interpreting soundscapes

Folks at the Metacreation Lab have made available an interactive search engine for sounds, from the January 2022 newsletter,

Audio Metaphor is an interactive search engine that transforms users’ queries into soundscapes that interpret them. Using state-of-the-art algorithms for sound retrieval, segmentation, and background and foreground classification, AuMe offers a way to explore the vast open-source library of sounds available on the freesound.org online community through natural language and its semantic, symbolic, and metaphorical expressions.

We’re excited to see Audio Metaphor included among many other innovative projects on Freesound Labs, a directory of projects, hacks, apps, research and other initiatives that use content from Freesound or use the Freesound API. Take a minute to check out the variety of projects applying creative coding, machine learning, and many other techniques towards the exploration of sound and music creation, generative music, and soundscape composition in diverse forms and interfaces.

Explore AuMe and other FreeSound Labs projects    

The Audio Metaphor (AuMe) webpage on the Metacreation Lab website has a few more details about the search engine,

Audio Metaphor (AuMe) is a research project aimed at designing new methodologies and tools for sound design and composition practices in film, games, and sound art. Through this project, we have identified the processes involved in working with audio recordings in creative environments, addressing these in our research by implementing computational systems that can assist human operations.

We have successfully developed Audio Metaphor for the retrieval of audio file recommendations from natural language texts, and even used phrases generated automatically from Twitter to sonify the current state of Web 2.0. Another significant achievement of the project has been in the segmentation and classification of environmental audio with composition-specific categories, which were then applied in a generative system approach. This allows users to generate sound design simply by entering textual prompts.

As we direct Audio Metaphor further toward perception and cognition, we will continue to contribute to the music information retrieval field through environmental audio classification and segmentation. The project will continue to be instrumental in the design and implementation of new tools for sound designers and artists.

See more information on the website audiometaphor.ca.
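For readers curious how such a pipeline hangs together, here is a minimal sketch in Python of the general approach described above: take a natural language query, retrieve matching sounds, split them into background and foreground, and layer them into a soundscape. This is not the Metacreation Lab's code; the toy library and the helper functions (retrieve_sounds, classify_layer, compose_soundscape) are invented for illustration.

```python
# A minimal sketch of a text-to-soundscape pipeline in the spirit of Audio Metaphor.
# Not the Lab's code: the library, helpers, and keyword matching are all invented.

from dataclasses import dataclass
from typing import List

@dataclass
class Clip:
    title: str
    tags: List[str]
    layer: str = "unknown"   # "background" or "foreground"

def retrieve_sounds(query: str) -> List[Clip]:
    """Hypothetical retrieval step: match query words against tagged clips
    (a real system would query a sound library such as freesound.org)."""
    library = [
        Clip("rain_on_window", ["rain", "storm", "interior"]),
        Clip("distant_thunder", ["storm", "thunder"]),
        Clip("cafe_chatter", ["people", "cafe", "interior"]),
    ]
    words = set(query.lower().split())
    return [c for c in library if words & set(c.tags)]

def classify_layer(clip: Clip) -> Clip:
    """Hypothetical background/foreground split: steady, textural sounds
    become background; short, salient events become foreground."""
    clip.layer = "background" if "rain" in clip.tags or "cafe" in clip.tags else "foreground"
    return clip

def compose_soundscape(query: str) -> dict:
    """Group retrieved clips into the two layers of a soundscape."""
    clips = [classify_layer(c) for c in retrieve_sounds(query)]
    return {
        "background": [c.title for c in clips if c.layer == "background"],
        "foreground": [c.title for c in clips if c.layer == "foreground"],
    }

if __name__ == "__main__":
    print(compose_soundscape("rain and thunder in a quiet cafe"))
```

A real system would query the freesound.org API and use trained audio classifiers rather than keyword lists, but the overall shape, a query in and a layered soundscape out, is the same.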

As for Freesound Labs, you can find them here.

Futures exhibition/festival with fish skin fashion and more at the Smithsonian (Washington, DC), Nov. 20, 2021 to July 6, 2022

Fish leather

Before getting to Futures, here’s a brief excerpt from a June 11, 2021 Smithsonian Magazine exhibition preview article by Gia Yetikyel about one of the contributors, Elisa Palomino-Perez (Note: A link has been removed),

Elisa Palomino-Perez sheepishly admits to believing she was a mermaid as a child. Growing up in Cuenca, Spain in the 1970s and ‘80s, she practiced synchronized swimming and was deeply fascinated with fish. Now, the designer’s love for shiny fish scales and majestic oceans has evolved into an empowering mission, to challenge today’s fashion industry to be more sustainable, by using fish skin as a material.

Luxury fashion is no stranger to the artist, who has worked with designers like Christian Dior, John Galliano and Moschino in her 30-year career. For five seasons in the early 2000s, Palomino-Perez had her own fashion brand, inspired by Asian culture and full of color and embroidery. It was while heading a studio for Galliano in 2002 that she first encountered fish leather: a material made when the skin of tuna, cod, carp, catfish, salmon, sturgeon, tilapia or pirarucu gets stretched, dried and tanned.

The history of using fish leather in fashion is a bit murky. The material does not preserve well in the archeological record, and it’s been often overlooked as a “poor person’s” material due to the abundance of fish as a resource. But Indigenous groups living on coasts and rivers from Alaska to Scandinavia to Asia have used fish leather for centuries. Icelandic fishing traditions can even be traced back to the ninth century. While assimilation policies, like banning native fishing rights, forced Indigenous groups to change their lifestyle, the use of fish skin is seeing a resurgence. Its rise in popularity in the world of sustainable fashion has led to an overdue reclamation of tradition for Indigenous peoples.

In 2017, Palomino-Perez embarked on a PhD in Indigenous Arctic fish skin heritage at London College of Fashion, which is a part of the University of the Arts in London (UAL), where she received her Masters of Arts in 1992. She now teaches at Central Saint Martins at UAL, while researching different ways of crafting with fish skin and working with Indigenous communities to carry on the honored tradition.

Yetikyel’s article is fascinating (apparently Nike has used fish leather in one of its sports shoes) and I encourage you to read her June 11, 2021 article, which also covers the history of fish leather use amongst indigenous peoples of the world.

I did some digging and found a few more stories about fish leather. The earlier one is a Canadian Broadcasting Corporation (CBC) November 16, 2017 online news article by Jane Adey,

Designer Arndis Johannsdottir holds up a stunning purse, decorated with shiny strips of gold and silver leather at Kirsuberjatred, an art and design store in downtown Reykjavik, Iceland.

The purse is one of many in a colourful window display that’s drawing in buyers.

Johannsdottir says customers’ eyes often widen when they discover the metallic material is fish skin. 

Johannsdottir, a fish-skin designing pioneer, first came across the product 35 years ago.

She was working as a saddle smith when a woman came into her shop with samples of fish skin her husband had tanned after the war. Hundreds of pieces had been lying in a warehouse for 40 years.

“Nobody wanted it because plastic came on the market and everybody was fond of plastic,” she said.

“After 40 years, it was still very, very strong and the colours were beautiful and … I fell in love with it immediately.”

Johannsdottir bought all the skins the woman had to offer, gave up saddle making and concentrated on fashionable fish skin.

Adey’s November 16, 2017 article goes on to mention another Icelandic fish leather business looking to make fish leather a fashion staple.

Chloe Williams’s April 28, 2020 article for Hakai Magazine explores the process of making fish leather and the new interest in making it,

Tracy Williams slaps a plastic cutting board onto the dining room table in her home in North Vancouver, British Columbia. Her friend, Janey Chang, has already laid out the materials we will need: spoons, seashells, a stone, and snack-sized ziplock bags filled with semi-frozen fish. Williams says something in Squamish and then translates for me: “You are ready to make fish skin.”

Chang peels a folded salmon skin from one of the bags and flattens it on the table. “You can really have at her,” she says, demonstrating how to use the edge of the stone to rub away every fiber of flesh. The scales on the other side of the skin will have to go, too. On a sockeye skin, they come off easily if scraped from tail to head, she adds, “like rubbing a cat backwards.” The skin must be clean, otherwise it will rot or fail to absorb tannins that will help transform it into leather.

Williams and Chang are two of a scant but growing number of people who are rediscovering the craft of making fish skin leather, and they’ve agreed to teach me their methods. The two artists have spent the past five or six years learning about the craft and tying it back to their distinct cultural perspectives. Williams, a member of the Squamish Nation—her ancestral name is Sesemiya—is exploring the craft through her Indigenous heritage. Chang, an ancestral skills teacher at a Squamish Nation school, who has also begun teaching fish skin tanning in other BC communities, is linking the craft to her Chinese ancestry.

Before the rise of manufactured fabrics, Indigenous peoples from coastal and riverine regions around the world tanned or dried fish skins and sewed them into clothing. The material is strong and water-resistant, and it was essential to survival. In Japan, the Ainu crafted salmon skin into boots, which they strapped to their feet with rope. Along the Amur River in northeastern China and Siberia, Hezhen and Nivkh peoples turned the material into coats and thread. In northern Canada, the Inuit made clothing, and in Alaska, several peoples including the Alutiiq, Athabascan, and Yup’ik used fish skins to fashion boots, mittens, containers, and parkas. In the winter, Yup’ik men never left home without qasperrluk—loose-fitting, hooded fish skin parkas—which could double as shelter in an emergency. The men would prop up the hood with an ice pick and pin down the edges to make a tent-like structure.

On a Saturday morning, I visit Aurora Skala in Saanich on Vancouver Island, British Columbia, to learn about the step after scraping and tanning: softening. Skala, an anthropologist working in language revitalization, has taken an interest in making fish skin leather in her spare time. When I arrive at her house, a salmon skin that she has tanned in an acorn infusion—a cloudy, brown liquid now resting in a jar—is stretched out on the kitchen counter, ready to be worked.

Skala dips her fingers in a jar of sunflower oil and rubs it on her hands before massaging it into the skin. The skin smells only faintly of fish; the scent reminds me of salt and smoke, though the skin has been neither salted nor smoked. “Once you start this process, you can’t stop,” she says. If the skin isn’t worked consistently, it will stiffen as it dries.

Softening the leather with oil takes about four hours, Skala says. She stretches the skin between clenched hands, pulling it in every direction to loosen the fibers while working in small amounts of oil at a time. She’ll also work her skins across other surfaces for extra softening; later, she’ll take this piece outside and rub it back and forth along a metal cable attached to a telephone pole. Her pace is steady, unhurried, soothing. Back in the day, people likely made fish skin leather alongside other chores related to gathering and processing food or fibers, she says. The skin will be done when it’s soft and no longer absorbs oil.

On to the exhibition.

Futures (November 20, 2021 to July 6, 2022 at the Smithsonian)

A February 24, 2021 Smithsonian Magazine article by Meilan Solly serves as an announcement for the Futures exhibition/festival (Note: Links have been removed),

When the Smithsonian’s Arts and Industries Building (AIB) opened to the public in 1881, observers were quick to dub the venue—then known as the National Museum—America’s “Palace of Wonders.” It was a fitting nickname: Over the next century, the site would go on to showcase such pioneering innovations as the incandescent light bulb, the steam locomotive, Charles Lindbergh’s Spirit of St. Louis and space-age rockets.

“Futures,” an ambitious, immersive experience set to open at AIB this November, will act as a “continuation of what the [space] has been meant to do” from its earliest days, says consulting curator Glenn Adamson. “It’s always been this launchpad for the Smithsonian itself,” he adds, paving the way for later museums as “a nexus between all of the different branches of the [Institution].” …

Part exhibition and part festival, “Futures”—timed to coincide with the Smithsonian’s 175th anniversary—takes its cue from the world’s fairs of the 19th and 20th centuries, which introduced attendees to the latest technological and scientific developments in awe-inspiring celebrations of human ingenuity. Sweeping in scale (the building-wide exploration spans a total of 32,000 square feet) and scope, the show is set to feature historic artifacts loaned from numerous Smithsonian museums and other institutions, large-scale installations, artworks, interactive displays and speculative designs. It will “invite all visitors to discover, debate and delight in the many possibilities for our shared future,” explains AIB director Rachel Goslins in a statement.

“Futures” is split into four thematic halls, each with its own unique approach to the coming centuries. “Futures Past” presents visions of the future imagined by prior generations, as told through objects including Alexander Graham Bell’s experimental telephone, an early android and a full-scale Buckminster Fuller geodesic dome. “In hindsight, sometimes [a prediction is] amazing,” says Adamson, who curated the history-centric section. “Sometimes it’s sort of funny. Sometimes it’s a little dismaying.”

“Futures That Work” continues to explore the theme of technological advancement, but with a focus on problem-solving rather than the lessons of the past. Climate change is at the fore of this section, with highlighted solutions ranging from Capsula Mundi’s biodegradable burial urns to sustainable bricks made out of mushrooms and purely molecular artificial spices that cut down on food waste while preserving natural resources.

“Futures That Inspire,” meanwhile, mimics AIB’s original role as a place of wonder and imagination. “If I were bringing a 7-year-old, this is probably where I would take them first,” says Adamson. “This is where you’re going to be encountering things that maybe look a bit more like science fiction”—for instance, flying cars, self-sustaining floating cities and Afrofuturist artworks.

The final exhibition hall, “Futures That Unite,” emphasizes human relationships, discussing how connections between people can produce a more equitable society. Among others, the list of featured projects includes (Im)possible Baby, a speculative design endeavor that imagines what same-sex couples’ children might look like if they shared both parents’ DNA, and Not The Only One (N’TOO), an A.I.-assisted oral history project. [all emphases mine]

I haven’t done justice to Solly’s February 24, 2021 article, which features embedded images and offers a more hopeful view of the future than is currently the fashion.

Futures asks: Would you like to plan the future?

Nate Berg’s November 22, 2021 article for Fast Company features an interactive urban planning game that’s part of the Futures exhibition/festival,

The Smithsonian Institution wants you to imagine the almost ideal city block of the future. Not the perfect block, not utopia, but the kind of urban place where you get most of what you want, and so does everybody else.

Call it urban design by compromise. With a new interactive multiplayer game, the museum is hoping to show that the urban spaces of the future can achieve mutual goals only by being flexible and open to the needs of other stakeholders.

The game is designed for three players, each in the role of either the city’s mayor, a real estate developer or an ecologist. The roles each have their own primary goals – the mayor wants a well-served populace, the developer wants to build successful projects, and the ecologist wants the urban environment to coexist with the natural environment. Each role takes turns adding to the block, either in discrete projects or by amending what another player has contributed. Options are varied, but include everything from traditional office buildings and parks to community centers and algae farms. The players each try to achieve their own goals on the block, while facing the reality that other players may push the design in unexpected directions. These tradeoffs and their impact on the block are explained by scores on four basic metrics: daylight, carbon footprint, urban density, and access to services. How each player builds onto the block can bring scores up or down.

To create the game, the Smithsonian teamed up with Autodesk, the maker of architectural design tools like AutoCAD, an industry standard. Autodesk developed a tool for AI-based generative design that offers up options for a city block’s design, using computing power to make suggestions on what could go where and how aiming to achieve one goal, like boosting residential density, might detract from or improve another set of goals, like creating open space. “Sometimes you’ll do something that you think is good but it doesn’t really help the overall score,” says Brian Pene, director of emerging technology at Autodesk. “So that’s really showing people to take these tradeoffs and try attributes other than what achieves their own goals.” The tool is meant to show not how AI can generate the perfect design, but how the differing needs of various stakeholders inevitably require some tradeoffs and compromises.
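To give a flavour of the tradeoff scoring Berg describes, with four metrics (daylight, carbon footprint, urban density, access to services) that move up or down as players add projects, here is a minimal sketch. It is not the Smithsonian/Autodesk game or its generative design engine; the project types and their score effects are numbers invented for the example.

```python
# A minimal sketch of multi-metric tradeoff scoring for a shared city block.
# Invented numbers; not the Smithsonian/Autodesk game.

METRICS = ("daylight", "carbon", "density", "services")

# Each project nudges the four metrics up or down. Tradeoffs are baked in:
# an office tower adds density and services but costs daylight and carbon.
PROJECT_EFFECTS = {
    "office_tower":     {"daylight": -2, "carbon": -2, "density": +3, "services": +1},
    "park":             {"daylight": +2, "carbon": +1, "density": -1, "services": 0},
    "community_centre": {"daylight": 0,  "carbon": -1, "density": +1, "services": +3},
    "algae_farm":       {"daylight": -1, "carbon": +3, "density": 0,  "services": 0},
}

def score_block(projects):
    """Sum each project's effect on the four metrics."""
    totals = {m: 0 for m in METRICS}
    for p in projects:
        for metric, delta in PROJECT_EFFECTS[p].items():
            totals[metric] += delta
    return totals

if __name__ == "__main__":
    block = ["office_tower", "park", "algae_farm"]
    print(score_block(block))
    # {'daylight': -1, 'carbon': 2, 'density': 2, 'services': 1}
    # The tower helped density but dragged daylight down: a tradeoff another
    # player (the ecologist, say) might try to amend on their turn.
```

The point of the game, and of the sketch, is that no single project improves everything at once; whichever role you play, you end up negotiating.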

Futures online and in person

Here are links to Futures online and information about visiting in person,

For its 175th anniversary, the Smithsonian is looking forward.

What do you think of when you think of the future? FUTURES is the first building-wide exploration of the future on the National Mall. Designed by the award-winning Rockwell Group, FUTURES spans 32,000 square feet inside the Arts + Industries Building. Now on view until July 6, 2022, FUTURES is your guide to a vast array of interactives, artworks, technologies, and ideas that are glimpses into humanity’s next chapter. You are, after all, only the latest in a long line of future makers.

Smell a molecule. Clean your clothes in a wetland. Meditate with an AI robot. Travel through space and time. Watch water being harvested from air. Become an emoji. The FUTURES is yours to decide, debate, delight. We invite you to dream big, and imagine not just one future, but many possible futures on the horizon—playful, sustainable, inclusive. In moments of great change, we dare to be hopeful. How will you create the future you want to live in?

Happy New Year!

Ai-Da (robot artist) writes and performs poem honouring Dante’s 700th anniversary

Remarkable, eh?

Who is Ai-Da?

Thank you to the contributor(s) of the Ai-Da (robot) Wikipedia entry (Note: Links have been removed),

Ai-Da was invented by gallerist Aidan Meller,[3] in collaboration with Engineered Arts, a Cornish robotics company.[4] Her drawing intelligence was developed by computer AI researchers at the University of Oxford,[5] and her drawing arm is the work of engineers based in Leeds.[4]

Ai-Da has her own website here (from the homepage),

Ai-Da is the world’s first ultra-realistic artist robot. She draws using cameras in her eyes, her AI algorithms, and her robotic arm. Created in February 2019, she had her first solo show at the University of Oxford, ‘Unsecured Futures’, where her [visual] art encouraged viewers to think about our rapidly changing world. She has since travelled and exhibited work internationally, and had her first show in a major museum, the Design Museum, in 2021. She continues to create art that challenges our notions of creativity in a post-humanist era.

Ai-Da – is it art?

The role and definition of art change over time. Ai-Da’s work is art, because it reflects the enormous integration of technology in today’s society. We recognise ‘art’ means different things to different people.

Today, a dominant opinion is that art is created by the human, for other humans. This has not always been the case. The ancient Greeks felt art and creativity came from the Gods. Inspiration was divine inspiration. Today, a dominant mind-set is that of humanism, where art is an entirely human affair, stemming from human agency. However, current thinking suggests we are edging away from humanism, into a time where machines and algorithms influence our behaviour to a point where our ‘agency’ isn’t just our own. It is starting to get outsourced to the decisions and suggestions of algorithms, and complete human autonomy starts to look less robust. Ai-Da creates art, because art no longer has to be restrained by the requirement of human agency alone.  

It seems that Ai-Da has branched out from visual art into poetry. (I wonder how many of the arts Ai-Da can produce and/or perform?)

A divine comedy? Dante and Ai-Da

The 700th anniversary of poet Dante Alighieri’s death has occasioned an exhibition, DANTE: THE INVENTION OF CELEBRITY, 17 September 2021–9 January 2022, at Oxford’s Ashmolean Museum.

Professor Gervase Rosser (University of Oxford), exhibition curator, wrote this in his September 21, 2021 exhibition essay “Dante and the Robot: An encounter at the Ashmolean”,

Ai-Da, the world’s most modern humanoid artist, is involved in an exhibition about the poet and philosopher, Dante Alighieri, writer of the Divine Comedy, whose 700th anniversary is this year. A major exhibition, ‘Dante and the Invention of Celebrity’, opens at Oxford’s Ashmolean Museum this month, and includes an intervention by this most up-to-date robot artist.

…

Honours are being paid around the world to the author of what he called a Comedy because, unlike a tragedy, it began badly but ended well. From the darkness of hell, the work sees Dante journey through purgatory, before eventually arriving at the eternal light of paradise. What hold does a poem about the spiritual redemption of humanity, written so long ago, have on us today?

One challenge to both spirit and humanity in the 21st century is the power of artificial intelligence, created and unleashed by human ingenuity.  The scientists who introduced this term, AI, in the 1950s announced that ‘every aspect of learning or any other feature of intelligence can, in principle, be so precisely described that a machine can be made to simulate it’.

Over the course of a human lifetime, that prophecy has almost been realised. Artificial intelligence has already taken the place of human thought, often in ways which are not apparent. In medicine, AI promises to become both irreplaceable and inestimable.

But to an extent which we are, perhaps, frightened to acknowledge, AI monitors our consumption patterns, our taste in everything from food to culture, our perception of ourselves, even our political views. If we want to re-orientate ourselves and take a critical view of this, before it is too late to regain control, how can we do so?

Creative fiction offers a field in which our values and aspirations can be questioned. This year has seen the publication of Klara and the Sun, by Kazuo Ishiguro, which evokes a world, not many years into the future, in which humanoid AI robots have become the domestic servants and companions of all prosperous families.

One of the book’s characters asks a fundamental question about the human heart, ‘Do you think there is such a thing? Something that makes each of us special and individual?’

Art can make two things possible: through it, artificial intelligence, which remains largely unseen, can be made visible and tangible and it can be given a prophetic voice, which we can choose to heed or ignore.

These aims have motivated the creators of Ai-Da, the artist robot which, through a series of exhibitions, is currently provoking questions around the globe (from the United Nations headquarters in Geneva to Cairo, and from the Design Museum in London [UK] to Abu Dhabi) about the nature of human creativity, originality, and authenticity.

In the Ashmolean Museum’s Gallery 8, Dante  meets artificial intelligence, in a staged encounter deliberately designed to invite reflection on what it means to see the world; on the nature of creativity; and on the value of human relationships.

The juxtaposition of AI with the Divine Comedy, in a year in which the poem is being celebrated as a supreme achievement of the human spirit, is timely. The encounter, however, is not presented as a clash of incompatible opposites, but as a conversation.

This is the spirit in which Ai-Da has been developed by her inventors, Aidan Meller and Lucy Seal, in collaboration with technical teams in Oxford University and elsewhere. Significantly, she takes her name from Ada Lovelace [emphasis mine], a mathematician and writer who was belatedly recognised as the first programmer. At the time of her early death in 1852, at the age of 36, she was considering writing a visionary kind of mathematical poetry, and wrote about her idea of ‘poetical philosophy, poetical science’.

For the Ashmolean exhibition, Ai-Da has made works in response to the Comedy. The first focuses on one of the circles of Dante’s Purgatory. Here, the souls of the envious compensate for their lives on earth, which were partially, but not irredeemably, marred by their frustrated desire for the possessions of others.

My first thought on seeing the inventor’s name, Aidan Meller, was that he named the robot after himself; I did not pick up on the Ada Lovelace connection. I appreciate how smart this is, especially as the name also references AI.

Finally, the excerpts don’t do justice to Rosser’s essay; I recommend reading it if you have the time.

True love with AI (artificial intelligence): The Nature of Things explores emotional and creative AI (long read)

The Canadian Broadcasting Corporation’s (CBC) science television series, The Nature of Things, which has been broadcast since November 1960, explored the world of emotional, empathic and creative artificial intelligence (AI) in a Friday, November 19, 2021 telecast titled, The Machine That Feels,

The Machine That Feels explores how artificial intelligence (AI) is catching up to us in ways once thought to be uniquely human: empathy, emotional intelligence and creativity.

As AI moves closer to replicating humans, it has the potential to reshape every aspect of our world – but most of us are unaware of what looms on the horizon.

Scientists see AI technology as an opportunity to address inequities and make a better, more connected world. But it also has the capacity to do the opposite: to stoke division and inequality and disconnect us from fellow humans. The Machine That Feels, from The Nature of Things, shows viewers what they need to know about a field that is advancing at a dizzying pace, often away from the public eye.

What does it mean when AI makes art? Can AI interpret and understand human emotions? How is it possible that AI creates sophisticated neural networks that mimic the human brain? The Machine That Feels investigates these questions, and more.

In Vienna, composer Walter Werzowa has — with the help of AI — completed Beethoven’s previously unfinished 10th symphony. By feeding data about Beethoven, his music, his style and the original scribbles on the 10th symphony into an algorithm, AI has created an entirely new piece of art.

In Atlanta, Dr. Ayanna Howard and her robotics lab at Georgia Tech are teaching robots how to interpret human emotions. Where others see problems, Howard sees opportunity: how AI can help fill gaps in education and health care systems. She believes we need a fundamental shift in how we perceive robots: let’s get humans and robots to work together to help others.

At Tufts University in Boston, a new type of biological robot has been created: the xenobot. The size of a grain of sand, xenobots are grown from frog heart and skin cells, and combined with the “mind” of a computer. Programmed with a specific task, they can move together to complete it. In the future, they could be used for environmental cleanup, digesting microplastics and targeted drug delivery (like releasing chemotherapy compounds directly into tumours).

The film includes interviews with global leaders, commentators and innovators from the AI field, including Geoff Hinton, Yoshua Bengio, Ray Kurzweil and Douglas Coupland, who highlight some of the innovative and cutting-edge AI technologies that are changing our world.

The Machine That Feels focuses on one central question: in the flourishing age of artificial intelligence, what does it mean to be human?

I’ll get back to that last bit, “… what does it mean to be human?” later.

There’s a lot to appreciate in this 44 min. programme. As you’d expect, there was a significant chunk of time devoted to research being done in the US, but Poland and Japan also featured, and Canadian content was substantive. A number of tricky topics were covered and transitions from one topic to the next were smooth.

In the end credits, I counted over 40 source materials from Getty Images, Google Canada, Gatebox, amongst others. It would have been interesting to find out which segments were produced by CBC.

David Suzuki’s (programme host) script was well written and his narration was enjoyable, engaging, and non-intrusive. That last quality is not always true of CBC hosts who can fall into the trap of overdramatizing the text.

Drilling down

I have followed artificial intelligence stories in a passive way (i.e., I don’t seek them out) for many years. Even so, there was a lot of material in the programme that was new to me.

For example, there was this love story (from the ‘I love her and see her as a real woman.’ Meet a man who ‘married’ an artificial intelligence hologram webpage on the CBC),

In The Machine That Feels, a documentary from The Nature of Things, we meet Kondo Akihiko, a Tokyo resident who “married” a hologram of virtual pop singer Hatsune Miku using a certificate issued by Gatebox (the marriage isn’t recognized by the state, and Gatebox acknowledges the union goes “beyond dimensions”).

I found Akihiko to be quite moving when he described his relationship, which is not unique. It seems some 4,000 men have ‘wed’ their digital companions; you can read about that and more on the ‘I love her and see her as a real woman.’ Meet a man who ‘married’ an artificial intelligence hologram webpage.

What does it mean to be human?

Overall, this Nature of Things episode embraces certainty, which means the question of what it means to be human is referenced rather than seriously discussed. It is an unanswerable philosophical question, and the programme is ill-equipped to address it, especially since none of the commentators are philosophers or seem inclined to philosophize.

The programme presents AI as a juggernaut. Briefly mentioned is the notion that we need to make some decisions about how our juggernaut is developed and utilized. No one discusses how we go about making changes to systems that are already making critical decisions for us. (For more about AI and decision-making, see my February 28, 2017 posting and scroll down to the ‘Algorithms and big data’ subhead for Cathy O’Neil’s description of how important decisions that affect us are being made by AI systems. She is the author of the 2016 book, ‘Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy’; still a timely read.)

In fact, the programme’s tone is mostly one of breathless excitement. A few misgivings are expressed, e.g., one woman who has an artificial ‘texting friend’ (Replika, a chatbot app) noted that it can ‘get into your head’. She had a chat where her ‘friend’ told her that all of a woman’s worth is based on her body; she pushed back but intimated that someone more vulnerable could find that messaging difficult to deal with.

The sequence featuring Akihiko and his hologram ‘wife’ is followed by one suggesting that people might become more isolated and emotionally stunted as they interact with artificial friends. It should be noted, Akihiko’s wife is described as ‘perfect’. I gather perfection means that you are always understanding and have no needs of your own. She also seems to be about 18″ high.

Akihiko has obviously been asked about his ‘wife’ before as his answers are ready. They boil down to “there are many types of relationships” and there’s nothing wrong with that. It’s an intriguing thought which is not explored.

Also unexplored, these relationships could be said to resemble slavery. After all, you pay for these friends over which you have control. But perhaps that’s alright since AI friends don’t have consciousness. Or do they? In addition to not being able to answer the question, “what is it to be human?” we still can’t answer the question, “what is consciousness?”

AI and creativity

The Nature of Things team works fast. ‘Beethoven X – The AI Project’ had its first performance on October 9, 2021. (See my October 1, 2021 post ‘Finishing Beethoven’s unfinished 10th Symphony’ for more information from Ahmed Elgammal’s (Director of the Art & AI Lab at Rutgers University) technical perspective on the project.

Briefly, Beethoven died before completing his 10th symphony, and a number of computer scientists, musicologists, AI experts, and musicians collaborated to finish the symphony.)

The one listener (Felix Mayer, music professor at the Technical University Munich) in the hall during a performance doesn’t consider the work to be a piece of music. He does have a point. Beethoven left some notes, but this ’10th’ is at least partly mathematical guesswork: a set of probabilities where an algorithm chooses which note comes next based on likelihood.
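To make that last point concrete, here is a minimal sketch of what ‘choosing the next note based on probability’ can look like: a first-order Markov chain trained on a toy melody. The actual Beethoven X project used far more elaborate machine learning models; this is only an illustration of probabilistic note selection.

```python
# A minimal sketch of probabilistic note selection: a first-order Markov chain
# trained on a toy melody. Illustrative only; the Beethoven X project relied on
# far more sophisticated models.

import random
from collections import defaultdict

def train(melody):
    """Count which note tends to follow which."""
    transitions = defaultdict(list)
    for current, following in zip(melody, melody[1:]):
        transitions[current].append(following)
    return transitions

def generate(transitions, start, length):
    """Pick each next note at random, weighted by the counts seen in training."""
    note, output = start, [start]
    for _ in range(length - 1):
        candidates = transitions.get(note)
        if not candidates:              # dead end: fall back to any known note
            candidates = list(transitions.keys())
        note = random.choice(candidates)
        output.append(note)
    return output

if __name__ == "__main__":
    opening = ["G", "G", "G", "Eb", "F", "F", "F", "D"]   # a famous four-note motif, twice
    model = train(opening)
    print(generate(model, "G", 8))
```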

There was another artist also represented in the programme. Puzzlingly, it was the still living Douglas Coupland. In my opinion, he’s better known as a visual artist than a writer (his Wikipedia entry lists him as a novelist first) but he has succeeded greatly in both fields.

What makes his inclusion in the Nature of Things ‘The Machine That Feels’ programme puzzling is that it’s not clear how he worked with artificial intelligence in a collaborative fashion. Here’s a description of Coupland’s ‘AI’ project from a June 29, 2021 posting by Chris Henry on the Google Outreach blog (Note: Links have been removed),

… when the opportunity presented itself to explore how artificial intelligence (AI) inspires artistic expression — with the help of internationally renowned Canadian artist Douglas Coupland — the Google Research team jumped on it. This collaboration, with the support of Google Arts & Culture, culminated in a project called Slogans for the Class of 2030, which spotlights the experiences of the first generation of young people whose lives are fully intertwined with the existence of AI. 

This collaboration was brought to life by first introducing Coupland’s written work to a machine learning language model. Machine learning is a form of AI that provides computer systems the ability to automatically learn from data. In this case, Google research scientists tuned a machine learning algorithm with Coupland’s 30-year body of written work — more than a million words — so it would familiarize itself with the author’s unique style of writing. From there, curated general-public social media posts on selected topics were added to teach the algorithm how to craft short-form, topical statements. [emphases mine]

Once the algorithm was trained, the next step was to process and reassemble suggestions of text for Coupland to use as inspiration to create twenty-five Slogans for the Class of 2030. [emphasis mine]

“I would comb through ‘data dumps’ where characters from one novel were speaking with those in other novels in ways that they might actually do. It felt like I was encountering a parallel universe Doug,” Coupland says. “And from these outputs, the statements you see here in this project appeared like gems. Did I write them? Yes. No. Could they have existed without me? No.” [emphases mine]

So, the algorithms crunched through Coupland’s written work and social media texts to produce slogans, which Coupland then ‘combed through’ to pick out 25 slogans for the ‘Slogans For The Class of 2030’ project. (Note: In the programme, he says that he started a sentence and then the AI system completed that sentence with material gleaned from his own writings, which brings to mind Exquisite Corpse, a collaborative game for writers originated by the Surrealists, possibly as early as 1918.)
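For a sense of what ‘the AI system completed that sentence’ looks like in practice, here is a minimal sketch using an off-the-shelf language model through the Hugging Face transformers library. It is emphatically not the Google/Coupland system, whose model was tuned on Coupland’s own million-word corpus plus curated social media posts; the prompt and settings below are simply a generic demonstration of prompt completion.

```python
# A minimal sketch of prompt completion with an off-the-shelf language model.
# This is NOT the Google/Coupland setup (their model was tuned on Coupland's
# own writing); it only shows the general "start a sentence, let the model
# finish it" interaction described above.

from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The first generation to grow up with AI will"
inputs = tokenizer(prompt, return_tensors="pt")

# Sample a handful of short continuations; a human (the artist) would then
# comb through these outputs and keep the ones that work as slogans.
outputs = model.generate(
    **inputs,
    max_new_tokens=20,
    do_sample=True,
    num_return_sequences=3,
    pad_token_id=tokenizer.eos_token_id,
)
for seq in outputs:
    print(tokenizer.decode(seq, skip_special_tokens=True))
```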

The ‘slogans’ project also reminds me of William S. Burroughs and the cut-up technique used in his work. From the William S. Burroughs Cut-up technique webpage on the Language is a Virus website (Thank you to Lake Rain Vajra for a very interesting website),

The cutup is a mechanical method of juxtaposition in which Burroughs literally cuts up passages of prose by himself and other writers and then pastes them back together at random. This literary version of the collage technique is also supplemented by literary use of other media. Burroughs transcribes taped cutups (several tapes spliced into each other), film cutups (montage), and mixed media experiments (results of combining tapes with television, movies, or actual events). Thus Burroughs’s use of cutups develops his juxtaposition technique to its logical conclusion as an experimental prose method, and he also makes use of all contemporary media, expanding his use of popular culture.

[Burroughs says] “All writing is in fact cut-ups. A collage of words read heard overheard. What else? Use of scissors renders the process explicit and subject to extension and variation. Clear classical prose can be composed entirely of rearranged cut-ups. Cutting and rearranging a page of written words introduces a new dimension into writing enabling the writer to turn images in cinematic variation. Images shift sense under the scissors smell images to sound sight to sound to kinesthetic. This is where Rimbaud was going with his color of vowels. And his “systematic derangement of the senses.” The place of mescaline hallucination: seeing colors tasting sounds smelling forms.

“The cut-ups can be applied to other fields than writing. Dr Neumann [emphasis mine] in his Theory of Games and Economic behavior introduces the cut-up method of random action into game and military strategy: assume that the worst has happened and act accordingly. … The cut-up method could be used to advantage in processing scientific data. [emphasis mine] How many discoveries have been made by accident? We cannot produce accidents to order. The cut-ups could add new dimension to films. Cut gambling scene in with a thousand gambling scenes all times and places. Cut back. Cut streets of the world. Cut and rearrange the word and image in films. There is no reason to accept a second-rate product when you can have the best. And the best is there for all. Poetry is for everyone . . .”

First, John von Neumann (1902 – 57) is a very important figure in the history of computing. From a February 25, 2017 John von Neumann and Modern Computer Architecture essay on the ncLab website, “… he invented the computer architecture that we use today.”

Here’s Burroughs on the history of writers and cutups (thank you to QUEDEAR for posting this clip),

You can hear Burroughs talk about the technique and how he started using it in 1959.

There is no explanation from Coupland as to how his project differs substantively from Burroughs’ cut-ups or a session of Exquisite Corpse. The use of a computer programme to crunch through data and give output doesn’t seem all that exciting. *(More about computers and chatbots at end of posting).* It’s hard to know if this was an interview situation where he wasn’t asked the question or if the editors decided against including it.
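To underline how mechanical a basic cut-up is, here is a toy version in code: chop two passages into word-length fragments, shuffle them, and paste them back together. It illustrates the general idea only; it is not a claim about Burroughs’ actual practice or about what Google’s system did with Coupland’s texts.

```python
# A toy cut-up: slice two passages into fragments, shuffle them together,
# and paste the result back into "prose". Burroughs did this with scissors.

import random

def cut_up(*texts, fragment_words=4, seed=None):
    """Split each text into fragments of a few words, shuffle, and rejoin."""
    rng = random.Random(seed)
    fragments = []
    for text in texts:
        words = text.split()
        fragments += [
            " ".join(words[i:i + fragment_words])
            for i in range(0, len(words), fragment_words)
        ]
    rng.shuffle(fragments)
    return " / ".join(fragments)

if __name__ == "__main__":
    a = "All writing is in fact cut-ups a collage of words read heard overheard"
    b = "Images shift sense under the scissors smell images to sound sight to kinesthetic"
    print(cut_up(a, b, seed=1))
```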

Kazuo Ishiguro?

Given that Ishiguro’s 2021 book (Klara and the Sun) is focused on an artificial friend and raises the question of ‘what does it mean to be human’, as well as the related question, ‘what is the nature of consciousness’, it would have been interesting to hear from him. He spent a fair amount of time looking into research on machine learning in preparation for his book. Maybe he was too busy?

AI and emotions

The work being done by Georgia Tech’s Dr. Ayanna Howard and her robotics lab is fascinating. They are teaching robots how to interpret human emotions. The segment which features researchers teaching and interacting with robots, Pepper and Salt, also touches on AI and bias.

Watching two African American researchers talk about the ways in which AI is unable to read emotions on ‘black’ faces as accurately as ‘white’ faces is quite compelling. It also reinforces the uneasiness you might feel after the ‘Replika’ segment where an artificial friend informs a woman that her only worth is her body.

(Interestingly, Pepper and Salt are produced by Softbank Robotics, part of Softbank, a multinational Japanese conglomerate, [see a June 28, 2021 article by Ian Carlos Campbell for The Verge] whose entire management team is male according to their About page.)

While Howard is very hopeful about the possibilities of a machine that can read emotions, she doesn’t explore (on camera) any means for pushing back against bias other than training AI by using more black faces to help them learn. Perhaps more representative management and coding teams in technology companies?

While the programme largely focused on AI as an algorithm on a computer, robots can be enabled by AI (as can be seen in the segment with Dr. Howard).

My February 14, 2019 posting features research with a completely different approach to emotions and machines,

“I’ve always felt that robots shouldn’t just be modeled after humans [emphasis mine] or be copies of humans,” he [Guy Hoffman, assistant professor at Cornell University] said. “We have a lot of interesting relationships with other species. Robots could be thought of as one of those ‘other species,’ not trying to copy what we do but interacting with us with their own language, tapping into our own instincts.”

[from a July 16, 2018 Cornell University news release on EurekAlert]

This brings the question back to, what is consciousness?

What scientists aren’t taught

Dr. Howard notes that scientists are not taught to consider the implications of their work. Her comment reminded me of a question I was asked many years ago after a presentation; it concerned whether or not science had any morality. (I said no.)

My reply angered an audience member (a visual artist who was working with scientists at the time) as she took it personally and started defending scientists as good people who care and have morals and values. She failed to understand that the way in which we teach science conforms to a notion that somewhere there are scientific facts which are neutral and objective. Society and its values are irrelevant in the face of the larger ‘scientific truth’ and, as a consequence, you don’t need to teach or discuss how your values or morals affect that truth or what the social implications of your work might be.

Science is practiced without much, if any, thought to values. By contrast, there is the medical injunction, “Do no harm,” which suggests to me that someone recognized competing values, e.g., if your important and worthwhile research is harming people, you should ‘do no harm’.

The experts, the connections, and the Canadian content

It’s been a while since I’ve seen Ray Kurzweil mentioned but he seems to be getting more attention these days. (See this November 16, 2021 posting by Jonny Thomson titled, “The Singularity: When will we all become super-humans? Are we really only a moment away from “The Singularity,” a technological epoch that will usher in a new era in human evolution?” on The Big Think for more). Note: I will have a little more about evolution later in this post.

Interestingly, Kurzweil is employed by Google these days (see his Wikipedia entry, the column to the right). So is Geoffrey Hinton, another one of the experts in the programme (see Hinton’s Wikipedia entry, the column to the right, under Institutions).

I’m not sure about Yoshua Bengio’s relationship with Google, but he’s a professor at the Université de Montréal, and he’s the Scientific Director for Mila (Quebec’s Artificial Intelligence research institute) and IVADO (Institut de valorisation des données). Note: IVADO is not particularly relevant to what’s being discussed in this post.

As for Mila, the Canada Google blog in a November 21, 2016 posting notes a $4.5M grant to the institution,

Google invests $4.5 Million in Montreal AI Research

A new grant from Google for the Montreal Institute for Learning Algorithms (MILA) will fund seven faculty across a number of Montreal institutions and will help tackle some of the biggest challenges in machine learning and AI, including applications in the realm of systems that can understand and generate natural language. In other words, better understand a fan’s enthusiasm for Les Canadien [sic].

Google is expanding its academic support of deep learning at MILA, renewing Yoshua Bengio’s Focused Research Award and offering Focused Research Awards to MILA faculty at University of Montreal and McGill University:

Google reaffirmed their commitment to Mila in 2020 with a grant worth almost $4M (from a November 13, 2020 posting on the Mila website, Note: A link has been removed),

Google Canada announced today [November 13, 2020] that it will be renewing its funding of Mila – Quebec Artificial Intelligence Institute, with a generous pledge of nearly $4M over a three-year period. Google previously invested $4.5M US in 2016, enabling Mila to grow from 25 to 519 researchers.

In a piece written for Google’s Official Canada Blog, Yoshua Bengio, Mila Scientific Director, says that this year marked a “watershed moment for the Canadian AI community,” as the COVID-19 pandemic created unprecedented challenges that demanded rapid innovation and increased interdisciplinary collaboration between researchers in Canada and around the world.

COVID-19 has changed the world forever and many industries, from healthcare to retail, will need to adapt to thrive in our ‘new normal.’ As we look to the future and how priorities will shift, it is clear that AI is no longer an emerging technology but a useful tool that can serve to solve world problems. Google Canada recognizes not only this opportunity but the important task at hand and I’m thrilled they have reconfirmed their support of Mila with an additional $3.95 million funding grant until 22.

– Yoshua Bengio, for Google’s Official Canada Blog

Interesting, eh? Of course, Douglas Coupland is working with Google, presumably for money, and that would connect over 50% of the Canadian content (Douglas Coupland, Yoshua Bengio, and Geoffrey Hinton; Kurzweil is an American) in the programme to Google.

My hat’s off to Google’s marketing communications and public relations teams.

Anthony Morgan of Science Everywhere also provided some Canadian content. His LinkedIn profile indicates that he’s working on a PhD in molecular science, which is described this way, “My work explores the characteristics of learning environments, that support critical thinking and the relationship between critical thinking and wisdom.”

Morgan is also the founder and creative director of Science Everywhere, from his LinkedIn profile, “An events & media company supporting knowledge mobilization, community engagement, entrepreneurship and critical thinking. We build social tools for better thinking.”

There is this from his LinkedIn profile,

I develop, create and host engaging live experiences & media to foster critical thinking.

I’ve spent my 15+ years studying and working in psychology and science communication, thinking deeply about the most common individual and societal barriers to critical thinking. As an entrepreneur, I lead a team to create, develop and deploy cultural tools designed to address those barriers. As a researcher I study what we can do to reduce polarization around science.

There’s a lot more to Morgan (do look him up; he has connections to the CBC and other media outlets). The difficulty is: why was he chosen to talk about artificial intelligence and emotions and creativity when he doesn’t seem to know much about the topic? He does mention GPT-3, an AI language model. He seems to be acting as an advocate for AI although he offers this bit of almost cautionary wisdom, “… algorithms are sets of instructions.” (You can find out more about GPT-3 in my April 27, 2021 posting. There’s also this November 26, 2021 posting [The Inherent Limitations of GPT-3] by Andrey Kurenkov, a PhD student with the Stanford [University] Vision and Learning Lab.)

Most of the cautionary commentary comes from Luke Stark, assistant professor at Western [Ontario] University’s Faculty of Information and Media Studies. He’s the one who mentions stunted emotional growth.

Before moving on, there is another set of connections through the Pan-Canadian Artificial Intelligence Strategy, a Canadian government science funding initiative announced in the 2017 federal budget. The funds allocated to the strategy are administered by the Canadian Institute for Advanced Research (CIFAR). Yoshua Bengio through Mila is associated with the strategy and CIFAR, as is Geoffrey Hinton through his position as Chief Scientific Advisor for the Vector Institute.

Evolution

Getting back to “The Singularity: When will we all become super-humans? Are we really only a moment away from “The Singularity,” a technological epoch that will usher in a new era in human evolution?” Xenobots point in a disconcerting (for some of us) evolutionary direction.

I featured the work, which is being done at Tufts University in the US, in my June 21, 2021 posting, which includes an embedded video,

From a March 31, 2021 news item on ScienceDaily,

Last year, a team of biologists and computer scientists from Tufts University and the University of Vermont (UVM) created novel, tiny self-healing biological machines from frog cells called “Xenobots” that could move around, push a payload, and even exhibit collective behavior in the presence of a swarm of other Xenobots.

Get ready for Xenobots 2.0.

Also from an excerpt in the posting, the team has “created life forms that self-assemble a body from single cells, do not require muscle cells to move, and even demonstrate the capability of recordable memory.”

Memory is key to intelligence, and this work introduces the notion of ‘living’ robots, which leads to questioning what constitutes life. ‘The Machine That Feels’ is already grappling with far too many questions to address this development, but introducing the research here might have laid the groundwork for the next episode, The New Human, telecast on November 26, 2021,

While no one can be certain what will happen, evolutionary biologists and statisticians are observing trends that could mean our future feet only have four toes (so long, pinky toe) or our faces may have new combinations of features. The new humans might be much taller than their parents or grandparents, or have darker hair and eyes.

And while evolution takes a lot of time, we might not have to wait too long for a new version of ourselves.

Technology is redesigning the way we look and function — at a much faster pace than evolution. We are merging with technology more than ever before: our bodies may now have implanted chips, smart limbs, exoskeletons and 3D-printed organs. A revolutionary gene editing technique has given us the power to take evolution into our own hands and alter our own DNA. How long will it be before we are designing our children?

Although the story about the xenobots doesn’t say so, we could also take the evolution of another species into our hands.

David Suzuki, where are you?

Our programme host, David Suzuki, surprised me. I thought that as an environmentalist he’d point out that the huge amounts of computing power needed for artificial intelligence, as mentioned in the programme, constitute an environmental issue. I also would have expected that a geneticist like Suzuki might have some concerns with regard to xenobots, but perhaps that’s being saved for the next episode (The New Human) of The Nature of Things.

Artificial stupidity

Thanks to Will Knight for introducing me to the term ‘artificial stupidity’. Knight, a senior writer, covers artificial intelligence for WIRED magazine. According to the term’s Wikipedia entry,

Artificial stupidity is commonly used as a humorous opposite of the term artificial intelligence (AI), often as a derogatory reference to the inability of AI technology to adequately perform its tasks.[1] However, within the field of computer science, artificial stupidity is also used to refer to a technique of “dumbing down” computer programs in order to deliberately introduce errors in their responses.

Knight was using the term in its humorous, derogatory form.
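As an aside, the ‘dumbing down’ sense of the term is a genuine technique, most visibly in games, where a flawless opponent is no fun to play against. Here is a minimal sketch of the idea; the moves and their scores are invented for the example.

```python
# A minimal sketch of deliberate "artificial stupidity": with some probability,
# the agent discards the best move and plays a random one instead, so a
# flawless program becomes a beatable (and more entertaining) opponent.

import random

def choose_move(scored_moves, blunder_rate=0.25, rng=random):
    """scored_moves: dict of move -> estimated value (higher is better)."""
    if rng.random() < blunder_rate:
        return rng.choice(list(scored_moves))          # deliberate mistake
    return max(scored_moves, key=scored_moves.get)     # best available move

if __name__ == "__main__":
    moves = {"block opponent": 0.9, "take centre": 0.7, "random corner": 0.2}
    print([choose_move(moves) for _ in range(5)])
```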

Finally

The episode certainly got me thinking, if not quite in the way the producers might have hoped. ‘The Machine That Feels’ is a glossy, pretty well researched piece of infotainment.

To be blunt, I like and have no problems with infotainment but it can be seductive. I found it easier to remember the artificial friends, wife, xenobots, and symphony than the critiques and concerns.

Hopefully, ‘The Machine That Feels’ stimulates more interest in some very important topics. If you missed the telecast, you can catch the episode here.

For anyone curious about predictive policing, which was mentioned in the Ayanna Howard segment, see my November 23, 2017 posting about Vancouver’s plunge into AI and car theft.

*ETA December 6, 2021: One of the first ‘chatterbots’ was ELIZA, a computer programme developed from 1964 to 1966. The most famous ELIZA script was DOCTOR, in which the programme simulated a therapist. Many early users believed ELIZA understood and could respond as a human would, despite the insistence of Joseph Weizenbaum (the programme’s creator) otherwise.

An algorithm for modern quilting

Caption: Each of the blocks in this quilt was designed using an algorithm-based tool developed by Stanford researchers. Credit: Mackenzie Leake

I love the colours. This research into quilting and artificial intelligence (AI) was presented at SIGGRAPH 2021 in August. (SIGGRAPH is also known as ACM SIGGRAPH, or the ‘Association for Computing Machinery’s Special Interest Group on Computer Graphics and Interactive Techniques’.)

A June 3, 2021 news item on ScienceDaily announced the presentation,

Stanford University computer science graduate student Mackenzie Leake has been quilting since age 10, but she never imagined the craft would be the focus of her doctoral dissertation. Included in that work is new prototype software that can facilitate pattern-making for a form of quilting called foundation paper piecing, which involves using a backing made of foundation paper to lay out and sew a quilted design.

Developing a foundation paper piece quilt pattern — which looks similar to a paint-by-numbers outline — is often non-intuitive. There are few formal guidelines for patterning and those that do exist are insufficient to assure a successful result.

“Quilting has this rich tradition and people make these very personal, cherished heirlooms but paper piece quilting often requires that people work from patterns that other people designed,” said Leake, who is a member of the lab of Maneesh Agrawala, the Forest Baskett Professor of Computer Science and director of the Brown Institute for Media Innovation at Stanford. “So, we wanted to produce a digital tool that lets people design the patterns that they want to design without having to think through all of the geometry, ordering and constraints.”

A paper describing this work is published and will be presented at the computer graphics conference SIGGRAPH 2021 in August.

A June 2, 2021 Stanford University news release (also on EurekAlert), which originated the news item, provides more detail,

Respecting the craft

In describing the allure of paper piece quilts, Leake cites the modern aesthetic and high level of control and precision. The seams of the quilt are sewn through the paper pattern and, as the seaming process proceeds, the individual pieces of fabric are flipped over to form the final design. All of this “sew and flip” action means the pattern must be produced in a careful order.

Poorly executed patterns can lead to loose pieces, holes, misplaced seams and designs that are simply impossible to complete. When quilters create their own paper piecing designs, figuring out the order of the seams can take considerable time – and still lead to unsatisfactory results.

“The biggest challenge that we’re tackling is letting people focus on the creative part and offload the mental energy of figuring out whether they can use this technique or not,” said Leake, who is lead author of the SIGGRAPH paper. “It’s important to me that we’re really aware and respectful of the way that people like to create and that we aren’t over-automating that process.”

This isn’t Leake’s first foray into computer-aided quilting. She previously designed a tool for improvisational quilting, which she presented [PatchProv: Supporting Improvisational Design Practices for Modern Quilting by Mackenzie Leake, Frances Lai, Tovi Grossman, Daniel Wigdor, and Ben Lafreniere] at the human-computer interaction conference CHI in May [2021]. [Note: Links to the May 2021 conference and paper added by me.]

Quilting theory

Developing the algorithm at the heart of this latest quilting software required a substantial theoretical foundation. With few existing guidelines to go on, the researchers had to first gain a more formal understanding of what makes a quilt paper piece-able, and then represent that mathematically.

They eventually found what they needed in a particular graph structure, called a hypergraph. While so-called “simple” graphs can only connect data points by lines, a hypergraph can accommodate overlapping relationships between many data points. (A Venn diagram is a type of hypergraph.) The researchers found that a pattern will be paper piece-able if it can be depicted by a hypergraph whose edges can be removed one at a time in a specific order – which would correspond to how the seams are sewn in the pattern.

The prototype software allows users to sketch out a design and the underlying hypergraph-based algorithm determines what paper foundation patterns could make it possible – if any. Many designs result in multiple pattern options and users can adjust their sketch until they get a pattern they like. The researchers hope to make a version of their software publicly available this summer.

“I didn’t expect to be writing my computer science dissertation on quilting when I started,” said Leake. “But I found this really rich space of problems involving design and computation and traditional crafts, so there have been lots of different pieces we’ve been able to pull off and examine in that space.”

###

Researchers from University of California, Berkeley and Cornell University are co-authors of this paper. Agrawala is also an affiliate of the Institute for Human-Centered Artificial Intelligence (HAI).

An abstract for the paper “A Mathematical Foundation for Foundation Paper Pieceable Quilts” by Mackenzie Leake, Gilbert Bernstein, Abe Davis and Maneesh Agrawala can be found here along with links to a PDF of the full paper and video on YouTube.
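
For readers who like to see an idea in code, here’s a minimal sketch (in Python, my own illustration rather than the Stanford team’s software) of the peeling test described above: treat each seam as a hyperedge over the fabric pieces it joins, then try to remove the seams one at a time. The ‘on_the_outside’ removability rule is a simplified stand-in for the paper’s actual criterion.

from typing import Callable, FrozenSet, List, Optional, Set

Hyperedge = FrozenSet[str]  # a seam, identified by the fabric pieces it joins

def find_removal_order(
    hyperedges: List[Hyperedge],
    removable: Callable[[Hyperedge, List[Hyperedge]], bool],
) -> Optional[List[Hyperedge]]:
    """Greedily peel off hyperedges (seams) one at a time.

    Returns an order in which every seam can be removed (the reverse of a
    feasible sewing order), or None if the peeling gets stuck, which here
    stands in for 'not paper pieceable'."""
    remaining = list(hyperedges)
    order: List[Hyperedge] = []
    while remaining:
        candidate = next((e for e in remaining if removable(e, remaining)), None)
        if candidate is None:
            return None  # no seam can be removed: toy test says not pieceable
        remaining.remove(candidate)
        order.append(candidate)
    return order

def on_the_outside(edge: Hyperedge, remaining: List[Hyperedge]) -> bool:
    """Toy removability rule (an assumption, not the paper's criterion):
    a seam may be removed if it touches at least one fabric piece that no
    other remaining seam touches, i.e. it lies on the outside of the design."""
    others: Set[str] = set()
    for e in remaining:
        if e != edge:
            others |= e
    return any(piece not in others for piece in edge)

if __name__ == "__main__":
    seams = [frozenset({"A", "B"}), frozenset({"A", "B", "C"}), frozenset({"C", "D"})]
    print(find_removal_order(seams, on_the_outside))

If the peeling gets stuck, this toy version simply gives up; as the news release notes, the real algorithm can surface several valid patterns for one sketch so the quilter can choose among them.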

Afterthought: I noticed that all of the co-authors for the May 2021 paper are from the University of Toronto and that most of them, including Mackenzie Leake, are associated with that university’s Chatham Labs.

Finishing Beethoven’s unfinished 10th Symphony

Caption: Throughout the project, Beethoven’s genius loomed. Credit: Circe Denyer

This is an artificial intelligence (AI) story set to music. Professor Ahmed Elgammal (director of the Art & AI Lab at Rutgers University in New Jersey, US) has a September 24, 2021 essay posted on The Conversation (and, later, in the Smithsonian Magazine online) describing the AI project and the upcoming album release and performance (Note: A link has been removed),

When Ludwig van Beethoven died in 1827, he was three years removed from the completion of his Ninth Symphony, a work heralded by many as his magnum opus. He had started work on his 10th Symphony but, due to deteriorating health, wasn’t able to make much headway: All he left behind were some musical sketches.

A full recording of Beethoven’s 10th Symphony is set to be released on Oct. 9, 2021, the same day as the world premiere performance scheduled to take place in Bonn, Germany – the culmination of a two-year-plus effort.

These excerpts from Elgammal’s September 24, 2021 essay on The Conversation provide a summarized view of events. By the way, this isn’t the first time an attempt has been made to finish Beethoven’s 10th Symphony (Note: Links have been removed),

Around 1817, the Royal Philharmonic Society in London commissioned Beethoven to write his Ninth and 10th symphonies. Written for an orchestra, symphonies often contain four movements: the first is performed at a fast tempo, the second at a slower one, the third at a medium or fast tempo, and the last at a fast tempo.

Beethoven completed his Ninth Symphony in 1824, which concludes with the timeless “Ode to Joy.”

But when it came to the 10th Symphony, Beethoven didn’t leave much behind, other than some musical notes and a handful of ideas he had jotted down.

There have been some past attempts to reconstruct parts of Beethoven’s 10th Symphony. Most famously, in 1988, musicologist Barry Cooper ventured to complete the first and second movements. He wove together 250 bars of music from the sketches to create what was, in his view, a production of the first movement that was faithful to Beethoven’s vision.

Yet the sparseness of Beethoven’s sketches made it impossible for symphony experts to go beyond that first movement.

In early 2019, Dr. Matthias Röder, the director of the Karajan Institute, an organization in Salzburg, Austria, that promotes music technology, contacted me. He explained that he was putting together a team to complete Beethoven’s 10th Symphony in celebration of the composer’s 250th birthday. Aware of my work on AI-generated art, he wanted to know if AI would be able to help fill in the blanks left by Beethoven.

Röder then compiled a team that included Austrian composer Walter Werzowa. Famous for writing Intel’s signature bong jingle, Werzowa was tasked with putting together a new kind of composition that would integrate what Beethoven left behind with what the AI would generate. Mark Gotham, a computational music expert, led the effort to transcribe Beethoven’s sketches and process his entire body of work so the AI could be properly trained.

The team also included Robert Levin, a musicologist at Harvard University who also happens to be an incredible pianist. Levin had previously finished a number of incomplete 18th-century works by Mozart and Johann Sebastian Bach.

… We didn’t have a machine that we could feed sketches to, push a button and have it spit out a symphony. Most AI available at the time couldn’t continue an uncompleted piece of music beyond a few additional seconds.

We would need to push the boundaries of what creative AI could do by teaching the machine Beethoven’s creative process – how he would take a few bars of music and painstakingly develop them into stirring symphonies, quartets and sonatas.

Here’s Elgammal’s description of the difficulties from an AI perspective, from the September 24, 2021 essay (Note: Links have been removed),

First, and most fundamentally, we needed to figure out how to take a short phrase, or even just a motif, and use it to develop a longer, more complicated musical structure, just as Beethoven would have done. For example, the machine had to learn how Beethoven constructed the Fifth Symphony out of a basic four-note motif.

Next, because the continuation of a phrase also needs to follow a certain musical form, whether it’s a scherzo, trio or fugue, the AI needed to learn Beethoven’s process for developing these forms.

The to-do list grew: We had to teach the AI how to take a melodic line and harmonize it. The AI needed to learn how to bridge two sections of music together. And we realized the AI had to be able to compose a coda, which is a segment that brings a section of a piece of music to its conclusion.

Finally, once we had a full composition, the AI was going to have to figure out how to orchestrate it, which involves assigning different instruments for different parts.

And it had to pull off these tasks in the way Beethoven might do so.
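
Just to make the first item on that to-do list concrete, here’s a toy sketch in Python of motif continuation using a simple Markov chain over pitches. It is emphatically not the team’s method (they trained far more capable models on transcriptions of Beethoven’s entire body of work); the tiny ‘corpus’ and note names below are invented purely for illustration.

import random
from collections import defaultdict

def train_transitions(melodies):
    """Count pitch-to-pitch transitions across a (tiny, invented) corpus."""
    counts = defaultdict(lambda: defaultdict(int))
    for melody in melodies:
        for a, b in zip(melody, melody[1:]):
            counts[a][b] += 1
    return counts

def continue_motif(motif, counts, length=8, seed=0):
    """Extend a motif by repeatedly sampling the next pitch from the counts."""
    rng = random.Random(seed)
    line = list(motif)
    for _ in range(length):
        options = counts.get(line[-1])
        if not options:
            break  # nothing learned for this pitch; stop extending
        pitches, weights = zip(*options.items())
        line.append(rng.choices(pitches, weights=weights)[0])
    return line

if __name__ == "__main__":
    corpus = [["G", "G", "G", "Eb", "F", "F", "F", "D"],
              ["Eb", "F", "G", "Ab", "G", "F", "Eb", "D", "C"]]
    model = train_transitions(corpus)
    print(continue_motif(["G", "G", "G", "Eb"], model, length=6))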

The team tested its work, from the September 24, 2021 essay, Note: A link has been removed,

In November 2019, the team met in person again – this time, in Bonn, at the Beethoven House Museum, where the composer was born and raised.

This meeting was the litmus test for determining whether AI could complete this project. We printed musical scores that had been developed by AI and built off the sketches from Beethoven’s 10th. A pianist performed in a small concert hall in the museum before a group of journalists, music scholars and Beethoven experts.

We challenged the audience to determine where Beethoven’s phrases ended and where the AI extrapolation began. They couldn’t.

A few days later, one of these AI-generated scores was played by a string quartet in a news conference. Only those who intimately knew Beethoven’s sketches for the 10th Symphony could determine when the AI-generated parts came in.

The success of these tests told us we were on the right track. But these were just a couple of minutes of music. There was still much more work to do.

There is a preview of the finished 10th symphony,

Beethoven X: The AI Project: III Scherzo. Allegro – Trio (Official Video) | Beethoven Orchestra Bonn

Modern Recordings / BMG present as a foretaste of the album “Beethoven X – The AI Project” (release: 8.10.) the edit of the 3rd movement “Scherzo. Allegro – Trio” as a classical music video. Listen now: https://lnk.to/BeethovenX-Scherzo

Album pre-order link: https://lnk.to/BeethovenX

The Beethoven Orchestra Bonn performing with Dirk Kaftan and Walter Werzowa a great recording of world-premiere Beethoven pieces. Developed by AI and music scientists as well as composers, Beethoven’s once unfinished 10th symphony now surprises with beautiful Beethoven-like harmonics and dynamics.

For anyone who’d like to hear the October 9, 2021 performance, Sharon Kelly included some details in her August 16, 2021 article for DiscoverMusic,

The world premiere of Beethoven’s 10th Symphony on 9 October 2021 at the Telekom Forum in Bonn, performed by the Beethoven Orchestra Bonn conducted by Dirk Kaftan, will be broadcast live and free of charge on MagentaMusik 360.

Sadly, the time is not listed but MagentaMusik 360 is fairly easy to find online.

You can find out more about Professor Elgammal on his Rutgers University profile page. Elgammal has graced this blog before in an August 16, 2019 posting “AI (artificial intelligence) artist got a show at a New York City art gallery“. He’s mentioned in an excerpt about 20% of the way down the page,

Ahmed Elgammal thinks AI art can be much more than that. A Rutgers University professor of computer science, Elgammal runs an art-and-artificial-intelligence lab, where he and his colleagues develop technologies that try to understand and generate new “art” (the scare quotes are Elgammal’s) with AI—not just credible copies of existing work, like GANs do. “That’s not art, that’s just repainting,” Elgammal says of GAN-made images. “It’s what a bad artist would do.”

Elgammal calls his approach a “creative adversarial network,” or CAN. It swaps a GAN’s discerner—the part that ensures similarity—for one that introduces novelty instead. The system amounts to a theory of how art evolves: through small alterations to a known style that produce a new one. That’s a convenient take, given that any machine-learning technique has to base its work on a specific training set.

Finally, thank you to @winsontang whose tweet led me to this story.

Precision skincare

An inkjet printer for your skin—it’s an idea I’m not sure I’m ready for. Still, I’m not the target market for the product being described in Rachel Kim Raczka’s June 2, 2021 article for Fast Company (Note: Links have been removed),

… I’ve had broken capillaries, patchy spots, and enlarged pores most of my adult life. And after I turned 30, I developed a glorious strip of melasma (a “sun mustache”) across my upper lip. The delicate balance of maintaining my “good” texture—skin that looks like skin—while disguising my “bad” texture is a constant push and pull. Still, I continue to fall victim to “no makeup” makeup, the frustratingly contradictory trend that will never die. A white whale that $599 high-tech beauty printer Opte hopes to fill.

Weirdly enough, “printer” is a fair representation of what Opte is. The size and shape of an electric razor, Opte’s Precision Wand’s tiny computer claims to detect and camouflage hyperpigmentation with a series of gentle swipes. The product deposits extremely small blends of white, yellow, and red pigments to hide discoloration using a blue LED and a hypersensitive camera that scans 200 photos per second. Opte then relies on an algorithm to apply color—housed in replaceable serum cartridges, delivered through 120 thermal inkjet nozzles—only onto contrasting patches of melanin via what CEO Matt Petersen calls “the world’s smallest inkjet printer.” 

Opte is a 15-year, 500,000-R&D-hour project developed under P&G Ventures, officially launched in 2020. While targeting hyperpigmentation was an end goal, the broader mission looked at focusing on “precision skincare.” …

… You start by dropping the included 11-ingredient serum cartridge into the pod; the $129 cartridges and refills come in three shades that the company says cover 98% of skin tones and last 90 days. The handheld device very loudly refills itself and displays instructions on a tiny screen on its handle. …

… While I can’t rely on the Opte to hide a blemish or dark circles—I’ll still need concealer to achieve that level of coverage—I can’t quite describe the “glowiness” using this gadget generates. With more use, I’ve come to retrain my brain to expect Opte to work more like an eraser than a crayon; it’s skincare, not makeup. My skin looks healthier and brighter but still, without a doubt, like my skin. 
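
Here’s my own rough guess, sketched in Python with NumPy, at the kind of detect-and-deposit logic the article describes: compare each camera frame against the surrounding skin tone and mark only the darker-than-baseline patches for pigment. The threshold and the fake frame are invented; as far as I know, P&G hasn’t published the actual algorithm.

import numpy as np

def deposit_mask(frame: np.ndarray, contrast_threshold: float = 0.12) -> np.ndarray:
    """Return a boolean mask of pixels dark enough, relative to the average
    skin tone in the frame, to receive pigment. `frame` holds brightness
    values in [0, 1]; the threshold is an illustrative guess."""
    baseline = frame.mean()  # stand-in for the surrounding skin tone
    return (baseline - frame) > contrast_threshold

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frame = np.clip(rng.normal(0.7, 0.02, (8, 8)), 0, 1)  # fake camera frame
    frame[2:4, 3:5] = 0.45                                # a darker spot
    print(deposit_mask(frame).astype(int))                # 1 = deposit pigment here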

There’s more discussion of how this product works in Raczka’s June 2, 2021 article and you can find the Opte website here. I have no idea if they ship this product outside the US or what that might cost.

Memristors, it’s all about the oxides

I have one research announcement from China and another from the Netherlands, both of which concern memristors and oxides.

China

A May 17, 2021 news item on Nanowerk announces work suggesting that memristors may not need to rely solely on oxides but could instead use light more gainfully,

Scientists are getting better at making neuron-like junctions for computers that mimic the human brain’s random information processing, storage and recall. Fei Zhuge of the Chinese Academy of Sciences and colleagues reviewed the latest developments in the design of these ‘memristors’ for the journal Science and Technology of Advanced Materials …

Computers apply artificial intelligence programs to recall previously learned information and make predictions. These programs are extremely energy- and time-intensive: typically, vast volumes of data must be transferred between separate memory and processing units. To solve this issue, researchers have been developing computer hardware that allows for more random and simultaneous information transfer and storage, much like the human brain.

Electronic circuits in these ‘neuromorphic’ computers include memristors that resemble the junctions between neurons called synapses. Energy flows through a material from one electrode to another, much like a neuron firing a signal across the synapse to the next neuron. Scientists are now finding ways to better tune this intermediate material so the information flow is more stable and reliable.

I had no success locating the original news release but did find this May 17, 2021 news item on eedesignit.com, which provides the remaining portion of the release.

“Oxides are the most widely used materials in memristors,” said Zhuge. “But oxide memristors have unsatisfactory stability and reliability. Oxide-based hybrid structures can effectively improve this.”

Memristors are usually made of an oxide-based material sandwiched between two electrodes. Researchers are getting better results when they combine two or more layers of different oxide-based materials between the electrodes. When an electrical current flows through the network, it induces ions to drift within the layers. The ions’ movements ultimately change the memristor’s resistance, which is necessary to send or stop a signal through the junction.

Memristors can be tuned further by changing the compounds used for electrodes or by adjusting the intermediate oxide-based materials. Zhuge and his team are currently developing optoelectronic neuromorphic computers based on optically-controlled oxide memristors. Compared to electronic memristors, photonic ones are expected to have higher operation speeds and lower energy consumption. They could be used to construct next generation artificial visual systems with high computing efficiency.
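
The ion-drift description maps onto a textbook-style state model: the device keeps an internal state (roughly, how far the ions have drifted), current pushes that state up or down, and the resistance slides between an ON and an OFF value. Here’s a toy sketch in Python; the numbers are arbitrary and this is not a model of the hybrid-oxide or optically controlled devices the review actually covers.

class ToyMemristor:
    """Minimal state-variable memristor: current drives an internal state w
    in [0, 1]; resistance interpolates between r_on and r_off."""

    def __init__(self, r_on=100.0, r_off=16000.0, mobility=0.01):
        self.r_on, self.r_off, self.mobility = r_on, r_off, mobility
        self.w = 0.5  # internal state, e.g. a normalized ion-drift position

    def resistance(self):
        return self.w * self.r_on + (1.0 - self.w) * self.r_off

    def apply_current(self, current, dt):
        """Positive current drifts ions one way (lowering resistance),
        negative current the other; the state saturates at 0 and 1."""
        self.w = min(1.0, max(0.0, self.w + self.mobility * current * dt))
        return self.resistance()

if __name__ == "__main__":
    m = ToyMemristor()
    for _ in range(5):
        print(round(m.apply_current(current=1.0, dt=1.0)))  # resistance falls step by step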

Now for a picture that accompanied the news release,

Fig. The all-optically controlled memristor developed for optoelectronic neuromorphic computing (Image by NIMTE)

Here’s the February 7, 2021 Ningbo Institute of Materials Technology and Engineering (NIMTE) press release featuring this work and a more technical description,

A research group led by Prof. ZHUGE Fei at the Ningbo Institute of Materials Technology and Engineering (NIMTE) of the Chinese Academy of Sciences (CAS) developed an all-optically controlled (AOC) analog memristor, whose memconductance can be reversibly tuned by varying only the wavelength of the controlling light.

As the next generation of artificial intelligence (AI), neuromorphic computing (NC) emulates the neural structure and operation of the human brain at the physical level, and thus can efficiently perform multiple advanced computing tasks such as learning, recognition and cognition.

Memristors are promising candidates for NC thanks to the feasibility of high-density 3D integration and low energy consumption. Among them, the emerging optoelectronic memristors are competitive by virtue of combining the advantages of both photonics and electronics. However, the reversible tuning of memconductance depends highly on the electric excitation, which has severely limited the development and application of optoelectronic NC.

To address this issue, researchers at NIMTE proposed a bilayered oxide AOC memristor, based on the relatively mature semiconductor material InGaZnO and a memconductance tuning mechanism of light-induced electron trapping and detrapping.

The traditional electrical memristors require strong electrical stimuli to tune their memconductance, leading to high power consumption, a large amount of Joule heat, microstructural change triggered by the Joule heat, and even high crosstalk in memristor crossbars.

On the contrary, the developed AOC memristor does not involve microstructure changes, and can operate upon weak light irradiation with light power density of only 20 μW cm-2, which has provided a new approach to overcome the instability of the memristor.

Specifically, the AOC memristor can serve as an excellent synaptic emulator and thus mimic spike-timing-dependent plasticity (STDP) which is an important learning rule in the brain, indicating its potential applications in AOC spiking neural networks for high-efficiency optoelectronic NC.

Moreover, compared to purely optical computing, the optoelectronic computing using our AOC memristor showed higher practical feasibility, on account of the simple structure and fabrication process of the device.

The study may shed light on the in-depth research and practical application of optoelectronic NC, and thus promote the development of the new generation of AI.

This work was supported by the National Natural Science Foundation of China (No. 61674156 and 61874125), the Strategic Priority Research Program of Chinese Academy of Sciences (No. XDB32050204), and the Zhejiang Provincial Natural Science Foundation of China (No. LD19E020001).
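
For anyone unfamiliar with spike-timing-dependent plasticity (STDP), mentioned a few paragraphs up: it is the learning rule by which a synapse strengthens when the presynaptic neuron fires just before the postsynaptic one and weakens when the order is reversed, with the size of the change decaying exponentially with the time gap. Here’s the generic textbook form in Python; the parameter values are standard placeholders, not measurements from the NIMTE device.

import math

def stdp_delta_w(t_pre: float, t_post: float,
                 a_plus: float = 0.01, a_minus: float = 0.012,
                 tau: float = 20.0) -> float:
    """Classic exponential STDP window (spike times in milliseconds).

    If the presynaptic spike precedes the postsynaptic one (dt > 0) the
    synaptic weight is potentiated; if it follows (dt < 0) it is depressed."""
    dt = t_post - t_pre
    if dt > 0:
        return a_plus * math.exp(-dt / tau)   # potentiation
    if dt < 0:
        return -a_minus * math.exp(dt / tau)  # depression
    return 0.0

if __name__ == "__main__":
    print(stdp_delta_w(t_pre=10.0, t_post=15.0))  # pre before post: weight change > 0
    print(stdp_delta_w(t_pre=15.0, t_post=10.0))  # post before pre: weight change < 0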

Here’s a link to and a citation for the paper,

Hybrid oxide brain-inspired neuromorphic devices for hardware implementation of artificial intelligence by Jingrui Wang, Xia Zhuge & Fei Zhuge. Science and Technology of Advanced Materials Volume 22, 2021 – Issue 1 Pages 326-344 DOI: https://doi.org/10.1080/14686996.2021.1911277 Published online: 14 May 2021

This paper appears to be open access.

Netherlands

In this case, a May 18, 2021 news item on Nanowerk marries oxides to spintronics,

Classic computers use binary values (0/1) to perform. By contrast, our brain cells can use more values to operate, making them more energy-efficient than computers. This is why scientists are interested in neuromorphic (brain-like) computing.

Physicists from the University of Groningen (the Netherlands) have used a complex oxide to create elements comparable to the neurons and synapses in the brain using spins, a magnetic property of electrons.

The press release, which follows, was accompanied by this image illustrating the work,

Caption: Schematic of the proposed device structure for neuromorphic spintronic memristors. The write path is between the terminals through the top layer (black dotted line), the read path goes through the device stack (red dotted line). The right side of the figure indicates how the choice of substrate dictates whether the device will show deterministic or probabilistic behaviour. Credit: Banerjee group, University of Groningen

A May 18, 2021 University of Groningen press release (also on EurekAlert), which originated the news item, adds more ‘spin’ to the story,

Although computers can do straightforward calculations much faster than humans, our brains outperform silicon machines in tasks like object recognition. Furthermore, our brain uses less energy than computers. Part of this can be explained by the way our brain operates: whereas a computer uses a binary system (with values 0 or 1), brain cells can provide more analogue signals with a range of values.

Thin films

The operation of our brains can be simulated in computers, but the basic architecture still relies on a binary system. That is why scientists look for ways to expand this, creating hardware that is more brain-like, but will also interface with normal computers. ‘One idea is to create magnetic bits that can have intermediate states’, says Tamalika Banerjee, Professor of Spintronics of Functional Materials at the Zernike Institute for Advanced Materials, University of Groningen. She works on spintronics, which uses a magnetic property of electrons called ‘spin’ to transport, manipulate and store information.

In this study, her PhD student Anouk Goossens, first author of the paper, created thin films of a ferromagnetic metal (strontium-ruthenate oxide, SRO) grown on a substrate of strontium titanate oxide. The resulting thin film contained magnetic domains that were perpendicular to the plane of the film. ‘These can be switched more efficiently than in-plane magnetic domains’, explains Goossens. By adapting the growth conditions, it is possible to control the crystal orientation in the SRO. Previously, out-of-plane magnetic domains have been made using other techniques, but these typically require complex layer structures.

Magnetic anisotropy

The magnetic domains can be switched using a current through a platinum electrode on top of the SRO. Goossens: ‘When the magnetic domains are oriented perfectly perpendicular to the film, this switching is deterministic: the entire domain will switch.’ However, when the magnetic domains are slightly tilted, the response is probabilistic: not all the domains are the same, and intermediate values occur when only part of the crystals in the domain have switched.

By choosing variants of the substrate on which the SRO is grown, the scientists can control its magnetic anisotropy. This allows them to produce two different spintronic devices. ‘This magnetic anisotropy is exactly what we wanted’, says Goossens. ‘Probabilistic switching compares to how neurons function, while the deterministic switching is more like a synapse.’

The scientists expect that in the future, brain-like computer hardware can be created by combining these different domains in a spintronic device that can be connected to standard silicon-based circuits. Furthermore, probabilistic switching would also allow for stochastic computing, a promising technology which represents continuous values by streams of random bits. Banerjee: ‘We have found a way to control intermediate states, not just for memory but also for computing.’
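
A toy simulation makes the deterministic/probabilistic distinction a little more concrete: with perfectly perpendicular domains everything flips once the drive current passes a threshold, while with tilted domains each domain flips with some probability, so the device lands in intermediate states. The Python sketch below is my own illustration, not the Groningen group’s model; the threshold and probability curve are invented.

import random

def switched_fraction(n_domains: int, current: float, tilted: bool,
                      threshold: float = 1.0, seed: int = 0) -> float:
    """Return the fraction of magnetic domains that switch.

    Perpendicular (not tilted) devices behave deterministically: every domain
    flips once the current passes the threshold. Tilted devices behave
    probabilistically: each domain flips with a probability that grows with
    the drive current, yielding intermediate states."""
    rng = random.Random(seed)
    if not tilted:
        return 1.0 if current >= threshold else 0.0
    p_flip = min(1.0, max(0.0, current / (2.0 * threshold)))  # invented response curve
    flips = sum(rng.random() < p_flip for _ in range(n_domains))
    return flips / n_domains

if __name__ == "__main__":
    print(switched_fraction(1000, current=1.2, tilted=False))  # deterministic: 1.0
    print(switched_fraction(1000, current=1.2, tilted=True))   # probabilistic: an intermediate value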

Here’s a link to and a citation for the paper,

Anisotropy and Current Control of Magnetization in SrRuO3/SrTiO3 Heterostructures for Spin-Memristors by A.S. Goossens, M.A.T. Leiviskä and T. Banerjee. Frontiers in Nanotechnology DOI: https://doi.org/10.3389/fnano.2021.680468 Published: 18 May 2021

This appears to be open access.

Speed up your reading with an interactive typeface

A May 12, 2021 news item on ScienceDaily brings news of a technology that makes reading easier,

AdaptiFont has recently been presented at CHI, the leading Conference on Human Factors in Computing.

Language is without doubt the most pervasive medium for exchanging knowledge between humans. However, spoken language or abstract text need to be made visible in order to be read, be it in print or on screen.

How does the way a text looks affect its readability, that is, how it is being read, processed, and understood? A team at TU Darmstadt’s Centre for Cognitive Science investigated this question at the intersection of perceptual science, cognitive science, and linguistics. Electronic text is even more complex. Texts are read on different devices under different external conditions. And although any digital text is formatted initially, users might resize it on screen, change brightness and contrast of the display, or even select a different font when reading text on the web.

A May 12, 2021 Technische Universität Darmstadt (Technical University of Darmstadt; Germany) press release (also on EurekAlert) provides more detail,

The team of researchers from TU Darmstadt has now developed a system that leaves font design to the user’s visual system. First, they needed to come up with a way of synthesizing new fonts. This was achieved by using a machine learning algorithm, which learned the structure of fonts by analysing 25 popular and classic typefaces. The system is capable of creating an infinite number of new fonts that are any intermediate form of others – for example, visually halfway between Helvetica and Times New Roman.

Since some fonts may make it more difficult to read the text, they may slow the reader down. Other fonts may help the user read more fluently. Measuring reading speed, a second algorithm can now generate more typefaces that increase the reading speed.

In a laboratory experiment, in which users read texts over one hour, the research team showed that their algorithm indeed generates new fonts that increase an individual user’s reading speed. Interestingly, all readers had their own personalized font that made reading especially easy for them. However, this individual favorite typeface does not necessarily fit all situations. “AdaptiFont therefore can be understood as a system which creates fonts for an individual dynamically and continuously while reading, which maximizes the reading speed at the time of use. This may depend on the content of the text, whether you are tired, or perhaps are using different display devices,” explains Professor Constantin A. Rothkopf, Centre for Cognitive Science and head of the Institute of Psychology of Information Processing at TU Darmstadt.

The AdaptiFont system was recently presented to the scientific community at the Conference on Human Factors in Computing Systems (CHI). A patent application has been filed. Future possible applications are with all electronic devices on which text is read.
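
The core loop, generate a typeface from a point in a latent ‘font space’, measure how fast the user reads with it, and let a Bayesian optimizer pick the next candidate, can be sketched in a few lines of Python. The version below leans on the scikit-optimize library and a made-up ‘measure_reading_speed’ placeholder; it is a schematic of the idea, not the TU Darmstadt implementation.

from skopt import gp_minimize  # Bayesian optimization over a latent font space

LATENT_DIM = 3  # assumed size of the generative font model's latent space

def render_font(latent):
    """Placeholder: a generative font model would synthesize a typeface here."""
    return {"latent": list(latent)}

def measure_reading_speed(font) -> float:
    """Placeholder for measuring reading speed (words per minute) while the
    user reads text set in `font`. Faked here as a smooth function with a
    'preferred' region of font space."""
    return 250.0 - 100.0 * sum((x - 0.3) ** 2 for x in font["latent"])

def objective(latent):
    # gp_minimize minimizes, so return the negative reading speed.
    return -measure_reading_speed(render_font(latent))

if __name__ == "__main__":
    result = gp_minimize(objective,
                         dimensions=[(-1.0, 1.0)] * LATENT_DIM,
                         n_calls=20, random_state=0)
    print("best latent coordinates:", [round(x, 2) for x in result.x])
    print("estimated reading speed:", round(-result.fun, 1), "wpm")

In the real system the ‘measurement’ comes from the user’s actual reading, taken continuously, which is what lets the font adapt to fatigue, content and display conditions.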

There’s a 5-minute video featuring the work, with narration by a researcher who speaks very quickly,

Here’s a link to and a citation for the paper,

AdaptiFont: Increasing Individuals’ Reading Speed with a Generative Font Model and Bayesian Optimization by Florian Kadner, Yannik Keller, Constantin Rothkopf. CHI ’21: Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems May 2021 Article No.: 585 Pages 1-11 DOI: https://doi.org/10.1145/3411764.3445140 Published: 06 May 2021

This paper is open access.

Artificial intelligence is not mentioned but it’s hard to believe that adaptive learning by the software is anything other than a form of AI.

Nanosensors use AI to explore the biomolecular world

EPFL scientists have developed AI-powered nanosensors that let researchers track various kinds of biological molecules without disturbing them. Courtesy: École polytechnique fédérale de Lausanne (EPFL)

If you look at the big orange dot (representing the nanosensors?), you’ll see those purplish/fuchsia objects resemble musical notes (biological molecules?). I think that brainlike object to the left, in light blue, is the artificial intelligence (AI) component. (If anyone wants to correct my guesses or identify the bits I can’t, please feel free to add to the Comments for this blog.)

Getting back to my topic, keep the ‘musical notes’ in mind as you read about some of the latest research from l’École polytechnique fédérale de Lausanne (EPFL) in an April 7, 2021 news item on Nanowerk,

The tiny world of biomolecules is rich in fascinating interactions between a plethora of different agents such as intricate nanomachines (proteins), shape-shifting vessels (lipid complexes), chains of vital information (DNA) and energy fuel (carbohydrates). Yet the ways in which biomolecules meet and interact to define the symphony of life is exceedingly complex.

Scientists at the Bionanophotonic Systems Laboratory in EPFL’s School of Engineering have now developed a new biosensor that can be used to observe all major biomolecule classes of the nanoworld without disturbing them. Their innovative technique uses nanotechnology, metasurfaces, infrared light and artificial intelligence.

To each molecule its own melody

In this nano-sized symphony, perfect orchestration makes physiological wonders such as vision and taste possible, while slight dissonances can amplify into horrendous cacophonies leading to pathologies such as cancer and neurodegeneration.

An April 7, 2021 EPFL press release, which originated the news item, provides more detail,

“Tuning into this tiny world and being able to differentiate between proteins, lipids, nucleic acids and carbohydrates without disturbing their interactions is of fundamental importance for understanding life processes and disease mechanisms,” says Hatice Altug, the head of the Bionanophotonic Systems Laboratory. 

Light, and more specifically infrared light, is at the core of the biosensor developed by Altug’s team. Humans cannot see infrared light, which is beyond the visible light spectrum that ranges from blue to red. However, we can feel it in the form of heat in our bodies, as our molecules vibrate under the infrared light excitation.

Molecules consist of atoms bonded to each other and – depending on the mass of the atoms and the arrangement and stiffness of their bonds – vibrate at specific frequencies. This is similar to the strings on a musical instrument that vibrate at specific frequencies depending on their length. These resonant frequencies are molecule-specific, and they mostly occur in the infrared frequency range of the electromagnetic spectrum. 

“If you imagine audio frequencies instead of infrared frequencies, it’s as if each molecule has its own characteristic melody,” says Aurélian John-Herpin, a doctoral assistant at Altug’s lab and the first author of the publication. “However, tuning into these melodies is very challenging because without amplification, they are mere whispers in a sea of sounds. To make matters worse, their melodies can present very similar motifs making it hard to tell them apart.” 
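
To put a rough number on the string analogy from a couple of paragraphs up: a simple diatomic bond modelled as a harmonic oscillator vibrates at a frequency of (1/(2π))·√(k/μ), where k is the bond stiffness and μ is the reduced mass of the two atoms. Using approximate textbook values for carbon monoxide, the short Python calculation below lands near the well-known CO stretch around 2100 cm⁻¹, squarely in the infrared. (The constants are rounded, illustrative values.)

import math

# Approximate textbook values for the carbon monoxide (C-O) bond.
k = 1860.0          # bond force constant, N/m (rounded literature value)
amu = 1.6605e-27    # kilograms per atomic mass unit
mu = (12.0 * 16.0) / (12.0 + 16.0) * amu  # reduced mass of carbon-12 and oxygen-16
c = 2.998e10        # speed of light in cm/s

freq_hz = math.sqrt(k / mu) / (2.0 * math.pi)
print(f"~{freq_hz / 1e12:.0f} THz, or ~{freq_hz / c:.0f} cm^-1 (infrared)")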

Metasurfaces and artificial intelligence

The scientists solved these two issues using metasurfaces and AI. Metasurfaces are man-made materials with outstanding light manipulation capabilities at the nano scale, thereby enabling functions beyond what is otherwise seen in nature. Here, their precisely engineered meta-atoms made out of gold nanorods act like amplifiers of light-matter interactions by tapping into the plasmonic excitations resulting from the collective oscillations of free electrons in metals. “In our analogy, these enhanced interactions make the whispered molecule melodies more audible,” says John-Herpin.

AI is a powerful tool that can be fed with more data than humans can handle in the same amount of time and that can quickly develop the ability to recognize complex patterns from the data. John-Herpin explains, “AI can be imagined as a complete beginner musician who listens to the different amplified melodies and develops a perfect ear after just a few minutes and can tell the melodies apart, even when they are played together – like in an orchestra featuring many instruments simultaneously.” 

The first biosensor of its kind

When the scientists’ infrared metasurfaces are augmented with AI, the new sensor can be used to analyze biological assays featuring multiple analytes simultaneously from the major biomolecule classes and resolving their dynamic interactions. 

“We looked in particular at lipid vesicle-based nanoparticles and monitored their breakage through the insertion of a toxin peptide and the subsequent release of vesicle cargos of nucleotides and carbohydrates, as well as the formation of supported lipid bilayer patches on the metasurface,” says Altug.

This pioneering AI-powered, metasurface-based biosensor will open up exciting perspectives for studying and unraveling inherently complex biological processes, such as intercellular communication via exosomes and the interaction of nucleic acids and carbohydrates with proteins in gene regulation and neurodegeneration. 

“We imagine that our technology will have applications in the fields of biology, bioanalytics and pharmacology – from fundamental research and disease diagnostics to drug development,” says Altug. 
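
One crude way to picture the AI side: treat each measurement as an infrared spectrum (absorbance at many wavenumbers) and the task as deciding which biomolecule classes are present, i.e., a multi-label classification problem. The Python sketch below, which trains a scikit-learn classifier on synthetic spectra with invented band positions, only illustrates that framing; the EPFL team’s deep-learning pipeline and data are far richer.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multioutput import MultiOutputClassifier

rng = np.random.default_rng(0)
N_WAVENUMBERS = 200
CLASSES = ["protein", "lipid", "nucleic_acid", "carbohydrate"]
BANDS = {"protein": 60, "lipid": 110, "nucleic_acid": 150, "carbohydrate": 180}  # invented positions

def fake_spectrum(labels):
    """Synthesize a toy absorbance spectrum: each class that is present adds
    a Gaussian bump at its own (invented) characteristic band, plus noise."""
    x = np.arange(N_WAVENUMBERS)
    spectrum = rng.normal(0.0, 0.02, N_WAVENUMBERS)
    for cls, present in zip(CLASSES, labels):
        if present:
            spectrum += np.exp(-0.5 * ((x - BANDS[cls]) / 5.0) ** 2)
    return spectrum

# Build a small synthetic training set of spectra with known class mixtures.
Y = rng.integers(0, 2, size=(300, len(CLASSES)))
X = np.array([fake_spectrum(y) for y in Y])

clf = MultiOutputClassifier(LogisticRegression(max_iter=1000)).fit(X, Y)
test_mix = [1, 0, 1, 0]  # protein and nucleic acid present
prediction = clf.predict([fake_spectrum(test_mix)])[0]
print(dict(zip(CLASSES, prediction)))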

Here’s a link to and a citation for the paper,

Infrared Metasurface Augmented by Deep Learning for Monitoring Dynamics between All Major Classes of Biomolecules by Aurelian John‐Herpin, Deepthy Kavungal, Lea von Mücke, and Hatice Altug. Advanced Materials Volume 33, Issue 14 April 8, 2021 2006054 DOI: https://doi.org/10.1002/adma.202006054 First published: 22 February 2021

This paper is open access.