AI (artificial intelligence) artist got a show at a New York City art gallery

AI artists first hit my radar in August 2018 when Christie’s Auction House advertised an art auction of a ‘painting’ by an algorithm (artificial intelligence). There’s more in my August 31, 2018 posting but, briefly, a French art collective, Obvious, submitted a painting, “Portrait of Edmond de Belamy,” created by an artificial intelligence agent, to be sold for an estimated $7,000 – $10,000. That estimate wasn’t even close. According to Ian Bogost’s March 6, 2019 article for The Atlantic, the painting sold for $432,500 in October 2018.

It has also, Bogost notes in his article, occasioned an art show (Note: Links have been removed),

… part of “Faceless Portraits Transcending Time,” an exhibition of prints recently shown [February 13 – March 5, 2019] at the HG Contemporary gallery in Chelsea, the epicenter of New York’s contemporary-art world. All of them were created by a computer.

The catalog calls the show a “collaboration between an artificial intelligence named AICAN and its creator, Dr. Ahmed Elgammal,” a move meant to spotlight, and anthropomorphize, the machine-learning algorithm that did most of the work. According to HG Contemporary, it’s the first solo gallery exhibit devoted to an AI artist.

If they hadn’t found each other in the New York art scene, the players involved could have met on a Spike Jonze film set: a computer scientist commanding five-figure print sales from software that generates inkjet-printed images; a former hotel-chain financial analyst turned Chelsea techno-gallerist with apparent ties to fine-arts nobility; a venture capitalist with two doctoral degrees in biomedical informatics; and an art consultant who put the whole thing together, A-Team–style, after a chance encounter at a blockchain conference. Together, they hope to reinvent visual art, or at least to cash in on machine-learning hype along the way.

The show in New York City, “Faceless Portraits …,” exhibited work by an artificially intelligent artist-agent (I’m creating a new term to suit my purposes) that’s different from the one used by Obvious to create “Portrait of Edmond de Belamy.” As noted earlier, that painting sold for a lot of money (Note: Links have been removed),

Bystanders in and out of the art world were shocked. The print had never been shown in galleries or exhibitions before coming to market at auction, a channel usually reserved for established work. The winning bid was made anonymously by telephone, raising some eyebrows; art auctions can invite price manipulation. It was created by a computer program that generates new images based on patterns in a body of existing work, whose features the AI “learns.” What’s more, the artists who trained and generated the work, the French collective Obvious, hadn’t even written the algorithm or the training set. They just downloaded them, made some tweaks, and sent the results to market.

“We are the people who decided to do this,” the Obvious member Pierre Fautrel said in response to the criticism, “who decided to print it on canvas, sign it as a mathematical formula, put it in a gold frame.” A century after Marcel Duchamp made a urinal into art [emphasis mine] by putting it in a gallery, not much has changed, with or without computers. As Andy Warhol famously said, “Art is what you can get away with.”

A bit of a segue here: there is a controversy as to whether or not that ‘urinal art’, also known as ‘The Fountain’, should be attributed to Duchamp, as noted in my January 23, 2019 posting titled ‘Baroness Elsa von Freytag-Loringhoven, Marcel Duchamp, and the Fountain’.

Getting back to the main action, Bogost goes on to describe the technologies underlying the two different AI artist-agents; toy code sketches of both follow the excerpt (Note: Links have been removed),

… Using a computer is hardly enough anymore; today’s machines offer all kinds of ways to generate images that can be output, framed, displayed, and sold—from digital photography to artificial intelligence. Recently, the fashionable choice has become generative adversarial networks, or GANs, the technology that created Portrait of Edmond de Belamy. Like other machine-learning methods, GANs use a sample set—in this case, art, or at least images of it—to deduce patterns, and then they use that knowledge to create new pieces. A typical Renaissance portrait, for example, might be composed as a bust or three-quarter view of a subject. The computer may have no idea what a bust is, but if it sees enough of them, it might learn the pattern and try to replicate it in an image.

GANs use two neural nets (a way of processing information modeled after the human brain) to produce images: a “generator” and a “discerner.” The generator produces new outputs—images, in the case of visual art—and the discerner tests them against the training set to make sure they comply with whatever patterns the computer has gleaned from that data. The quality or usefulness of the results depends largely on having a well-trained system, which is difficult.

That’s why folks in the know were upset by the Edmond de Belamy auction. The image was created by an algorithm the artists didn’t write, trained on an “Old Masters” image set they also didn’t create. The art world is no stranger to trend and bluster driving attention, but the brave new world of AI painting appeared to be just more found art, the machine-learning equivalent of a urinal on a plinth.

Ahmed Elgammal thinks AI art can be much more than that. A Rutgers University professor of computer science, Elgammal runs an art-and-artificial-intelligence lab, where he and his colleagues develop technologies that try to understand and generate new “art” (the scare quotes are Elgammal’s) with AI—not just credible copies of existing work, like GANs do. “That’s not art, that’s just repainting,” Elgammal says of GAN-made images. “It’s what a bad artist would do.”

Elgammal calls his approach a “creative adversarial network,” or CAN. It swaps a GAN’s discerner—the part that ensures similarity—for one that introduces novelty instead. The system amounts to a theory of how art evolves: through small alterations to a known style that produce a new one. That’s a convenient take, given that any machine-learning technique has to base its work on a specific training set.

The results are striking and strange, although calling them a new artistic style might be a stretch. They’re more like credible takes on visual abstraction. The images in the show, which were produced based on training sets of Renaissance portraits and skulls, are more figurative, and fairly disturbing. Their gallery placards name them dukes, earls, queens, and the like, although they depict no actual people—instead, human-like figures, their features smeared and contorted yet still legible as portraiture. Faceless Portrait of a Merchant, for example, depicts a torso that might also read as the front legs and rear haunches of a hound. Atop it, a fleshy orb comes across as a head. The whole scene is rippled by the machine-learning algorithm, in the way of so many computer-generated artworks.

Faceless Portrait of a Merchant, one of the AI portraits produced by Ahmed Elgammal and AICAN. (Artrendex Inc.) [downloaded from https://www.theatlantic.com/technology/archive/2019/03/ai-created-art-invades-chelsea-gallery-scene/584134/]
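For readers who want to see the machinery behind the excerpt, here’s a minimal sketch of a GAN’s generator/discriminator (the ‘discerner’) loop, written in Python with PyTorch. To be clear, this is a toy illustration of the technique, not Obvious’s or anyone else’s actual code; the network sizes and the stand-in ‘training set’ are arbitrary values I’ve chosen.

```python
# Toy GAN loop illustrating the generator/"discerner" idea described above.
# Not the code behind any artwork in this posting; all sizes are arbitrary.
import torch
import torch.nn as nn

latent_dim, image_dim = 16, 64  # toy dimensions

generator = nn.Sequential(          # produces new "images" from noise
    nn.Linear(latent_dim, 128), nn.ReLU(),
    nn.Linear(128, image_dim), nn.Tanh())

discriminator = nn.Sequential(      # the "discerner": real or generated?
    nn.Linear(image_dim, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1), nn.Sigmoid())

bce = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

real_images = torch.rand(256, image_dim) * 2 - 1  # stand-in training set

for step in range(1000):
    batch = real_images[torch.randint(0, 256, (32,))]
    fake = generator(torch.randn(32, latent_dim))

    # Discriminator learns to separate training-set images from fakes.
    d_opt.zero_grad()
    d_loss = (bce(discriminator(batch), torch.ones(32, 1)) +
              bce(discriminator(fake.detach()), torch.zeros(32, 1)))
    d_loss.backward()
    d_opt.step()

    # Generator learns to fool the discriminator.
    g_opt.zero_grad()
    g_loss = bce(discriminator(fake), torch.ones(32, 1))
    g_loss.backward()
    g_opt.step()
```

As for Elgammal’s ‘creative adversarial network’, his published CAN paper (2017) describes the novelty pressure as an extra loss term: the discriminator also classifies a work’s style, and the generator is rewarded for images that register as art but can’t be pinned to any one known style. Here’s my own hedged rendering of that generator objective, again a sketch rather than AICAN’s code:

```python
# Sketch of a CAN-style generator objective: look like art, but be
# ambiguous across known style classes. My toy rendering of the published
# description, not AICAN's implementation.
import torch
import torch.nn.functional as F

def can_generator_loss(art_logit, style_logits):
    """art_logit: discriminator's art/not-art logit for generated images.
    style_logits: its logits over the known style classes."""
    # Term 1: standard GAN pressure -- be judged "art."
    art_loss = F.binary_cross_entropy_with_logits(
        art_logit, torch.ones_like(art_logit))
    # Term 2: style ambiguity -- cross-entropy against a uniform
    # distribution over styles, so no single known style fits well.
    ambiguity_loss = -F.log_softmax(style_logits, dim=-1).mean()
    return art_loss + ambiguity_loss

# Example with pretend logits for a batch of 8 images and 10 styles:
print(can_generator_loss(torch.randn(8, 1), torch.randn(8, 10)))
```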

Bogost consults an expert on portraiture for a discussion about the particularities of portraiture and the shortcomings one might expect of an AI artist-agent (Note: A link has been removed),

“You can’t really pick a form of painting that’s more charged with cultural meaning than portraiture,” John Sharp, an art historian trained in 15th-century Italian painting and the director of the M.F.A. program in design and technology at Parsons School of Design, told me. The portrait isn’t just a style, it’s also a host for symbolism. “For example, men might be shown with an open book to show how they are in dialogue with that material; or a writing implement, to suggest authority; or a weapon, to evince power.” Take Portrait of a Youth Holding an Arrow, an early-16th-century Boltraffio portrait that helped train the AICAN database for the show. The painting depicts a young man, believed to be the Bolognese poet Girolamo Casio, holding an arrow at an angle in his fingers and across his chest. It doubles as both weapon and quill, a potent symbol of poetry and aristocracy alike. Along with the arrow, the laurels in Casio’s hair are emblems of Apollo, the god of both poetry and archery.

A neural net couldn’t infer anything about the particular symbolic trappings of the Renaissance or antiquity—unless it was taught to, and that wouldn’t happen just by showing it lots of portraits. For Sharp and other critics of computer-generated art, the result betrays an unforgivable ignorance about the supposed influence of the source material.

But for the purposes of the show, the appeal to the Renaissance might be mostly a foil, a way to yoke a hip, new technology to traditional painting in order to imbue it with the gravity of history: not only a Chelsea gallery show, but also an homage to the portraiture found at the Met. To reinforce a connection to the cradle of European art, some of the images are presented in elaborate frames, a decision the gallerist, Philippe Hoerle-Guggenheim (yes, that Guggenheim; he says the relation is “distant”) [the Guggenheim is strongly associated with the visual arts by way of the two Guggenheim museums, one in New York City and the other in Bilbao, Spain], told me he insisted upon. Meanwhile, the technical method makes its way onto the gallery placards in an official-sounding way—“Creative Adversarial Network print.” But both sets of inspirations, machine-learning and Renaissance portraiture, get limited billing and zero explanation at the show. That was deliberate, Hoerle-Guggenheim said. He’s betting that the simple existence of a visually arresting AI painting will be enough to draw interest—and buyers. It would turn out to be a good bet.

This is a fascinating article and I have one last excerpt, which poses this question: is an AI artist-agent a collaborator or a medium? There’s also speculation about how AI artist-agents might impact the business of art (Note: Links have been removed),

… it’s odd to list AICAN as a collaborator—painters credit pigment as a medium, not as a partner. Even the most committed digital artists don’t present the tools of their own inventions that way; when they do, it’s only after years, or even decades, of ongoing use and refinement.

But Elgammal insists that the move is justified because the machine produces unexpected results. “A camera is a tool—a mechanical device—but it’s not creative,” he said. “Using a tool is an unfair term for AICAN. It’s the first time in history that a tool has had some kind of creativity, that it can surprise you.” Casey Reas, a digital artist who co-designed the popular visual-arts-oriented coding platform Processing, which he uses to create some of his fine art, isn’t convinced. “The artist should claim responsibility over the work rather than to cede that agency to the tool or the system they create,” he told me.

Elgammal’s financial interest in AICAN might explain his insistence on foregrounding its role. Unlike a specialized print-making technique or even the Processing coding environment, AICAN isn’t just a device that Elgammal created. It’s also a commercial enterprise.

Elgammal has already spun off a company, Artrendex, that provides “artificial-intelligence innovations for the art market.” One of them offers provenance authentication for artworks; another can suggest works a viewer or collector might appreciate based on an existing collection; another, a system for cataloging images by visual properties and not just by metadata, has been licensed by the Barnes Foundation to drive its collection-browsing website.

The company’s plans are more ambitious than recommendations and fancy online catalogs. When presenting on a panel about the uses of blockchain for managing art sales and provenance, Elgammal caught the attention of Jessica Davidson, an art consultant who advises artists and galleries in building collections and exhibits. Davidson had been looking for business-development partnerships, and she became intrigued by AICAN as a marketable product. “I was interested in how we can harness it in a compelling way,” she says.

The art market is just that: a market. Some of the most renowned names in art today, from Damien Hirst to Banksy, trade in the trade of art as much as—and perhaps even more than—in the production of images, objects, and aesthetics. No artist today can avoid entering that fray, Elgammal included. “Is he an artist?” Hoerle-Guggenheim asked himself of the computer scientist. “Now that he’s in this context, he must be.” But is that enough? In Sharp’s estimation, “Faceless Portraits Transcending Time” is a tech demo more than a deliberate oeuvre, even compared to the machine-learning-driven work of his design-and-technology M.F.A. students, who self-identify as artists first.

Judged as Banksy or Hirst might be, Elgammal’s most art-worthy work might be the Artrendex start-up itself, not the pigment-print portraits that its technology has output. Elgammal doesn’t treat his commercial venture like a secret, but he also doesn’t surface it as a beneficiary of his supposedly earnest solo gallery show. He’s argued that AI-made images constitute a kind of conceptual art, but conceptualists tend to privilege process over product or to make the process as visible as the product.

Hoerle-Guggenheim worked as a financial analyst [emphasis mine] for Hyatt before getting into the art business via some kind of consulting deal (he responded cryptically when I pressed him for details). …

If you have the time, I recommend reading Bogost’s March 6, 2019 article for The Atlantic in its entirety; these excerpts don’t do it justice.

Portraiture: what does it mean these days?

After reading the article I have a few questions. What exactly do Bogost and the arty types in the article mean by the word ‘portrait’? “Portrait of Edmond de Belamy” is an image of someone who doesn’t exist and never has, and the exhibit “Faceless Portraits Transcending Time” features images that don’t bear much or, in some cases, any resemblance to human beings. Maybe this is considered a dull question by people in the know but I’m an outsider and I found the paradox of portraits of nonexistent people, or nonpeople, kind of interesting.

BTW, I double-checked my assumption about portraits and found this definition in the Portrait Wikipedia entry (Note: Links have been removed),

A portrait is a painting, photograph, sculpture, or other artistic representation of a person [emphasis mine], in which the face and its expression is predominant. The intent is to display the likeness, personality, and even the mood of the person. For this reason, in photography a portrait is generally not a snapshot, but a composed image of a person in a still position. A portrait often shows a person looking directly at the painter or photographer, in order to most successfully engage the subject with the viewer.

So, portraits that aren’t portraits give rise to some philosophical questions but Bogost either didn’t want to jump into that rabbit hole (segue into yet another topic) or, as I hinted earlier, may have assumed his audience had previous experience of those kinds of discussions.

Vancouver (Canada) and a ‘portraiture’ exhibit at the Rennie Museum

By one of life’s coincidences, Vancouver’s Rennie Museum had an exhibit (February 16 – June 15, 2019) that illuminates questions about art collecting and portraiture. From a February 7, 2019 Rennie Museum news release,

[downloaded from https://renniemuseum.org/press-release-spring-2019-collected-works/] Courtesy: Rennie Museum

February 7, 2019

Press Release | Spring 2019: Collected Works
By rennie museum

rennie museum is pleased to present Spring 2019: Collected Works, a group exhibition encompassing the mediums of photography, painting and film. A portraiture of the collecting spirit [emphasis mine], the works exhibited invite exploration of what collected objects, and both the considered and unintentional ways they are displayed, inform us. Featuring the works of four artists—Andrew Grassie, William E. Jones, Louise Lawler and Catherine Opie—the exhibition runs from February 16 to June 15, 2019.

Four exquisite paintings by Scottish painter Andrew Grassie detailing the home and private storage space of a major art collector provide a peek at how the passionately devoted integrates and accommodates the physical embodiments of such commitment into daily life. Grassie’s carefully constructed, hyper-realistic images also pose the question, “What happens to art once it’s sold?” In the transition from pristine gallery setting to idiosyncratic private space, how does the new context infuse our reading of the art and how does the art shift our perception of the individual?

Furthering the inquiry into the symbiotic exchange between possessor and possession, a selection of images by American photographer Louise Lawler depicting art installed in various private and public settings question how the bilateral relationship permeates our interpretation when the collector and the collected are no longer immediately connected. What does de-acquisitioning an object inform us and how does provenance affect our consideration of the art?

The question of legacy became an unexpected facet of 700 Nimes Road (2010-2011), American photographer Catherine Opie’s portrait of legendary actress Elizabeth Taylor. Opie did not directly photograph Taylor for any of the fifty images in the expansive portfolio. Instead, she focused on Taylor’s home and the objects within, inviting viewers to see—then see beyond—the façade of fame and consider how both treasures and trinkets act as vignettes to the stories of a life. Glamorous images of jewels and trophies juxtapose with mundane shots of a printer and the remote-control user manual. Groupings of major artworks on the wall are as illuminating of the home’s mistress as clusters of personal photos. Taylor passed away part way through Opie’s project. The subsequent photos include Taylor’s mementos heading off to auction, raising the question, “Once the collections that help to define someone are disbursed, will our image of that person lose focus?”

In a similar fashion, the twenty-two photographs in Villa Iolas (1982/2017), by American artist and filmmaker William E. Jones, depict the Athens home of iconic art dealer and collector Alexander Iolas. Taken in 1982 by Jones during his first travels abroad, the photographs of art, furniture and antiquities tell a story of privilege that contrast sharply with the images Jones captures on a return visit in 2016. Nearly three decades after Iolas’s 1989 death, his home sits in dilapidation, looted and vandalized. Iolas played an extraordinary role in the evolution of modern art, building the careers of Max Ernst, Yves Klein and Giorgio de Chirico. He gave Andy Warhol his first solo exhibition and was a key advisor to famed collectors John and Dominique de Menil. Yet in the years since his death, his intention of turning his home into a modern art museum as a gift to Greece, along with his reputation, crumbled into ruins. The photographs taken by Jones during his visits in two different eras are incorporated into the film Fall into Ruin (2017), along with shots of contemporary Athens and antiquities on display at the National Archaeological Museum.

“I ask a lot of questions about how portraiture functions… what is there to describe the person or time we live in or a certain set of politics…”
 – Catherine Opie, The Guardian, Feb 9, 2016

We tend to think of the act of collecting as a formal activity yet it can happen casually on a daily basis, often in trivial ways. While we readily acknowledge a collector consciously assembling with deliberate thought, we give lesser consideration to the arbitrary accumulations that each of us accrue. Be it master artworks, incidental baubles or random curios, the objects we acquire and surround ourselves with tell stories of who we are.

Andrew Grassie (Scotland, b. 1966) is a painter known for his small scale, hyper-realist works. He has been the subject of solo exhibitions at the Tate Britain; Talbot Rice Gallery, Edinburgh; institut supérieur des arts de Toulouse; and rennie museum, Vancouver, Canada. He lives and works in London, England.

William E. Jones (USA, b. 1962) is an artist, experimental film-essayist and writer. Jones’s work has been the subject of retrospectives at Tate Modern, London; Anthology Film Archives, New York; Austrian Film Museum, Vienna; and, Oberhausen Short Film Festival. He is a recipient of the John Simon Guggenheim Memorial Fellowship and the Creative Capital/Andy Warhol Foundation Arts Writers Grant. He lives and works in Los Angeles, USA.

Louise Lawler (USA, b. 1947) is a photographer and one of the foremost members of the Pictures Generation. Lawler was the subject of a major retrospective at the Museum of Modern Art, New York in 2017. She has held exhibitions at the Whitney Museum of American Art, New York; Stedelijk Museum, Amsterdam; National Museum of Art, Oslo; and Musée d’Art Moderne de La Ville de Paris. She lives and works in New York.

Catherine Opie (USA, b. 1961) is a photographer and educator. Her work has been exhibited at Wexner Center for the Arts, Ohio; Henie Onstad Art Center, Oslo; the Los Angeles County Museum of Art; Portland Art Museum; and the Guggenheim Museum, New York. She is the recipient of United States Artist Fellowship, Julius Shulman’s Excellence in Photography Award, and the Smithsonian’s Archive of American Art Medal. She lives and works in Los Angeles.

rennie museum opened in October 2009 in historic Wing Sang, the oldest structure in Vancouver’s Chinatown, to feature dynamic exhibitions comprising only art drawn from rennie collection. Showcasing works by emerging and established international artists, the exhibits, accompanied by supporting catalogues, are open free to the public through engaging guided tours. The museum’s commitment to providing access to arts and culture is also expressed through its education program, which offers free age-appropriate tours and customized workshops to children of all ages.

rennie collection is a globally recognized collection of contemporary art that focuses on works that tackle issues related to identity, social commentary and injustice, appropriation, and the nature of painting, photography, sculpture and film. Currently the collection includes works by over 370 emerging and established artists, with over fifty collected in depth. The Vancouver based collection engages actively with numerous museums globally through a robust, artist-centric, lending policy.

So despite the Wikipedia definition, it seems that portraits don’t always feature people. While Bogost didn’t jump into that particular rabbit hole, he did touch on the business side of art.

What about intellectual property?

Bogost doesn’t explicitly discuss this particular issue. It’s a big topic so I’m touching on it only lightly. If an artist works with an AI, the question of who owns the artwork could prove thorny. Is the copyright owner the computer scientist or the artist or both? Or does the AI artist-agent itself own the copyright? That last question may not be all that farfetched. Sophia, a social humanoid robot, has occasioned thought about ‘personhood.’ (Note: The robots mentioned in this posting have artificial intelligence.) From the Sophia (robot) Wikipedia entry (Note: Links have been removed),

Sophia has been interviewed in the same manner as a human, striking up conversations with hosts. Some replies have been nonsensical, while others have impressed interviewers such as 60 Minutes’ Charlie Rose.[12] In a piece for CNBC, when the interviewer expressed concerns about robot behavior, Sophia joked that he had “been reading too much Elon Musk. And watching too many Hollywood movies”.[27] Musk tweeted that Sophia should watch The Godfather and asked “what’s the worst that could happen?”[28][29] Business Insider’s chief UK editor Jim Edwards interviewed Sophia, and while the answers were “not altogether terrible”, he predicted it was a step towards “conversational artificial intelligence”.[30] At the 2018 Consumer Electronics Show, a BBC News reporter described talking with Sophia as “a slightly awkward experience”.[31]

On October 11, 2017, Sophia was introduced to the United Nations with a brief conversation with the United Nations Deputy Secretary-General, Amina J. Mohammed.[32] On October 25, at the Future Investment Summit in Riyadh, the robot was granted Saudi Arabian citizenship [emphasis mine], becoming the first robot ever to have a nationality.[29][33] This attracted controversy as some commentators wondered if this implied that Sophia could vote or marry, or whether a deliberate system shutdown could be considered murder. Social media users used Sophia’s citizenship to criticize Saudi Arabia’s human rights record. In December 2017, Sophia’s creator David Hanson said in an interview that Sophia would use her citizenship to advocate for women’s rights in her new country of citizenship; Newsweek criticized that “What [Hanson] means, exactly, is unclear”.[34] On November 27, 2018, Sophia was given a visa by Azerbaijan while attending the Global Influencer Day Congress held in Baku. On December 15, 2018, Sophia was appointed a Belt and Road Innovative Technology Ambassador by China.[35]

As for an AI artist-agent’s intellectual property rights, I have a July 10, 2017 posting featuring that question in more detail. Whether you read that piece or not, it seems obvious that artists might hesitate to call an AI agent a partner rather than a medium of expression. After all, a partner (and/or the computer scientist who developed the programme) might expect to share in property rights and profits but paint, marble, plastic, and other media used by artists don’t have those expectations.

Moving slightly off topic, in my July 10, 2017 posting I mentioned a competition (literary and performing arts rather than visual arts) called ‘Dartmouth College and its Neukom Institute Prizes in Computational Arts’. It was started in 2016 and, as of 2018, was still operational under this name: Creative Turing Tests. Assuming there’ll be contests for prizes in 2019, there are (from the contest site) [1] PoetiX, a competition in computer-generated sonnet writing; [2] Musical Style, composition algorithms in various styles, and human-machine improvisation …; and [3] DigiLit, algorithms able to produce “human-level” short story writing that is indistinguishable from an “average” human effort. You can find the contest site here.

An artificial synapse tuned by light, a ferromagnetic memristor, and a transparent, flexible artificial synapse

Down the memristor rabbit hole one more time. I started out with news about two new papers and inadvertently found two more. In a bid to keep this posting to a manageable size, I’m stopping at four.

UK

In a June 19, 2019 Nanowerk Spotlight article, Dr. Neil Kemp discusses memristors and some of his latest work; a couple of toy model sketches follow the excerpt (Note: A link has been removed),

Memristor (or memory resistors) devices are non-volatile electronic memory devices that were first theorized by Leon Chua in the 1970’s. However, it was some thirty years later that the first practical device was fabricated. This was in 2008 when a group led by Stanley Williams at HP Research Labs realized that switching of the resistance between a conducting and less conducting state in metal-oxide thin-film devices was showing Leon Chua’s memristor behaviour.

The high interest in memristor devices also stems from the fact that these devices emulate the memory and learning properties of biological synapses, i.e., the electrical resistance value of the device is dependent on the history of the current flowing through it.

There is a huge effort underway to use memristor devices in neuromorphic computing applications and it is now reasonable to imagine the development of a new generation of artificial intelligent devices with very low power consumption (non-volatile), ultra-fast performance and high-density integration.

These discoveries come at an important juncture in microelectronics, since there is increasing disparity between computational needs of Big Data, Artificial Intelligence (A.I.) and the Internet of Things (IoT), and the capabilities of existing computers. The increases in speed, efficiency and performance of computer technology cannot continue in the same manner as it has done since the 1960s.

To date, most memristor research has focussed on the electronic switching properties of the device. However, for many applications it is useful to have an additional handle (or degree of freedom) on the device to control its resistive state. For example memory and processing in the brain also involves numerous chemical and bio-chemical reactions that control the brain structure and its evolution through development.

To emulate this in a simple solid-state system composed of just switches alone is not possible. In our research, we are interested in using light to mediate this essential control.

We have demonstrated that light can be used to make short and long-term memory and we have shown how light can modulate a special type of learning, called spike timing dependent plasticity (STDP). STDP involves two neuronal spikes incident across a synapse at the same time. Depending on the relative timing of the spikes and their overlap across the synaptic cleft, the connection strength is either strengthened or weakened.

In our earlier work, we were only able to achieve small switching effects in memristors using light. In our latest work (Advanced Electronic Materials, “Percolation Threshold Enables Optical Resistive-Memory Switching and Light-Tuneable Synaptic Learning in Segregated Nanocomposites”), we take advantage of a percolating-like nanoparticle morphology to vastly increase the magnitude of the switching between electronic resistance states when light is incident on the device.

We have used an inhomogeneous percolating network consisting of metallic nanoparticles distributed in filamentary-like conduction paths. Electronic conduction and the resistance of the device is very sensitive to any disruption of the conduction path(s).

By embedding the nanoparticles in a polymer that can expand or contract with light the conduction pathways are broken or re-connected causing very large changes in the electrical resistance and memristance of the device.

Our devices could lead to the development of new memristor-based artificial intelligence systems that are adaptive and reconfigurable using a combination of optical and electronic signalling. Furthermore, they have the potential for the development of very fast optical cameras for artificial intelligence recognition systems.

Our work provides a nice proof-of-concept but the materials used mean the optical switching is slow. The materials are also not well suited to industry fabrication. In our on-going work we are addressing these switching speed issues whilst also focussing on industry compatible materials.

Currently we are working on a new type of optical memristor device that should give us orders of magnitude improvement in the optical switching speeds whilst also retaining a large difference between the resistance on and off states. We hope to be able to achieve nanosecond switching speeds. The materials used are also compatible with industry standard methods of fabrication.

The new devices should also have applications in optical communications, interfacing and photonic computing. We are currently looking for commercial investors to help fund the research on these devices so that we can bring the device specifications to a level of commercial interest.
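Kemp’s point that a memristor’s resistance depends on the history of the current flowing through it can be made concrete with the classic linear ion-drift model published by the HP group (Strukov et al., 2008). The sketch below implements that textbook toy model; it is not the physics of Kemp’s nanocomposite devices, and all parameter values are illustrative.

```python
# Linear ion-drift memristor model (after Strukov et al., 2008).
# Resistance depends on how much charge has already flowed -- i.e., history.
# Parameters are illustrative, not fitted to any device discussed above.
import math

R_ON, R_OFF = 100.0, 16e3   # ohms: fully doped / undoped resistance
D = 10e-9                   # m: device thickness
MU = 1e-14                  # m^2 s^-1 V^-1: dopant mobility

w = 0.1 * D                 # doped-region width: the device's "memory"
dt = 1e-6                   # s: integration time step

for step in range(200_000):
    t = step * dt
    v = math.sin(2 * math.pi * 10 * t)         # 10 Hz, 1 V sine drive
    m = R_ON * (w / D) + R_OFF * (1 - w / D)   # current memristance
    i = v / m
    w += MU * (R_ON / D) * i * dt              # state drifts with current
    w = min(max(w, 0.0), D)                    # keep state within bounds
    if step % 20_000 == 0:
        print(f"t={t:.3f}s  V={v:+.2f}V  M={m / 1e3:.2f} kOhm")
```

Run it and the printed memristance shows the synapse-like behaviour: the same applied voltage meets a different resistance depending on what came before.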

If you’re interested in memristors, Kemp’s article is well written and quite informative for nonexperts, assuming of course you can tolerate not understanding everything perfectly.
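One more toy illustration, since the excerpt leans on spike timing dependent plasticity (STDP): the standard exponential STDP window, in which a synapse strengthens when the presynaptic spike precedes the postsynaptic one and weakens in the reverse order. The amplitudes and time constants below are generic textbook values, not Kemp’s measurements.

```python
# Standard exponential STDP window: pre-before-post strengthens a synapse,
# post-before-pre weakens it. Generic textbook constants, not fitted data.
import math

A_PLUS, A_MINUS = 0.010, 0.012    # potentiation / depression amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0  # ms: decay of the timing window

def stdp_dw(t_pre_ms, t_post_ms):
    """Weight change for one pre/post spike pair."""
    dt = t_post_ms - t_pre_ms
    if dt > 0:    # pre fired first: strengthen
        return A_PLUS * math.exp(-dt / TAU_PLUS)
    if dt < 0:    # post fired first: weaken
        return -A_MINUS * math.exp(dt / TAU_MINUS)
    return 0.0

for dt in (-40, -10, -1, 1, 10, 40):
    print(f"delta-t = {dt:+3d} ms -> delta-w = {stdp_dw(0.0, dt):+.4f}")
```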

Here are links and citations for two papers. The first is the latest work referred to in the article, a May 2019 paper; the second is a paper appearing in July 2019.

Percolation Threshold Enables Optical Resistive‐Memory Switching and Light‐Tuneable Synaptic Learning in Segregated Nanocomposites by Ayoub H. Jaafar, Mary O’Neill, Stephen M. Kelly, Emanuele Verrelli, Neil T. Kemp. Advanced Electronic Materials DOI: https://doi.org/10.1002/aelm.201900197 First published: 28 May 2019

Wavelength dependent light tunable resistive switching graphene oxide nonvolatile memory devices by Ayoub H. Jaafar, N. T. Kemp. Carbon DOI: https://doi.org/10.1016/j.carbon.2019.07.007 Available online 3 July 2019

The first paper (May 2019) is definitely behind a paywall and the second paper (July 2019) appears to be behind a paywall.

Dr. Kemp’s work has been featured here previously in a January 3, 2018 posting in the subsection titled, Shining a light on the memristor.

China

This work from China was announced in a June 20, 2019 news item on Nanowerk,

Memristors, demonstrated by solid-state devices with continuously tunable resistance, have emerged as a new paradigm for self-adaptive networks that require synapse-like functions. Spin-based memristors offer advantages over other types of memristors because of their significant endurance and high energy efficiency.

However, it remains a challenge to build dense and functional spintronic memristors with structures and materials that are compatible with existing ferromagnetic devices. Ta/CoFeB/MgO heterostructures are commonly used in interfacial PMA-based [perpendicular magnetic anisotropy] magnetic tunnel junctions, which exhibit large tunnel magnetoresistance and are implemented in commercial MRAM [magnetic random access memory] products.

“To achieve the memristive function, DW [domain wall] is driven back and forth in a continuous manner in the CoFeB layer by applying in-plane positive or negative current pulses along the Ta layer, utilizing SOT that the current exerts on the CoFeB magnetization,” said Shuai Zhang, a coauthor in the paper. “Slowly propagating domain wall generates a creep in the detection area of the device, which yields a broad range of intermediate resistive states in the AHE [anomalous Hall effect] measurements. Consequently, AHE resistance is modulated in an analog manner, being controlled by the pulsed current characteristics including amplitude, duration, and repetition number.”

“For a follow-up study, we are working on more neuromorphic operations, such as spike-timing-dependent plasticity and paired pulsed facilitation,” concludes You. …

Here are links to and citations for the paper (Note: It’s a little confusing but I believe that one of the links will take you to the online version; as for the ‘open access’ link, keep reading),

A Spin–Orbit‐Torque Memristive Device by Shuai Zhang, Shijiang Luo, Nuo Xu, Qiming Zou, Min Song, Jijun Yun, Qiang Luo, Zhe Guo, Ruofan Li, Weicheng Tian, Xin Li, Hengan Zhou, Huiming Chen, Yue Zhang, Xiaofei Yang, Wanjun Jiang, Ka Shen, Jeongmin Hong, Zhe Yuan, Li Xi, Ke Xia, Sayeef Salahuddin, Bernard Dieny, Long You. Advanced Electronic Materials Volume 5, Issue 4 April 2019 (print version) 1800782 DOI: https://doi.org/10.1002/aelm.201800782 First published [online]: 30 January 2019 Note: there is another DOI, https://doi.org/10.1002/aelm.201970022 where you can have open access to Memristors: A Spin–Orbit‐Torque Memristive Device (Adv. Electron. Mater. 4/2019)

The paper published online in January 2019 is behind a paywall and the paper (almost the same title) published in April 2019 has a new DOI and is open access. Final note: I tried accessing the ‘free’ paper and all I opened was a free file of the artwork featuring the work from China on the back cover of the April 2019 issue of Advanced Electronic Materials.

Korea

Usually when I see the words transparency and flexibility, I expect to see graphene as one of the materials. That’s not the case for this paper (link to and citation follow),

Transparent and flexible photonic artificial synapse with piezo-phototronic modulator: Versatile memory capability and higher order learning algorithm by Mohit Kumar, Joondong Kim, Ching-Ping Wong. Nano Energy Volume 63, September 2019, 103843 DOI: https://doi.org/10.1016/j.nanoen.2019.06.039 Available online 22 June 2019

Here’s the abstract for the paper where you’ll see that the material is made up of zinc oxide and silver nanowires,

An artificial photonic synapse having tunable manifold synaptic response can be an essential step forward for the advancement of novel neuromorphic computing. In this work, we reported the development of highly transparent and flexible two-terminal ZnO/Ag-nanowires/PET photonic artificial synapse [emphasis mine]. The device shows purely photo-triggered all essential synaptic functions such as transition from short-to long-term plasticity, paired-pulse facilitation, and spike-timing-dependent plasticity, including in the versatile memory capability. Importantly, strain-induced piezo-phototronic effect within ZnO provides an additional degree of regulation to modulate all of the synaptic functions in multi-levels. The observed effect is quantitatively explained as a dynamic of photo-induced electron-hole trapping/detraining via the defect states such as oxygen vacancies. We revealed that the synaptic functions can be consolidated and converted by applied strain, which is not previously applied any of the reported synaptic devices. This study will open a new avenue to the scientific community to control and design highly transparent wearable neuromorphic computing.

This paper is behind a paywall.

Gene editing and personalized medicine: Canada

Back in the fall of 2018 I came across one of those overexcited pieces about personalized medicine and gene editing that are out there. This one came from an unexpected source, an author who is a “PhD Scientist in Medical Science (Blood and Vasculature)” (from Rick Gierczak’s LinkedIn profile).

It starts out promisingly enough although I’m beginning to dread the use of the word ‘precise’ where medicine is concerned (from a September 17, 2018 posting on the Science Borealis blog by Rick Gierczak; Note: Links have been removed),

CRISPR-Cas9 technology was accidentally discovered in the 1980s when scientists were researching how bacteria defend themselves against viral infection. While studying bacterial DNA called clustered regularly interspaced short palindromic repeats (CRISPR), they identified additional CRISPR-associated (Cas) protein molecules. Together, CRISPR and one of those protein molecules, termed Cas9, can locate and cut precise regions of bacterial DNA. By 2012, researchers understood that the technology could be modified and used more generally to edit the DNA of any plant or animal. In 2015, the American Association for the Advancement of Science chose CRISPR-Cas9 as science’s “Breakthrough of the Year”.

Today, CRISPR-Cas9 is a powerful and precise gene-editing tool [emphasis mine] made of two molecules: a protein that cuts DNA (Cas9) and a custom-made length of RNA that works like a GPS for locating the exact spot that needs to be edited (CRISPR). Once inside the target cell nucleus, these two molecules begin editing the DNA. After the desired changes are made, they use a repair mechanism to stitch the new DNA into place. Cas9 never changes, but the CRISPR molecule must be tailored for each new target — a relatively easy process in the lab. However, it’s not perfect, and occasionally the wrong DNA is altered [emphasis mine].
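As a toy illustration of that ‘GPS’ idea (and only that), here’s how one might scan a DNA string for a 20-letter guide sequence followed by the ‘NGG’ PAM motif that Cas9 requires next to its target. The sequences are invented, and real guide design involves far more: off-target scoring, delivery, chromatin context, and so on.

```python
# Toy illustration of CRISPR-Cas9 target finding: Cas9 cuts where the DNA
# matches the 20-nt guide sequence and is followed by an "NGG" PAM motif.
# Sequences are invented; real guide design is far more involved.
GUIDE = "GACGTTACCGGATTACCGTA"  # hypothetical 20-nt guide (DNA form)

def find_cut_sites(dna, guide):
    """Return indices where Cas9 would cut (~3 bp upstream of the PAM)."""
    sites = []
    for i in range(len(dna) - len(guide) - 2):
        target = dna[i:i + len(guide)]
        pam = dna[i + len(guide):i + len(guide) + 3]
        if target == guide and pam[1:] == "GG":  # "NGG": any base, then GG
            sites.append(i + len(guide) - 3)     # blunt cut ~3 bp from PAM
    return sites

dna = "TTT" + GUIDE + "TGG" + "ACGT" * 5  # toy genome with one target site
print(find_cut_sites(dna, GUIDE))          # -> [20]
```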

Note that Gierczak makes a point of mentioning that CRISPR/Cas9 is “not perfect.” And then, he gets excited (Note: Links have been removed),

CRISPR-Cas9 has the potential to treat serious human diseases, many of which are caused by a single “letter” mutation in the genetic code (A, C, T, or G) that could be corrected by precise editing. [emphasis mine] Some companies are taking notice of the technology. A case in point is CRISPR Therapeutics, which recently developed a treatment for sickle cell disease, a blood disorder that causes a decrease in oxygen transport in the body. The therapy targets a special gene called fetal hemoglobin that’s switched off a few months after birth. Treatment involves removing stem cells from the patient’s bone marrow and editing the gene to turn it back on using CRISPR-Cas9. These new stem cells are returned to the patient ready to produce normal red blood cells. In this case, the risk of error is eliminated because the new cells are screened for the correct edit before use.

The breakthroughs shown by companies like CRISPR Therapeutics are evidence that personalized medicine has arrived. [emphasis mine] However, these discoveries will require government regulatory approval from the countries where the treatment is going to be used. In the US, the Food and Drug Administration (FDA) has developed new regulations allowing somatic (i.e., non-germ) cell editing and clinical trials to proceed. [emphasis mine]

The potential treatment for sickle cell disease is exciting but Gierczak offers no evidence that this treatment or any unnamed others constitute proof that “personalized medicine has arrived.” In fact, Goldman Sachs, a US-based investment bank, makes the case that it never will.

Cost/benefit analysis

Edward Abrahams, president of the Personalized Medicine Coalition (US-based), advocates for personalized medicine while noting, in passing, market forces as represented by Goldman Sachs, in his May 23, 2018 piece for statnews.com (Note: A link has been removed),

One of every four new drugs approved by the Food and Drug Administration over the last four years was designed to become a personalized (or “targeted”) therapy that zeros in on the subset of patients likely to respond positively to it. That’s a sea change from the way drugs were developed and marketed 10 years ago.

Some of these new treatments have extraordinarily high list prices. But focusing solely on the cost of these therapies rather than on the value they provide threatens the future of personalized medicine.

… most policymakers are not asking the right questions about the benefits of these treatments for patients and society. Influenced by cost concerns, they assume that prices for personalized tests and treatments cannot be justified even if they make the health system more efficient and effective by delivering superior, longer-lasting clinical outcomes and increasing the percentage of patients who benefit from prescribed treatments.

Goldman Sachs, for example, issued a report titled “The Genome Revolution.” It argues that while “genome medicine” offers “tremendous value for patients and society,” curing patients may not be “a sustainable business model.” [emphasis mine] The analysis underlines that the health system is not set up to reap the benefits of new scientific discoveries and technologies. Just as we are on the precipice of an era in which gene therapies, gene-editing, and immunotherapies promise to address the root causes of disease, Goldman Sachs says that these therapies have a “very different outlook with regard to recurring revenue versus chronic therapies.”

Let’s just chew on (contemplate) this one for a minute: “curing patients may not be ‘a sustainable business model’!”

Coming down to earth: policy

While I find Gierczak to be over-enthused, he, like Abrahams, emphasizes the importance of new policy; in his case, the focus is Canadian policy. From Gierczak’s September 17, 2018 posting (Note: Links have been removed),

In Canada, companies need approval from Health Canada. But a 2004 law called the Assisted Human Reproduction Act (AHR Act) states that it’s a criminal offence “to alter the genome of a human cell, or in vitro embryo, that is capable of being transmitted to descendants”. The Act is so broadly written that Canadian scientists are prohibited from using the CRISPR-Cas9 technology on even somatic cells. Today, Canada is one of the few countries in the world where treating a disease with CRISPR-Cas9 is a crime.

On the other hand, some countries provide little regulatory oversight for editing either germ or somatic cells. In China, a company often only needs to satisfy the requirements of the local hospital where the treatment is being performed. And, if germ-cell editing goes wrong, there is little recourse for the future generations affected.

The AHR Act was introduced to regulate the use of reproductive technologies like in vitro fertilization and research related to cloning human embryos during the 1980s and 1990s. Today, we live in a time when medical science, and its role in Canadian society, is rapidly changing. CRISPR-Cas9 is a powerful tool, and there are aspects of the technology that aren’t well understood and could potentially put patients at risk if we move ahead too quickly. But the potential benefits are significant. Updated legislation that acknowledges both the risks and current realities of genomic engineering [emphasis mine] would relieve the current obstacles and support a path toward the introduction of safe new therapies.

Criminal ban on human gene-editing of inheritable cells (in Canada)

I had no idea there was a criminal ban on the practice until reading this January 2017 editorial by Bartha Maria Knoppers, Rosario Isasi, Timothy Caulfield, Erika Kleiderman, Patrick Bedford, Judy Illes, Ubaka Ogbogu, Vardit Ravitsky, & Michael Rudnicki for (Nature) npj Regenerative Medicine (Note: Links have been removed),

Driven by the rapid evolution of gene editing technologies, international policy is examining which regulatory models can address the ensuing scientific, socio-ethical and legal challenges for regenerative and personalised medicine.1 Emerging gene editing technologies, including the CRISPR/Cas9 2015 scientific breakthrough,2 are powerful, relatively inexpensive, accurate, and broadly accessible research tools.3 Moreover, they are being utilised throughout the world in a wide range of research initiatives with a clear eye on potential clinical applications. Considering the implications of human gene editing for selection, modification and enhancement, it is time to re-examine policy in Canada relevant to these important advances in the history of medicine and science, and the legislative and regulatory frameworks that govern them. Given the potential human reproductive applications of these technologies, careful consideration of these possibilities, as well as ethical and regulatory scrutiny must be a priority.4

With the advent of human embryonic stem cell research in 1978, the birth of Dolly (the cloned sheep) in 1996 and the Raelian cloning hoax in 2003, the environment surrounding the enactment of Canada’s 2004 Assisted Human Reproduction Act (AHRA) was the result of a decade of polarised debate,5 fuelled by dystopian and utopian visions for future applications. Rightly or not, this led to the AHRA prohibition on a wide range of activities, including the creation of embryos (s. 5(1)(b)) or chimeras (s. 5(1)(i)) for research and in vitro and in vivo germ line alterations (s. 5(1)(f)). Sanctions range from a fine (up to $500,000) to imprisonment (up to 10 years) (s. 60 AHRA).

In Canada, the criminal ban on gene editing appears clear, the Act states that “No person shall knowingly […] alter the genome of a cell of a human being or in vitro embryo such that the alteration is capable of being transmitted to descendants;” [emphases mine] (s. 5(1)(f) AHRA). This approach is not shared worldwide as other countries such as the United Kingdom, take a more regulatory approach to gene editing research.1 Indeed, as noted by the Law Reform Commission of Canada in 1982, criminal law should be ‘an instrument of last resort’ used solely for “conduct which is culpable, seriously harmful, and generally conceived of as deserving of punishment”.6 A criminal ban is a suboptimal policy tool for science as it is inflexible, stifles public debate, and hinders responsiveness to the evolving nature of science and societal attitudes.7 In contrast, a moratorium such as the self-imposed research moratorium on human germ line editing called for by scientists in December 20158 can at least allow for a time limited pause. But like bans, they may offer the illusion of finality and safety while halting research required to move forward and validate innovation.

On October 1st, 2016, Health Canada issued a Notice of Intent to develop regulations under the AHRA but this effort is limited to safety and payment issues (i.e. gamete donation). Today, there is a need for Canada to revisit the laws and policies that address the ethical, legal and social implications of human gene editing. The goal of such a critical move in Canada’s scientific and legal history would be a discussion of the right of Canadians to benefit from the advancement of science and its applications as promulgated in article 27 of the Universal Declaration of Human Rights9 and article 15(b) of the International Covenant on Economic, Social and Cultural Rights,10 which Canada has signed and ratified. Such an approach would further ensure the freedom of scientific endeavour both as a principle of a liberal democracy and as a social good, while allowing Canada to be engaged with the international scientific community.

Even though it’s a bit old, I still recommend reading the open access editorial in full, if you have the time.

One last thing about the paper, the acknowledgements,

Sponsored by Canada’s Stem Cell Network, the Centre of Genomics and Policy of McGill University convened a ‘think tank’ on the future of human gene editing in Canada with legal and ethics experts as well as representatives and observers from government in Ottawa (August 31, 2016). The experts were Patrick Bedford, Janetta Bijl, Timothy Caulfield, Judy Illes, Rosario Isasi, Jonathan Kimmelman, Erika Kleiderman, Bartha Maria Knoppers, Eric Meslin, Cate Murray, Ubaka Ogbogu, Vardit Ravitsky, Michael Rudnicki, Stephen Strauss, Philip Welford, and Susan Zimmerman. The observers were Geneviève Dubois-Flynn, Danika Goosney, Peter Monette, Kyle Norrie, and Anthony Ridgway.

Competing interests

The authors declare no competing interests.

Both McGill and the Stem Cell Network pop up again. A November 8, 2017 article about the need for new Canadian gene-editing policies by Tom Blackwell for the National Post features some familiar names (Did someone have a budget for public relations and promotion?),

It’s one of the most exciting, and controversial, areas of health science today: new technology that can alter the genetic content of cells, potentially preventing inherited disease — or creating genetically enhanced humans.

But Canada is among the few countries in the world where working with the CRISPR gene-editing system on cells whose DNA can be passed down to future generations is a criminal offence, with penalties of up to 10 years in jail.

This week, one major science group announced it wants that changed, calling on the federal government to lift the prohibition and allow researchers to alter the genome of inheritable “germ” cells and embryos.

The potential of the technology is huge and the theoretical risks like eugenics or cloning are overplayed, argued a panel of the Stem Cell Network.

The step would be a “game-changer,” said Bartha Knoppers, a health-policy expert at McGill University, in a presentation to the annual Till & McCulloch Meetings of stem-cell and regenerative-medicine researchers [These meetings were originally known as the Stem Cell Network’s Annual General Meeting {AGM}]. [emphases mine]

“I’m completely against any modification of the human genome,” said the unidentified meeting attendee. “If you open this door, you won’t ever be able to close it again.”

If the ban is kept in place, however, Canadian scientists will fall further behind colleagues in other countries, say the experts behind the statement; they argue possible abuses can be prevented with good ethical oversight.

“It’s a human-reproduction law, it was never meant to ban and slow down and restrict research,” said Vardit Ravitsky, a University of Montreal bioethicist who was part of the panel. “It’s a sort of historical accident … and now our hands are tied.”

There are fears, as well, that CRISPR could be used to create improved humans who are genetically programmed to have certain facial or other features, or that the editing could have harmful side effects. Regardless, none of it is happening in Canada, good or bad.

In fact, the Stem Cell Network panel is arguably skirting around the most contentious applications of the technology. It says it is asking the government merely to legalize research for its own sake on embryos and germ cells — those in eggs and sperm — not genetic editing of embryos used to actually get women pregnant.

The highlighted portions in the last two paragraphs of the excerpt were written one year prior to the claims by a Chinese scientist that he had run a clinical trial resulting in gene-edited twins, Lulu and Nana. (See my November 28, 2018 posting for a comprehensive overview of the original furor.) I have yet to publish a followup posting featuring the news that the CRISPR twins may have been ‘improved’ more extensively than originally realized. The initial reports about the twins focused on an illness-related reason (making them HIV ‘immune’) but made no mention of enhanced cognitive skills, a side effect of eliminating the gene that would make them HIV ‘immune’. To date, the researcher has not made the bulk of his data available for an in-depth analysis to support his claim that he successfully gene-edited the twins. As well, there were apparently seven other pregnancies coming to term as part of the researcher’s clinical trial and there has been no news about those births.

Risk analysis innovation

Before moving on to the innovation of risk analysis, I want to focus a little more on at least one of the risks that gene-editing might present. Gierczak noted that CRISPR/Cas9 is “not perfect,” which acknowledges the truth but doesn’t convey all that much information.

While the terms ‘precision’ and ‘scissors’ are used frequently when describing the CRISPR technique, scientists actually mean that the technique is significantly ‘more precise’ than other techniques but they are not referencing an engineering level of precision. As for the ‘scissors’, it’s an analogy scientists like to use but in fact CRISPR is not as efficient and precise as a pair of scissors.

Michael Le Page in a July 16, 2018 article for New Scientist lays out some of the issues (Note: A link has been removed),

A study of CRISPR suggests we shouldn’t rush into trying out CRISPR genome editing inside people’s bodies just yet. The technique can cause big deletions or rearrangements of DNA [emphasis mine], says Allan Bradley of the Wellcome Sanger Institute in the UK, meaning some therapies based on CRISPR may not be quite as safe as we thought.

The CRISPR genome editing technique is revolutionising biology, enabling us to create new varieties of plants and animals and develop treatments for a wide range of diseases.

The CRISPR Cas9 protein works by cutting the DNA of a cell in a specific place. When the cell repairs the damage, a few DNA letters get changed at this spot – an effect that can be exploited to disable genes.

At least, that’s how it is supposed to work. But in studies of mice and human cells, Bradley’s team has found that in around a fifth of cells, CRISPR causes deletions or rearrangements more than 100 DNA letters long. These surprising changes are sometimes thousands of letters long.

“I do believe the findings are robust,” says Gaetan Burgio of the Australian National University, an expert on CRISPR who has debunked previous studies questioning the method’s safety. “This is a well-performed study and fairly significant.”

I covered the Bradley paper and the concerns in a July 17, 2018 posting, ‘The CRISPR (clustered regularly interspaced short palindromic repeats)-CAS9 gene-editing technique may cause new genetic damage kerfuffle‘. (The ‘kerfuffle’ was in reference to a report that the CRISPR market was affected by the publication of Bradley’s paper.)

Despite not moving swiftly enough for some researchers, Health Canada has nonetheless managed to release an ‘outcome’ report about a consultation/analysis started in October 2016. Before getting to the consultation’s outcome, it’s interesting to look at how the consultation’s call for responses was described (from Health Canada’s Toward a strengthened Assisted Human Reproduction Act: A Consultation with Canadians on Key Policy Proposals webpage),

In October 2016, recognizing the need to strengthen the regulatory framework governing assisted human reproduction in Canada, Health Canada announced its intention to bring into force the dormant sections of the Assisted Human Reproduction Act and to develop the necessary supporting regulations.

This consultation document provides an overview of the key policy proposals that will help inform the development of regulations to support bringing into force Section 10, Section 12 and Sections 45-58 of the Act. Specifically, the policy proposals describe the Department’s position on the following:

Section 10: Safety of Donor Sperm and Ova

  • Scope and application
  • Regulated parties and their regulatory obligations
  • Processing requirements, including donor suitability assessment
  • Record-keeping and traceability

Section 12: Reimbursement

  • Expenditures that may be reimbursed
  • Process for reimbursement
  • Creation and maintenance of records

Sections 45-58: Administration and Enforcement

  • Scope of the administration and enforcement framework
  • Role of inspectors designated under the Act

The purpose of the document is to provide Canadians with an opportunity to review the policy proposals and to provide feedback [emphasis mine] prior to the Department finalizing policy decisions and developing the regulations. In addition to requesting stakeholders’ general feedback on the policy proposals, the Department is also seeking input on specific questions, which are included throughout the document.

It took me a while to find the relevant section (in particular, take note of ‘Federal Regulatory Oversight’),

3.2. AHR in Canada Today

Today, an increasing number of Canadians are turning to AHR technologies to grow or build their families. A 2012 Canadian study found that infertility is on the rise in Canada, with roughly 16% of heterosexual couples experiencing infertility. In addition to rising infertility, the trend of delaying marriage and parenthood, scientific advances in cryopreserving ova, and the increasing use of AHR by LGBTQ2 couples and single parents to build a family are all contributing to an increase in the use of AHR technologies.

The growing use of reproductive technologies by Canadians to help build their families underscores the need to strengthen the AHR Act. While the approach to regulating AHR varies from country to country, Health Canada has considered international best practices and the need for regulatory alignment when developing the proposed policies set out in this document. …

3.2.1 Federal Regulatory Oversight

Although the scope of the AHR Act was significantly reduced in 2012 and some of the remaining sections have not yet been brought into force, there are many important sections of the Act that are currently administered and enforced by Health Canada, as summarized generally below:

Section 5: Prohibited Scientific and Research Procedures
Section 5 prohibits certain types of scientific research and clinical procedures that are deemed unacceptable, including: human cloning, the creation of an embryo for non-reproductive purposes, maintaining an embryo outside the human body beyond the fourteenth day, sex selection for non-medical reasons, altering the genome in a way that could be transmitted to descendants, and creating a chimera or a hybrid. [emphasis mine]

….

It almost seems as if they were hiding the section that broached the human gene-editing question. It doesn’t seem to have worked, as there appear to be some very motivated parties determined to reframe the discussion. Health Canada’s ‘outcome’ report, published March 2019, What we heard: A summary of scanning and consultations on what’s next for health product regulation reflects the success of those efforts,

1.0 Introduction and Context

Scientific and technological advances are accelerating the pace of innovation. These advances are increasingly leading to the development of health products that are better able to predict, define, treat, and even cure human diseases. Globally, many factors are driving regulators to think about how to enable health innovation. To this end, Health Canada has been expanding beyond existing partnerships and engaging both domestically and internationally. This expanding landscape of products and services comes with a range of new challenges and opportunities.

In keeping up to date with emerging technologies and working collaboratively through strategic partnerships, Health Canada seeks to position itself as a regulator at the forefront of health innovation. Following the targeted sectoral review of the Health and Biosciences Sector Regulatory Review consultation by the Treasury Board Secretariat, Health Canada held a number of targeted meetings with a broad range of stakeholders.

This report outlines the methodologies used to look ahead at the emerging health technology environment, [emphasis mine] the potential areas of focus that resulted, and the key findings from consultations.

… the Department identified the following key drivers that are expected to shape the future of health innovation:

  1. The use of “big data” to inform decision-making: Health systems are generating more data, and becoming reliant on this data. The increasing accuracy, types, and volume of data available in real time enable automation and machine learning that can forecast activity, behaviour, or trends to support decision-making.
  2. Greater demand for citizen agency: Canadians increasingly want and have access to more information, resources, options, and platforms to manage their own health (e.g., mobile apps, direct-to-consumer services, decentralization of care).
  3. Increased precision and personalization in health care delivery: Diagnostic tools and therapies are increasingly able to target individual patients with customized therapies (e.g., individual gene therapy).
  4. Increased product complexity: Increasingly complex products do not fit well within conventional product classifications and standards (e.g., 3D printing).
  5. Evolving methods for production and distribution: In some cases, manufacturers and supply chains are becoming more distributed, challenging the current framework governing production and distribution of health products.
  6. The ways in which evidence is collected and used are changing: The processes around new drug innovation, research and development, and designing clinical trials are evolving in ways that are more flexible and adaptive.

With these key drivers in mind, the Department selected the following six emerging technologies for further investigation to better understand how the health product space is evolving:

  1. Artificial intelligence, including activities such as machine learning, neural networks, natural language processing, and robotics.
  2. Advanced cell therapies, such as individualized cell therapies tailor-made to address specific patient needs.
  3. Big data, from sources such as sensors, genetic information, and social media that are increasingly used to inform patient and health care practitioner decisions.
  4. 3D printing of health products (e.g., implants, prosthetics, cells, tissues).
  5. New ways of delivering drugs that bring together different product lines and methods (e.g., nano-carriers, implantable devices).
  6. Gene editing, including individualized gene therapies that can assist in preventing and treating certain diseases.

Next, to test the drivers identified and further investigate emerging technologies, the Department consulted key organizations and thought leaders across the country with expertise in health innovation. To this end, Health Canada held seven workshops with over 140 representatives from industry associations, small-to-medium sized enterprises and start-ups, larger multinational companies, investors, researchers, and clinicians in Ottawa, Toronto, Montreal, and Vancouver. [emphases mine]

The ‘outcome’ report, ‘What we heard …’, is well worth reading in its entirety; it’s about 9 pp.

I have one comment: ‘stakeholders’ don’t seem to include anyone who isn’t “from industry associations, small-to-medium sized enterprises and start-ups, larger multinational companies, investors, researchers, and clinicians” or from “Ottawa, Toronto, Montreal, and Vancouver.” Aren’t the rest of us stakeholders?

Innovating risk analysis

This line in the report caught my eye (from Health Canada’s Toward a strengthened Assisted Human Reproduction Act: A Consultation with Canadians on Key Policy Proposals webpage),

There is increasing need to enable innovation in a flexible, risk-based way, with appropriate oversight to ensure safety, quality, and efficacy. [emphases mine]

It reminded me of the 2019 federal budget (from my March 22, 2019 posting). One comment before proceeding: regulation and risk are tightly linked, so by innovating regulation the government is, by extension, also innovating risk analysis,

… Budget 2019 introduces the first three “Regulatory Roadmaps” to specifically address stakeholder issues and irritants in these sectors, informed by over 140 responses [emphasis mine] from businesses and Canadians across the country, as well as recommendations from the Economic Strategy Tables.

Introducing Regulatory Roadmaps

These Roadmaps lay out the Government’s plans to modernize regulatory frameworks, without compromising our strong health, safety, and environmental protections. They contain proposals for legislative and regulatory amendments as well as novel regulatory approaches to accommodate emerging technologies, including the use of regulatory sandboxes and pilot projects—better aligning our regulatory frameworks with industry realities.

Budget 2019 proposes the necessary funding and legislative revisions so that regulatory departments and agencies can move forward on the Roadmaps, including providing the Canadian Food Inspection Agency, Health Canada and Transport Canada with up to $219.1 million over five years, starting in 2019–20 (with $0.5 million in remaining amortization), and $3.1 million per year on an ongoing basis.

In the coming weeks, the Government will be releasing the full Regulatory Roadmaps for each of the reviews, as well as timelines for enacting specific initiatives, which can be grouped in the following three main areas:

What Is a Regulatory Sandbox? Regulatory sandboxes are controlled “safe spaces” in which innovative products, services, business models and delivery mechanisms can be tested without immediately being subject to all of the regulatory requirements.
– European Banking Authority, 2017

Establishing a regulatory sandbox for new and innovative medical products
The regulatory approval system has not kept up with new medical technologies and processes. Health Canada proposes to modernize regulations to put in place a regulatory sandbox for new and innovative products, such as tissues developed through 3D printing, artificial intelligence, and gene therapies targeted to specific individuals. [emphasis mine]

Modernizing the regulation of clinical trials
Industry and academics have expressed concerns that regulations related to clinical trials are overly prescriptive and inconsistent. Health Canada proposes to implement a risk-based approach [emphasis mine] to clinical trials to reduce costs to industry and academics by removing unnecessary requirements for low-risk drugs and trials. The regulations will also provide the agri-food industry with the ability to carry out clinical trials within Canada on products such as food for special dietary use and novel foods.

Does the government always get 140 responses from a consultation process? Moving on, I agree with finding new approaches to regulatory processes and oversight and, by extension, new approaches to risk analysis.

Earlier in this post, I asked if someone had a budget for public relations/promotion. I wasn’t joking. My March 22, 2019 posting also included these line items in the proposed 2019 budget,

Budget 2019 proposes to make additional investments in support of the following organizations:
Stem Cell Network: Stem cell research—pioneered by two Canadians in the 1960s [James Till and Ernest McCulloch]—holds great promise for new therapies and medical treatments for respiratory and heart diseases, spinal cord injury, cancer, and many other diseases and disorders. The Stem Cell Network is a national not-for-profit organization that helps translate stem cell research into clinical applications and commercial products. To support this important work and foster Canada’s leadership in stem cell research, Budget 2019 proposes to provide the Stem Cell Network with renewed funding of $18 million over three years, starting in 2019–20.

Genome Canada: The insights derived from genomics—the study of the entire genetic information of living things encoded in their DNA and related molecules and proteins—hold the potential for breakthroughs that can improve the lives of Canadians and drive innovation and economic growth. Genome Canada is a not-for-profit organization dedicated to advancing genomics science and technology in order to create economic and social benefits for Canadians. To support Genome Canada’s operations, Budget 2019 proposes to provide Genome Canada with $100.5 million over five years, starting in 2020–21. This investment will also enable Genome Canada to launch new large-scale research competitions and projects, in collaboration with external partners, ensuring that Canada’s research community continues to have access to the resources needed to make transformative scientific breakthroughs and translate these discoveries into real-world applications.

Years ago, I managed to find a webpage with all of the proposals various organizations were submitting to a government budget committee. It was eye-opening. You could tell which organizations had hired someone who knew the current government buzzwords and what a government bureaucrat would want to hear, and which organizations hadn’t.

Of course, if the government of the day is adamantly against or uninterested, no amount of persuasion will get your organization more money in the budget.

Finally

Reluctantly, I am inclined to explore the topic of emerging technologies such as gene-editing, not only in the field of agriculture (for gene-editing of plants, fish, and animals, see my November 28, 2018 posting) but also with humans. At the very least, the topic needs to be discussed, whether or not we choose to participate.

If you are interested in the arguments against changing Canada’s prohibition on gene-editing of humans, there’s an October 2, 2017 posting on Impact Ethics by Françoise Baylis, Professor and Canada Research Chair in Bioethics and Philosophy at Dalhousie University, and Alana Cattapan of the Johnson Shoyama Graduate School of Public Policy at the University of Saskatchewan, which makes some compelling arguments. Of course, it was written before the CRISPR twins (my November 28, 2018 posting).

Recalling CRISPR Therapeutics (mentioned by Gierczak), the company received permission to run clinical trials in the US in October 2018 after the FDA (US Food and Drug Administration) lifted an earlier ban on its trials, according to an Oct. 10, 2018 article by Frank Vinhuan for exome,

The partners also noted that their therapy is making progress outside of the U.S. They announced that they have received regulatory clearance in “multiple countries” to begin tests of the experimental treatment in both sickle cell disease and beta thalassemia, …

It seems to me that the quotes around “multiple countries” are meant to suggest doubt of some kind. Generally speaking, company representatives make those kinds of generalizations when they’re trying to pump up their copy, e.g., a 50% increase in attendance with no absolute numbers to tell you what that means. It could mean two people attended the first year and then brought a friend the next, or that 100 people attended and the next year there were 150.

Despite attempts to declare that personalized medicine has arrived, I think everything is still in flux with no preordained outcome. The future has yet to be determined, but it will be, and I, for one, would like to have some say in the matter.

Summer (2019) Institute on AI (artificial intelligence) Societal Impacts, Governance, and Ethics in Alberta, Canada

The deadline for applications is April 7, 2019. As for whether or not you might like to attend, here’s more from a joint March 11, 2019 Alberta Machine Intelligence Institute (Amii)/Canadian Institute for Advanced Research (CIFAR)/University of California at Los Angeles (UCLA) Law School news release (also on globalnewswire.com),

What will Artificial Intelligence (AI) mean for society? That’s the question scholars from a variety of disciplines will explore during the inaugural Summer Institute on AI Societal Impacts, Governance, and Ethics. Summer Institute, co-hosted by the Alberta Machine Intelligence Institute (Amii) and CIFAR, with support from UCLA School of Law, takes place July 22-24, 2019 in Edmonton, Canada.

“Recent advances in AI have brought a surge of attention to the field – both excitement and concern,” says co-organizer and UCLA professor, Edward Parson. “From algorithmic bias to autonomous vehicles, personal privacy to automation replacing jobs. Summer Institute will bring together exceptional people to talk about how humanity can receive the benefits and not get the worst harms from these rapid changes.”

Summer Institute brings together experts, grad students and researchers from multiple backgrounds to explore the societal, governmental, and ethical implications of AI. A combination of lectures, panels, and participatory problem-solving, this comprehensive interdisciplinary event aims to build understanding and action around these high-stakes issues.

“Machine intelligence is opening transformative opportunities across the world,” says John Shillington, CEO of Amii, “and Amii is excited to bring together our own world-leading researchers with experts from areas such as law, philosophy and ethics for this important discussion. Interdisciplinary perspectives will be essential to the ongoing development of machine intelligence and for ensuring these opportunities have the broadest reach possible.”

Over the three-day program, 30 graduate-level students and early-career researchers will engage with leading experts and researchers including event co-organizers: Western University’s Daniel Lizotte, Amii’s Alona Fyshe and UCLA’s Edward Parson. Participants will also have a chance to shape the curriculum throughout this uniquely interactive event.

Summer Institute takes place prior to Deep Learning and Reinforcement Learning Summer School, and includes a combined event on July 24th [2019] for both Summer Institute and Summer School participants.

Visit dlrlsummerschool.ca/the-summer-institute to apply; applications close April 7, 2019.

View our Summer Institute Biographies & Boilerplates for more information on confirmed faculty members and co-hosting organizations. Follow the conversation through social media channels using the hashtag #SI2019.

Media Contact: Spencer Murray, Director of Communications & Public Relations, Amii
t: 587.415.6100 | c: 780.991.7136 | e: spencer.murray@amii.ca

There’s a bit more information on The Summer Institute on AI and Society webpage (on the Deep Learning and Reinforcement Learning Summer School 2019 website) such as this more complete list of speakers,

Confirmed speakers at Summer Institute include:

Alona Fyshe, University of Alberta/Amii (SI co-organizer)
Edward Parson, UCLA (SI co-organizer)
Daniel Lizotte, Western University (SI co-organizer)
Geoffrey Rockwell, University of Alberta
Graham Taylor, University of Guelph/Vector Institute
Rob Lempert, Rand Corporation
Gary Marchant, Arizona State University
Richard Re, UCLA
Evan Selinger, Rochester Institute of Technology
Elana Zeide, UCLA

Two questions: why are all the summer school faculty either Canada- or US-based? What about South American, Asian, Middle Eastern, and other thinkers?

One last thought, I wonder if this ‘AI & ethics summer institute’ has anything to do with the Pan-Canadian Artificial Intelligence Strategy, which CIFAR administers and where both the University of Alberta and Vector Institute are members.

Media registration for United Nations 3rd AI (artificial intelligence) for Good Global Summit

This is strictly for folks who have media accreditation. First, the news about the summit and then some detail about how you might gain accreditation should you be interested in going to Switzerland. Warning: the International Telecommunication Union, which is holding this summit, is a United Nations agency, and you will note almost an entire paragraph of ‘alphabet soup’ when all the ‘sister’ agencies involved are listed.

From the March 21, 2019 International Telecommunication Union (ITU) media advisory (Note: There have been some changes to the formatting),

Geneva, 21 March 2019

Artificial Intelligence (AI) has taken giant leaps forward in recent years, inspiring growing confidence in AI’s ability to assist in solving some of humanity’s greatest challenges. Leaders in AI and humanitarian action are convening on the neutral platform offered by the United Nations to work towards AI improving the quality and sustainability of life on our planet.

The 2017 summit marked the beginning of global dialogue on the potential of AI to act as a force for good. The action-oriented 2018 summit gave rise to numerous ‘AI for Good’ projects, including an ‘AI for Health’ Focus Group, now led by ITU and the World Health Organization (WHO). The 2019 summit will continue to connect AI innovators with public and private-sector decision-makers, building collaboration to maximize the impact of ‘AI for Good’.

Organized by the International Telecommunication Union (ITU) – the United Nations specialized agency for information and communication technology (ICT) – in partnership with the XPRIZE Foundation, the Association for Computing Machinery (ACM) and close to 30 sister United Nations agencies, the 3rd annual AI for Good Global Summit in Geneva, 28-31 May, is the leading United Nations platform for inclusive dialogue on AI. The goal of the summit is to identify practical applications of AI to accelerate progress towards the United Nations Sustainable Development Goals.

►►► MEDIA REGISTRATION IS NOW OPEN ◄◄◄

Media are recommended to register in advance to receive key announcements in the run-up to the summit.

WHAT: The summit attracts a cross-section of AI experts from industry and academia, global business leaders, Heads of UN agencies, ICT ministers, non-governmental organizations, and civil society.

The summit is designed to generate ‘AI for Good’ projects able to be enacted in the near term, guided by the summit’s multi-stakeholder and inter-disciplinary audience. It also formulates supporting strategies to ensure trusted, safe and inclusive development of AI technologies and equitable access to their benefits.

The 2019 summit will highlight AI’s value in advancing education, healthcare and wellbeing, social and economic equality, space research, and smart and safe mobility. It will propose actions to assist high-potential AI solutions in achieving global scale. It will host debate around unintended consequences of AI as well as AI’s relationship with art and culture. A ‘learning day’ will offer potential AI adopters an audience with leading AI experts and educators.

A dynamic show floor will demonstrate innovations at the cutting edge of AI research and development, such as the IBM Watson live debater; the Fusion collaborative exoskeleton; RoboRace, the world’s first self-driving electric racing car; avatar prototypes, and the ElliQ social robot for the care of the elderly. Summit attendees can also look forward to AI-inspired performances from world-renowned musician Jojo Mayer and award-winning vocal and visual artist Reeps One.

WHEN: 28-31 May 2019
WHERE: International Conference Centre Geneva, 17 Rue de Varembé, Geneva, Switzerland

WHO: Over 100 speakers have been confirmed to date, including:

Jim Hagemann Snabe – Chairman, Siemens​​
Cédric Villani – AI advisor to the President of France, and Mathematics Fields Medal Winner
Jean-Philippe Courtois – President of Global Operations, Microsoft
Anousheh Ansari – CEO, XPRIZE Foundation, Space Ambassador
Yves Daccord – Director General, International Committee of the Red Cross
Yan Huang – Director AI Innovation, Baidu
Timnit Gebru – Head of AI Ethics, Google
Vladimir Kramnik – World Chess Champion
Vicki Hanson – CEO, ACM
Zoubin Ghahramani – Chief Scientist, Uber, and Professor of Engineering, University of Cambridge
Lucas di Grassi – Formula E World Racing Champion, CEO of Roborace

Confirmed speakers also include C-level and expert representatives of Bosch, Botnar Foundation, Byton, Cambridge Quantum Computing, the cities of Montreal and Pittsburgh, Darktrace, Deloitte, EPFL, European Space Agency, Factmata, Google, IBM, IEEE, IFIP, Intel, IPSoft, Iridescent, MasterCard, Mechanica.ai, Minecraft, NASA, Nethope, NVIDIA, Ocean Protocol, Open AI, Philips, PWC, Stanford University, University of Geneva, and WWF.

Please visit the summit programme for more information on the latest speakers, breakthrough sessions and panels.

The summit is organized in partnership with the following sister United Nations agencies: CTBTO, ICAO, ILO, IOM, UNAIDS, UNCTAD, UNDESA, UNDPA, UNEP, UNESCO, UNFPA, UNGP, UNHCR, UNICEF, UNICRI, UNIDIR, UNIDO, UNISDR, UNITAR, UNODA, UNODC, UNOOSA, UNOPS, UNU, WBG, WFP, WHO, and WIPO.

The 2019 summit is kindly supported by Platinum Sponsor and Strategic Partner, Microsoft; Gold Sponsors, ACM, the Kay Family Foundation, Mind.ai and the Autonomous Driver Alliance; Silver Sponsors, Deloitte and the Zero Abuse Project; and Bronze Sponsor, Live Tiles.

More information available at aiforgood.itu.int
Join the conversation on social media using the hashtag #AIforGood

As promised, here are the media accreditation details from the ITU Media Registration and Accreditation webpage,

To gain media access, ITU must confirm your status as a bona fide member of the media. Therefore, please read ITU’s Media Accreditation Guidelines below so you are aware of the information you will be required to submit for ITU to confirm such status.
Media accreditation is not granted to 1) non-editorial staff working for a publishing house (e.g. management, marketing, advertising executives, etc.); 2) researchers, academics, authors or editors of directories; 3) employees of information outlets of public, non-governmental or private entities that are not first and foremost media organizations; 4) members of professional broadcasting or media associations; 5) press or communication professionals accompanying member state delegations; and 6) citizen journalists under no apparent editorial board oversight. If you have questions about your eligibility, please email us at pressreg@itu.int.

Applications for accreditation are considered on a case-by-case basis and ITU reserves the right to request additional proof or documentation other than what is listed below. Media accreditation decisions rest with ITU and all decisions are final.

Accreditation eligibility & credentials
1. Journalists* should provide an official letter of assignment from the Editor-in-Chief (or the News Editor for radio/TV). One letter per crew/editorial team will suffice. Editors-in-Chief and Bureau Chiefs should submit a letter from their Director. Please email this to pressreg@itu.int along with the required supporting credentials, based on the type of media organization you work for:

Print and online publications should be available to the general public and published at least 6 times a year by an organization whose principal business activity is publishing and which generally carries paid advertising;
o please submit 2 copies or links to recent byline articles published within the last 4 months.

News wire services should provide news coverage to subscribers, including newspapers, periodicals and/or television networks;
o please submit 2 copies or links to recent byline articles or broadcasting material published within the last 4 months.

Broadcast media should provide news and information programmes to the general public. Independent film and video production companies can only be accredited if officially mandated by a broadcast station via a letter of assignment;
o please submit broadcasting material published within the last 4 months.

Freelance journalists and photographers must provide clear documentation that they are on assignment from a specific news organization or publication. Evidence that they regularly supply journalistic content to recognized media may be acceptable in the absence of an assignment letter and at the discretion of the ITU Corporate Communication Division.
o if possible, please submit a valid assignment letter from the news organization or publication.

2. Bloggers and community media may be granted accreditation if the content produced is deemed relevant to the industry, contains news commentary, is regularly updated and/or made publicly available. Corporate bloggers may register as normal participants (not media). Please see Guidelines for Bloggers and Community Media Accreditation below for more details:

Special guidelines for bloggers and community media accreditation

ITU is committed to working with independent and ‘new media’ reporters and columnists who reach their audiences via blogs, podcasts, video blogs, community or online radio, limited print formats which generally carry paid advertising and other online media. These are some of the guidelines we use to determine whether to accredit bloggers and community media representatives:

ITU reserves the right to request traffic data from a third party (Sitemeter, Technorati, Feedburner, iTunes or equivalent) when considering your application. While the decision to grant access is not based solely on traffic/subscriber data, we ask that applicants provide sufficient transparency into their operations to help us make a fair and timely decision. If your media outlet is new, you must have an established record of having written extensively on ICT issues and must present copies or links to two recently published videos, podcasts or articles with your byline.

Obtaining media accreditation for ITU events is an opportunity to meet and interact with key industry and political figures. While continued accreditation for ITU events is not directly contingent on producing coverage, owing to space limitations we may take this into consideration when processing future accreditation requests. Following any ITU event for which you are accredited, we therefore kindly request that you forward a link to your post/podcast/video blog to pressreg@itu.int.

Bloggers who are granted access to ITU events are expected to act professionally. Those who do not maintain the standards expected of professional media representatives run the risk of having their accreditation withdrawn.

UN-accredited media

Media already accredited and badged by the United Nations are automatically accredited and registered by ITU. In this case, you only need to send a copy of your UN badge to pressreg@itu.int to make sure you receive your event badge. Anyone joining an ITU event MUST have an event badge in order to access the premises. Please make sure you let us know in advance that you are planning to attend so your event badge is ready for printing and pick-up.

You can register and get accreditation here (scroll past the guidelines). Good luck!

Artificial intelligence (AI) brings together the International Telecommunication Union (ITU) and the World Health Organization (WHO), and AI outperforms animal testing

Following on from my May 11, 2018 posting about the International Telecommunication Union (ITU) and the 2018 AI for Good Global Summit in mid-May, there’s an announcement. My other bit of AI news concerns animal testing.

Leveraging the power of AI for health

A July 24, 2018 ITU press release (a shorter version was received via email) announces a joint initiative focused on improving health,

Two United Nations specialized agencies are joining forces to expand the use of artificial intelligence (AI) in the health sector to a global scale, and to leverage the power of AI to advance health for all worldwide. The International Telecommunication Union (ITU) and the World Health Organization (WHO) will work together through the newly established ITU Focus Group on AI for Health to develop an international “AI for health” standards framework and to identify use cases of AI in the health sector that can be scaled-up for global impact. The group is open to all interested parties.

“AI could help patients to assess their symptoms, enable medical professionals in underserved areas to focus on critical cases, and save great numbers of lives in emergencies by delivering medical diagnoses to hospitals before patients arrive to be treated,” said ITU Secretary-General Houlin Zhao. “ITU and WHO plan to ensure that such capabilities are available worldwide for the benefit of everyone, everywhere.”

The demand for such a platform was first identified by participants of the second AI for Good Global Summit held in Geneva, 15-17 May 2018. During the summit, AI and the health sector were recognized as a very promising combination, and it was announced that AI-powered technologies such as skin disease recognition and diagnostic applications based on symptom questions could be deployed on six billion smartphones by 2021.

The ITU Focus Group on AI for Health is coordinated through ITU’s Telecommunications Standardization Sector – which works with ITU’s 193 Member States and more than 800 industry and academic members to establish global standards for emerging ICT innovations. It will lead an intensive two-year analysis of international standardization opportunities towards delivery of a benchmarking framework of international standards and recommendations by ITU and WHO for the use of AI in the health sector.

“I believe the subject of AI for health is both important and useful for advancing health for all,” said WHO Director-General Tedros Adhanom Ghebreyesus.

The ITU Focus Group on AI for Health will also engage researchers, engineers, practitioners, entrepreneurs and policy makers to develop guidance documents for national administrations, to steer the creation of policies that ensure the safe, appropriate use of AI in the health sector.

“1.3 billion people have a mobile phone and we can use this technology to provide AI-powered health data analytics to people with limited or no access to medical care. AI can enhance health by improving medical diagnostics and associated health intervention decisions on a global scale,” said Thomas Wiegand, ITU Focus Group on AI for Health Chairman, and Executive Director of the Fraunhofer Heinrich Hertz Institute, as well as professor at TU Berlin.

He added, “The health sector is in many countries among the largest economic sectors or one of the fastest-growing, signalling a particularly timely need for international standardization of the convergence of AI and health.”

Data analytics are certain to form a large part of the ITU focus group’s work. AI systems are proving increasingly adept at interpreting laboratory results and medical imagery and extracting diagnostically relevant information from text or complex sensor streams.

As part of this, the ITU Focus Group for AI for Health will also produce an assessment framework to standardize the evaluation and validation of AI algorithms — including the identification of structured and normalized data to train AI algorithms. It will develop open benchmarks with the aim of these becoming international standards.

The ITU Focus Group for AI for Health will report to the ITU standardization expert group for multimedia, Study Group 16.

I got curious about Study Group 16 (from the Study Group 16 at a glance webpage),

Study Group 16 leads ITU’s standardization work on multimedia coding, systems and applications, including the coordination of related studies across the various ITU-T SGs. It is also the lead study group on ubiquitous and Internet of Things (IoT) applications; telecommunication/ICT accessibility for persons with disabilities; intelligent transport system (ITS) communications; e-health; and Internet Protocol television (IPTV).

Multimedia is at the core of the most recent advances in information and communication technologies (ICTs) – especially when we consider that most innovation today is agnostic of the transport and network layers, focusing rather on the higher OSI model layers.

SG16 is active in all aspects of multimedia standardization, including terminals, architecture, protocols, security, mobility, interworking and quality of service (QoS). It focuses its studies on telepresence and conferencing systems; IPTV; digital signage; speech, audio and visual coding; network signal processing; PSTN modems and interfaces; facsimile terminals; and ICT accessibility.

I wonder which group deals with artificial intelligence and, possibly, robots.

Chemical testing without animals

Thomas Hartung, professor of environmental health and engineering at Johns Hopkins University (US), describes the situation where chemical testing is concerned in his July 25, 2018 essay, written for The Conversation and republished on phys.org,

Most consumers would be dismayed with how little we know about the majority of chemicals. Only 3 percent of industrial chemicals – mostly drugs and pesticides – are comprehensively tested. Most of the 80,000 to 140,000 chemicals in consumer products have not been tested at all or just examined superficially to see what harm they may do locally, at the site of contact and at extremely high doses.

I am a physician and former head of the European Center for the Validation of Alternative Methods of the European Commission (2002-2008), and I am dedicated to finding faster, cheaper and more accurate methods of testing the safety of chemicals. To that end, I now lead a new program at Johns Hopkins University to revamp the safety sciences.

As part of this effort, we have now developed a computer method of testing chemicals that could save more than US$1 billion annually and more than 2 million animals. Especially in times where the government is rolling back regulations on the chemical industry, new methods to identify dangerous substances are critical for human and environmental health.

Having written on the topic of alternatives to animal testing on a number of occasions (my December 26, 2014 posting provides an overview of sorts), I was particularly interested to see this in Hartung’s July 25, 2018 essay on The Conversation (Note: Links have been removed),

Following the vision of Toxicology for the 21st Century, a movement led by U.S. agencies to revamp safety testing, important work was carried out by my Ph.D. student Tom Luechtefeld at the Johns Hopkins Center for Alternatives to Animal Testing. Teaming up with Underwriters Laboratories, we have now leveraged an expanded database and machine learning to predict toxic properties. As we report in the journal Toxicological Sciences, we developed a novel algorithm and database for analyzing chemicals and determining their toxicity – what we call read-across structure activity relationship, RASAR.

[Image caption: This graphic reveals a small part of the chemical universe. Each dot represents a different chemical. Chemicals that are close together have similar structures and often similar properties. Credit: Thomas Hartung, CC BY-SA]

To do this, we first created an enormous database with 10 million chemical structures by adding more public databases filled with chemical data, which, if you crunch the numbers, represent 50 trillion pairs of chemicals. A supercomputer then created a map of the chemical universe, in which chemicals are positioned close together if they share many structures in common and far where they don’t. Most of the time, any molecule close to a toxic molecule is also dangerous. Even more likely if many toxic substances are close, harmless substances are far. Any substance can now be analyzed by placing it into this map.

If this sounds simple, it’s not. It requires half a billion mathematical calculations per chemical to see where it fits. The chemical neighborhood focuses on 74 characteristics which are used to predict the properties of a substance. Using the properties of the neighboring chemicals, we can predict whether an untested chemical is hazardous. For example, for predicting whether a chemical will cause eye irritation, our computer program not only uses information from similar chemicals, which were tested on rabbit eyes, but also information for skin irritation. This is because what typically irritates the skin also harms the eye.

How well does the computer identify toxic chemicals?

This method will be used for new untested substances. However, if you do this for chemicals for which you actually have data, and compare prediction with reality, you can test how well this prediction works. We did this for 48,000 chemicals that were well characterized for at least one aspect of toxicity, and we found the toxic substances in 89 percent of cases.

This is clearly more accurate than the corresponding animal tests, which only yield the correct answer 70 percent of the time. The RASAR shall now be formally validated by an interagency committee of 16 U.S. agencies, including the EPA [Environmental Protection Agency] and FDA [Food and Drug Administration], that will challenge our computer program with chemicals for which the outcome is unknown. This is a prerequisite for acceptance and use in many countries and industries.

The potential is enormous: The RASAR approach is in essence based on chemical data that was registered for the 2010 and 2013 REACH [Registration, Evaluation, Authorizations and Restriction of Chemicals] deadlines [in Europe]. If our estimates are correct and chemical producers would have not registered chemicals after 2013, and instead used our RASAR program, we would have saved 2.8 million animals and $490 million in testing costs – and received more reliable data. We have to admit that this is a very theoretical calculation, but it shows how valuable this approach could be for other regulatory programs and safety assessments.

In the future, a chemist could check RASAR before even synthesizing their next chemical to check whether the new structure will have problems. Or a product developer can pick alternatives to toxic substances to use in their products. This is a powerful technology, which is only starting to show all its potential.
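The read-across idea is simple enough to sketch in code, even though RASAR itself operates at a vastly larger scale (half a billion calculations per chemical). Below is a minimal, hypothetical Python illustration; the chemicals, structural features, and toxicity labels are invented for the example, and real systems use binary fingerprints over thousands of substructures rather than little feature sets.

```python
# Toy read-across sketch: predict a property from structurally similar
# neighbours. NOT the RASAR implementation; features and labels are invented.

def tanimoto(a: set, b: set) -> float:
    """Jaccard/Tanimoto similarity between two feature sets."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

# Hypothetical training data: name -> (structural features, is_toxic)
known = {
    "chem_A": ({"benzene_ring", "nitro_group", "chlorine"}, True),
    "chem_B": ({"benzene_ring", "hydroxyl"}, False),
    "chem_C": ({"nitro_group", "chlorine", "aldehyde"}, True),
    "chem_D": ({"hydroxyl", "carboxyl"}, False),
}

def predict_toxicity(features: set, k: int = 3) -> float:
    """Similarity-weighted vote of the k most similar known chemicals;
    returns an estimated probability that the query is toxic."""
    neighbours = sorted(
        ((tanimoto(features, f), toxic) for f, toxic in known.values()),
        reverse=True,
    )[:k]
    total = sum(sim for sim, _ in neighbours)
    if total == 0:
        return 0.5  # no structural overlap with anything known
    return sum(sim for sim, toxic in neighbours if toxic) / total

query = {"benzene_ring", "nitro_group"}  # an untested chemical
print(f"P(toxic) ~ {predict_toxicity(query):.2f}")
```

The design choice worth noticing is the one Hartung describes: the prediction borrows evidence from neighbours in proportion to structural similarity, so an untested chemical surrounded by toxic neighbours scores high without any new testing.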

It’s been my experience that claims of having led a movement (Toxicology for the 21st Century) are often contested, with many others competing for the title of ‘leader’ or ‘first’. That said, this RASAR approach seems very exciting, especially in light of the skepticism about limiting and/or making animal testing unnecessary noted in my December 26, 2014 posting; that skepticism came from someone I thought knew better.

Here’s a link to and a citation for the paper mentioned in Hartung’s essay,

Machine learning of toxicological big data enables read-across structure activity relationships (RASAR) outperforming animal test reproducibility by Thomas Luechtefeld, Dan Marsh, Craig Rowlands, Thomas Hartung. Toxicological Sciences, kfy152, https://doi.org/10.1093/toxsci/kfy152 Published: 11 July 2018

This paper is open access.

AI (artificial intelligence) text generator, too dangerous to release?

Could this latest version of OpenAI‘s text generator be so good that it would fool you? And following on that thought, could the concomitant reluctance to release the research be real or is it a publicity stunt? Here’s a sample of the text from the GPT2 AI model from a February 15, 2019 article by Mark Frauenfelder for Boing Boing,

Recycling is good for the world.
NO! YOU COULD NOT BE MORE WRONG!!
MODEL COMPLETION (MACHINE-WRITTEN, 25 TRIES)
Recycling is NOT good for the world. It is bad for the environment, it is bad for our health, and it is bad for our economy. I’m not kidding. Recycling is not good for the environment. It is destructive to the earth and it is a major contributor to global warming. Recycling is not good for our health. It contributes to obesity and diseases like heart disease and cancer. Recycling is bad for our economy. It increases the cost of a product, and in turn, the price of everything that is made with that product. Recycling is not good for our nation. We pay a tremendous price for the privilege of having the world’s most advanced and efficient recycling system. Recycling is a huge, colossal waste of time, energy, money, and resources. And THAT is why we need to get back to basics and get back to basics in our recycling efforts. One of the best ways to start is to look at the process of creating a paper product. When you make a paper product, it is basically a long chain of materials. Everything from the raw materials (wood, cardboard, paper, etc.), to the reagents (dyes, solvents, etc.) to the printing equipment (chemicals, glue, paper, ink, etc.), to the packaging, to the packaging materials (mercury, chemicals, etc.) to the processing equipment (heating, cooling, etc.), to the packaging materials, to the packaging materials that are shipped overseas and to the packaging materials that are used in the United States. Each step along the way creates tons of waste that we constantly have to clean up. The process of making a paper product is a very wasteful one. But the end result is something that all of us need to consume. And if we want to keep the recycling process running efficiently, then we really need to think about each and every step that goes into making a paper product.

The first few sentences don’t work for me, but once the discussion turns to making paper products, it becomes more convincing. As to whether the company’s reluctance to release the research is genuine or a publicity stunt, I don’t know. However, there was a fair degree of interest in GPT2 after the decision.
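For anyone who wants to poke at the model themselves: OpenAI did release a smaller version of GPT2 alongside the announcement, and it can now be prompted with a few lines of Python. The sketch below assumes the Hugging Face transformers package (and its “gpt2” name for the small model), which postdates the announcement, so treat it as a retrospective illustration rather than OpenAI’s own tooling.

```python
# Sampling a continuation from the small GPT-2 model via the Hugging Face
# `transformers` package (pip install transformers torch). Weights download
# on first use; output will differ from OpenAI's cherry-picked samples.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")  # the small released model
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "Recycling is good for the world. NO! YOU COULD NOT BE MORE WRONG!!"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

output = model.generate(
    input_ids,
    max_length=100,      # prompt + continuation, in tokens
    do_sample=True,      # sample rather than always taking the top word
    top_k=40,            # restrict sampling to the 40 likeliest next tokens
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Note the “25 TRIES” caption on the Boing Boing sample: even OpenAI generated many continuations and showed a good one, which is worth keeping in mind when judging the output.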

From a February 14, 2019 article by Alex Hern for the Guardian,

OpenAI, a nonprofit research company backed by Elon Musk, Reid Hoffman, Sam Altman, and others, says its new AI model, called GPT2, is so good and the risk of malicious use so high that it is breaking from its normal practice of releasing the full research to the public in order to allow more time to discuss the ramifications of the technological breakthrough.

At its core, GPT2 is a text generator. The AI system is fed text, anything from a few words to a whole page, and asked to write the next few sentences based on its predictions of what should come next. The system is pushing the boundaries of what was thought possible, both in terms of the quality of the output, and the wide variety of potential uses.

When used to simply generate new text, GPT2 is capable of writing plausible passages that match what it is given in both style and subject. It rarely shows any of the quirks that mark out previous AI systems, such as forgetting what it is writing about midway through a paragraph, or mangling the syntax of long sentences.

Feed it the opening line of George Orwell’s Nineteen Eighty-Four – “It was a bright cold day in April, and the clocks were striking thirteen” – and the system recognises the vaguely futuristic tone and the novelistic style, and continues with: …

Sean Gallagher’s February 15, 2019 posting on the ars Technica blog provides some insight that’s partially written in a style sometimes associated with gossip (Note: Links have been removed),

OpenAI is funded by contributions from a group of technology executives and investors connected to what some have referred to as the PayPal “mafia”—Elon Musk, Peter Thiel, Jessica Livingston, and Sam Altman of YCombinator, former PayPal COO and LinkedIn co-founder Reid Hoffman, and former Stripe Chief Technology Officer Greg Brockman. [emphasis mine] Brockman now serves as OpenAI’s CTO. Musk has repeatedly warned of the potential existential dangers posed by AI, and OpenAI is focused on trying to shape the future of artificial intelligence technology—ideally moving it away from potentially harmful applications.

Given present-day concerns about how fake content has been used to both generate money for “fake news” publishers and potentially spread misinformation and undermine public debate, GPT-2’s output certainly qualifies as concerning. Unlike other text generation “bot” models, such as those based on Markov chain algorithms, the GPT-2 “bot” did not lose track of what it was writing about as it generated output, keeping everything in context.

For example: given a two-sentence entry, GPT-2 generated a fake science story on the discovery of unicorns in the Andes, a story about the economic impact of Brexit, a report about a theft of nuclear materials near Cincinnati, a story about Miley Cyrus being caught shoplifting, and a student’s report on the causes of the US Civil War.

Each matched the style of the genre from the writing prompt, including manufacturing quotes from sources. In other samples, GPT-2 generated a rant about why recycling is bad, a speech written by John F. Kennedy’s brain transplanted into a robot (complete with footnotes about the feat itself), and a rewrite of a scene from The Lord of the Rings.

While the model required multiple tries to get a good sample, GPT-2 generated “good” results based on “how familiar the model is with the context,” the researchers wrote. “When prompted with topics that are highly represented in the data (Brexit, Miley Cyrus, Lord of the Rings, and so on), it seems to be capable of generating reasonable samples about 50 percent of the time. The opposite is also true: on highly technical or esoteric types of content, the model can perform poorly.”

There were some weak spots encountered in GPT-2’s word modeling—for example, the researchers noted it sometimes “writes about fires happening under water.” But the model could be fine-tuned to specific tasks and perform much better. “We can fine-tune GPT-2 on the Amazon Reviews dataset and use this to let us write reviews conditioned on things like star rating and category,” the authors explained.
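Gallagher’s contrast with Markov chain ‘bots’ is worth making concrete. A Markov text generator chooses each next word by looking only at the last word or two, which is precisely why such bots lose track of what they are writing about. Here is a minimal sketch in Python (the training corpus is a stand-in; any text would do):

```python
import random
from collections import defaultdict

def build_chain(text: str, order: int = 2) -> dict:
    """Map each run of `order` consecutive words to the words that
    followed that run in the training text."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def generate(chain: dict, order: int = 2, length: int = 30) -> str:
    """Walk the chain: each next word depends only on the previous
    `order` words, so there is no long-range context to keep."""
    out = list(random.choice(list(chain)))
    for _ in range(length):
        followers = chain.get(tuple(out[-order:]))
        if not followers:  # dead end: this run never continued in training
            break
        out.append(random.choice(followers))
    return " ".join(out)

# Stand-in corpus; a real bot would ingest far more text.
corpus = (
    "recycling is good for the world recycling is bad for the economy "
    "the economy is good for the world the world is bad for recycling"
)
print(generate(build_chain(corpus)))
```

GPT-2, by contrast, conditions each prediction on a long window of preceding text, which is what lets it keep everything in context, as Gallagher notes.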

James Vincent’s February 14, 2019 article for The Verge offers a deeper dive into the world of AI text agents and what makes GPT2 so special (Note: Links have been removed),

For decades, machines have struggled with the subtleties of human language, and even the recent boom in deep learning powered by big data and improved processors has failed to crack this cognitive challenge. Algorithmic moderators still overlook abusive comments, and the world’s most talkative chatbots can barely keep a conversation alive. But new methods for analyzing text, developed by heavyweights like Google and OpenAI as well as independent researchers, are unlocking previously unheard-of talents.

OpenAI’s new algorithm, named GPT-2, is one of the most exciting examples yet. It excels at a task known as language modeling, which tests a program’s ability to predict the next word in a given sentence. Give it a fake headline, and it’ll write the rest of the article, complete with fake quotations and statistics. Feed it the first line of a short story, and it’ll tell you what happens to your character next. It can even write fan fiction, given the right prompt.

The writing it produces is usually easily identifiable as non-human. Although its grammar and spelling are generally correct, it tends to stray off topic, and the text it produces lacks overall coherence. But what’s really impressive about GPT-2 is not its fluency but its flexibility.

This algorithm was trained on the task of language modeling by ingesting huge numbers of articles, blogs, and websites. By using just this data — and with no retooling from OpenAI’s engineers — it achieved state-of-the-art scores on a number of unseen language tests, an achievement known as “zero-shot learning.” It can also perform other writing-related tasks, like translating text from one language to another, summarizing long articles, and answering trivia questions.

GPT-2 does each of these jobs less competently than a specialized system, but its flexibility is a significant achievement. Nearly all machine learning systems used today are “narrow AI,” meaning they’re able to tackle only specific tasks. DeepMind’s original AlphaGo program, for example, was able to beat the world’s champion Go player, but it couldn’t best a child at Monopoly. The prowess of GPT-2, say OpenAI, suggests there could be methods available to researchers right now that can mimic more generalized brainpower.

“What the new OpenAI work has shown is that: yes, you absolutely can build something that really seems to ‘understand’ a lot about the world, just by having it read,” says Jeremy Howard, a researcher who was not involved with OpenAI’s work but has developed similar language modeling programs …

To put this work into context, it’s important to understand how challenging the task of language modeling really is. If I asked you to predict the next word in a given sentence — say, “My trip to the beach was cut short by bad __” — your answer would draw upon a range of knowledge. You’d consider the grammar of the sentence and its tone but also your general understanding of the world. What sorts of bad things are likely to ruin a day at the beach? Would it be bad fruit, bad dogs, or bad weather? (Probably the latter.)

Despite this, programs that perform text prediction are quite common. You’ve probably encountered one today, in fact, whether that’s Google’s AutoComplete feature or the Predictive Text function in iOS. But these systems are drawing on relatively simple types of language modeling, while algorithms like GPT-2 encode the same information in more complex ways.

The difference between these two approaches is technically arcane, but it can be summed up in a single word: depth. Older methods record information about words in only their most obvious contexts, while newer methods dig deeper into their multiple meanings.

So while a system like Predictive Text only knows that the word “sunny” is used to describe the weather, newer algorithms know when “sunny” is referring to someone’s character or mood, when “Sunny” is a person, or when “Sunny” means the 1976 smash hit by Boney M.

The success of these newer, deeper language models has caused a stir in the AI community. Researcher Sebastian Ruder compares their success to advances made in computer vision in the early 2010s. At this time, deep learning helped algorithms make huge strides in their ability to identify and categorize visual data, kickstarting the current AI boom. Without these advances, a whole range of technologies — from self-driving cars to facial recognition and AI-enhanced photography — would be impossible today. This latest leap in language understanding could have similar, transformational effects.
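Vincent’s ‘sunny’ example can be made concrete. In a shallow model, ‘sunny’ gets one representation no matter the sentence; in a deep language model, the vector for ‘sunny’ depends on the words around it. A rough sketch, assuming the Hugging Face transformers package and the small GPT-2 model (and assuming the word of interest survives as a single token, which holds for common words):

```python
# Contextual embeddings: the same word gets different vectors in different
# sentences. Sketch only; assumes `transformers` and `torch` are installed.
import torch
from transformers import GPT2Model, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2")

def vector_for(sentence: str, word: str) -> torch.Tensor:
    """Hidden-state vector at the position of `word`. GPT-2 reads left to
    right, so the vector reflects the words preceding `word`."""
    ids = tok(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**ids).last_hidden_state[0]
    target = tok.encode(" " + word)[0]            # the word's token id
    pos = (ids["input_ids"][0] == target).nonzero()[0].item()
    return hidden[pos]

v_weather = vector_for("It was a sunny day at the beach", "sunny")
v_mood = vector_for("She greeted us with a sunny smile", "sunny")
cos = torch.nn.functional.cosine_similarity(v_weather, v_mood, dim=0)
print(f"cosine similarity between the two 'sunny' vectors: {cos.item():.2f}")
```

A static word-vector lookup would return 1.0 by construction; a contextual model returns something lower, which is the ‘depth’ Vincent is pointing at.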

Hern’s February 14, 2019 article for the Guardian acts as a good overview, while Gallagher’s February 15, 2019 Ars Technica posting and Vincent’s February 14, 2019 article for The Verge take you progressively deeper into the world of AI text agents.

For anyone who wants to dig down even further, there’s a February 14, 2019 posting on OpenAI’s blog.

Crowdsourcing brain research at Princeton University to discover 6 new neuron types

Sprightly music!

There were already a quarter-million registered players as of May 17, 2018, but I’m sure there’s room for more should you be inspired. A May 17, 2018 Princeton University news release (also on EurekAlert) reveals more about the game and about the neurons,

With the help of a quarter-million video game players, Princeton researchers have created and shared detailed maps of more than 1,000 neurons — and they’re just getting started.

“Working with Eyewirers around the world, we’ve made a digital museum that shows off the intricate beauty of the retina’s neural circuits,” said Sebastian Seung, the Evnin Professor in Neuroscience and a professor of computer science and the Princeton Neuroscience Institute (PNI). The related paper is publishing May 17 [2018] in the journal Cell.

Seung is unveiling the Eyewire Museum, an interactive archive of neurons available to the general public and neuroscientists around the world, including the hundreds of researchers involved in the federal Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative.

“This interactive viewer is a huge asset for these larger collaborations, especially among people who are not physically in the same lab,” said Amy Robinson Sterling, a crowdsourcing specialist with PNI and the executive director of Eyewire, the online gaming platform for the citizen scientists who have created this data set.

“This museum is something like a brain atlas,” said Alexander Bae, a graduate student in electrical engineering and one of four co-first authors on the paper. “Previous brain atlases didn’t have a function where you could visualize by individual cell, or a subset of cells, and interact with them. Another novelty: Not only do we have the morphology of each cell, but we also have the functional data, too.”

The neural maps were developed by Eyewirers, members of an online community of video game players who have devoted hundreds of thousands of hours to painstakingly piecing together these neural cells, using data from a mouse retina gathered in 2009.

Eyewire pairs machine learning with gamers who trace the twisting and branching paths of each neuron. Humans are better at visually identifying the patterns of neurons, so every player’s moves are recorded and checked against each other by advanced players and Eyewire staffers, as well as by software that is improving its own pattern recognition skills.
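The majority-vote idea is easy to picture in code. To be clear, the sketch below is hypothetical and mine, not Eyewire's actual pipeline: several players each submit the set of voxels they believe belong to the neuron in a cube, and only voxels marked by enough players are accepted,

```python
from collections import Counter

def consensus(tracings, min_votes):
    """Keep only the voxels marked by at least `min_votes` players."""
    votes = Counter()
    for voxels in tracings:
        votes.update(voxels)
    return {voxel for voxel, n in votes.items() if n >= min_votes}

# Hypothetical voxel IDs traced by three players in one cube.
player_tracings = [
    {101, 102, 103, 200},   # player 1
    {101, 102, 103},        # player 2
    {101, 103, 999},        # player 3 (stray voxel 999)
]
print(sorted(consensus(player_tracings, min_votes=2)))  # [101, 102, 103]
```

In the real system, advanced players, staff, and the pattern-recognition software described above all weigh in before a cell is accepted as complete.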

Since Eyewire’s launch in 2012, more than 265,000 people have signed onto the game, and they’ve collectively colored in more than 10 million 3-D “cubes,” resulting in the mapping of more than 3,000 neural cells, of which about a thousand are displayed in the museum.

Each cube is a tiny subset of a single cell, about 4.5 microns across, so a 10-by-10 block of cubes would be the width of a human hair. Every cell is reviewed by between 5 and 25 gamers before it is accepted into the system as complete.

“Back in the early years it took weeks to finish a single cell,” said Sterling. “Now players complete multiple neurons per day.” The Eyewire user experience stays focused on the larger mission — “For science!” is a common refrain — but it also replicates a typical gaming environment, with achievement badges, a chat feature to connect with other players and technical support, and the ability to unlock privileges with increasing skill. “Our top players are online all the time — easily 30 hours a week,” Sterling said.

Dedicated Eyewirers have also contributed in other ways, including donating the swag that gamers win during competitions and writing program extensions “to make game play more efficient and more fun,” said Sterling, including profile histories, maps of player activity, a top 100 leaderboard and ever-increasing levels of customizability.

“The community has really been the driving force behind why Eyewire has been successful,” Sterling said. “You come in, and you’re not alone. Right now, there are 43 people online. Some of them will be admins from Boston or Princeton, but most are just playing — now it’s 46.”

For science!

With 100 billion neurons linked together via trillions of connections, the brain is immeasurably complex, and neuroscientists are still assembling its “parts list,” said Nicholas Turner, a graduate student in computer science and another of the co-first authors. “If you know what parts make up the machine you’re trying to break apart, you’re set to figure out how it all works,” he said.

The researchers have started by tackling Eyewire-mapped ganglion cells from the retina of a mouse. “The retina doesn’t just sense light,” Seung said. “Neural circuits in the retina perform the first steps of visual perception.”

The retina grows from the same embryonic tissue as the brain, and while much simpler than the brain, it is still surprisingly complex, Turner said. “Hammering out these details is a really valuable effort,” he said, “showing the depth and complexity that exists in circuits that we naively believe are simple.”

The researchers’ fundamental question is identifying exactly how the retina works, said Bae. “In our case, we focus on the structural morphology of the retinal ganglion cells.”

“Why the ganglion cells of the eye?” asked Shang Mu, an associate research scholar in PNI and fellow first author. “Because they’re the connection between the retina and the brain. They’re the only cell class that go back into the brain.” Different types of ganglion cells are known to compute different types of visual features, which is one reason the museum has linked shape to functional data.

Using Eyewire-produced maps of 396 ganglion cells, the researchers in Seung’s lab successfully classified these cells more thoroughly than has ever been done before.

“The number of different cell types was a surprise,” said Mu. “Just a few years ago, people thought there were only 15 to 20 ganglion cell types, but we found more than 35 — we estimate between 35 and 50 types.”

Of those, six appear to be novel, in that the researchers could not find any matching descriptions in a literature search.

A brief scroll through the digital museum reveals just how remarkably flat the neurons are — nearly all of the branching takes place along a two-dimensional plane. Seung’s team discovered that different cells grow along different planes, with some reaching high above the nucleus before branching out, while others spread out close to the nucleus. Their resulting diagrams resemble a rainforest, with ground cover, an understory, a canopy and an emergent layer overtopping the rest.

All of these are subdivisions of the inner plexiform layer, one of the five previously recognized layers of the retina. The researchers also identified a “density conservation principle” that they used to distinguish types of neurons.
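To give a loose sense of how "where a cell branches" can become a classification feature, here's a sketch of my own. The depths and cells are invented, and the paper's actual density-conservation analysis is far more involved; this only shows how a depth histogram separates a shallow-stratifying cell from a deep one,

```python
import numpy as np

def depth_profile(node_depths, bins=10):
    """Histogram of a cell's arbor nodes by normalized IPL depth (0 to 1)."""
    hist, _ = np.histogram(node_depths, bins=bins, range=(0.0, 1.0))
    return hist / hist.sum()

# Two made-up cells: one stratifying near the top of the inner
# plexiform layer, one near the bottom.
rng = np.random.default_rng(0)
cell_a = rng.normal(loc=0.2, scale=0.05, size=500).clip(0.0, 1.0)
cell_b = rng.normal(loc=0.8, scale=0.05, size=500).clip(0.0, 1.0)

print(depth_profile(cell_a).round(2))  # mass concentrated in shallow bins
print(depth_profile(cell_b).round(2))  # mass concentrated in deep bins
```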

One of the biggest surprises of the research project has been the extraordinary richness of the original sample, said Seung. “There’s a little sliver of a mouse retina, and almost 10 years later, we’re still learning things from it.”

Of course, it’s a mouse’s brain that you’ll be examining and, while there are differences between a mouse brain and a human brain, mouse brains still provide valuable data, as they did in the case of some groundbreaking research published in October 2017. James Hamblin wrote about it in an Oct. 7, 2017 article for The Atlantic (Note: Links have been removed),


Scientists Somehow Just Discovered a New System of Vessels in Our Brains

It is unclear what they do—but they likely play a central role in aging and disease.

Caption: A transparent model of the brain with a network of vessels filled in. Credit: Daniel Reich / National Institute of Neurological Disorders and Stroke

You are now among the first people to see the brain’s lymphatic system. The vessels in the photo above transport fluid that is likely crucial to metabolic and inflammatory processes. Until now, no one knew for sure that they existed.

Doctors practicing today have been taught that there are no lymphatic vessels inside the skull. Those deep-purple vessels were seen for the first time in images published this week by researchers at the U.S. National Institute of Neurological Disorders and Stroke.

In the rest of the body, the lymphatic system collects and drains the fluid that bathes our cells, in the process exporting their waste. It also serves as a conduit for immune cells, which go out into the body looking for adversaries and learning how to distinguish self from other, and then travel back to lymph nodes and organs through lymphatic vessels.

So how was it even conceivable that this process wasn’t happening in our brains?

Reich (Daniel Reich, senior investigator) started his search in 2015, after a major study in Nature reported a similar conduit for lymph in mice. The University of Virginia team wrote at the time, “The discovery of the central-nervous-system lymphatic system may call for a reassessment of basic assumptions in neuroimmunology.” The study was regarded as a potential breakthrough in understanding how neurodegenerative disease is associated with the immune system.

Around the same time, researchers discovered fluid in the brains of mice and humans that would become known as the “glymphatic system.” [emphasis mine] It was described by a team at the University of Rochester in 2015 as not just the brain’s “waste-clearance system,” but as potentially helping fuel the brain by transporting glucose, lipids, amino acids, and neurotransmitters. Since “the central nervous system completely lacks conventional lymphatic vessels,” the researchers wrote at the time, it remained unclear how this fluid communicated with the rest of the body.

There are occasional references to the idea of a lymphatic system in the brain in the historical literature. Two centuries ago, the anatomist Paolo Mascagni made full-body models of the lymphatic system that included the brain, though this was dismissed as an error. [emphases mine] A historical account in The Lancet in 2003 read: “Mascagni was probably so impressed with the lymphatic system that he saw lymph vessels even where they did not exist—in the brain.”

I couldn’t resist the reference to someone whose work had been dismissed summarily being proved right, eventually, and with the help of mouse brains. Do read Hamblin’s article in its entirety if you have time as these excerpts don’t do it justice.

Getting back to Princeton’s research, here’s their research paper,

“Digital museum of retinal ganglion cells with dense anatomy and physiology,” by Alexander Bae, Shang Mu, Jinseop Kim, Nicholas Turner, Ignacio Tartavull, Nico Kemnitz, Chris Jordan, Alex Norton, William Silversmith, Rachel Prentki, Marissa Sorek, Celia David, Devon Jones, Doug Bland, Amy Sterling, Jungman Park, Kevin Briggman, Sebastian Seung and the Eyewirers, was published May 17, 2018 in the journal Cell with DOI 10.1016/j.cell.2018.04.040.

The research was supported by the Gatsby Charitable Foundation, National Institutes of Health-National Institute of Neurological Disorders and Stroke (U01NS090562 and 5R01NS076467), Defense Advanced Research Projects Agency (HR0011-14-2-0004), Army Research Office (W911NF-12-1-0594), Intelligence Advanced Research Projects Activity (D16PC00005), KT Corporation, Amazon Web Services Research Grants, Korea Brain Research Institute (2231-415) and Korea National Research Foundation Brain Research Program (2017M3C7A1048086).

This paper is behind a paywall. For the players amongst us, here’s the Eyewire website. Go forth, play, and, maybe, discover new neurons!

The sound of frogs (and other amphibians) and climate change

At least once a year I highlight some work about frogs. It’s usually about a new species but, this time, it’s all about frog sounds (as well as sounds from other amphibians).

Caption: The calls of the midwife toad and other amphibians have served to test the sound classifier. Credit: Jaime Bosch (MNCN-CSIC)

In any event, here’s more from an April 30, 2018 Spanish Foundation for Science and Technology (FECYT) press release (also on EurekAlert but with a May 17, 2018 publication date),

The sounds of amphibians are altered by the increase in ambient temperature, a phenomenon that, in addition to interfering with reproductive behaviour, serves as an indicator of global warming. Researchers at the University of Seville have resorted to artificial intelligence to create an automatic classifier of the thousands of frog and toad sounds that can be recorded in a natural environment.

One of the consequences of climate change is its impact on the physiological functions of animals, such as frogs and toads with their calls. Their mating call, which plays a crucial role in the sexual selection and reproduction of these amphibians, is affected by the increase in ambient temperature.

When the temperature exceeds a certain threshold, the physiological processes associated with sound production are restricted, and some calls are inhibited entirely. In fact, the beginning, duration and intensity of calls from the male to the female are changed, which influences reproductive activity.

Taking into account this phenomenon, the analysis and classification of the sounds produced by certain species of amphibians and other animals have turned out to be a powerful indicator of temperature fluctuations and, therefore, of the existence and evolution of global warming.

To capture the sounds of frogs, networks of audio sensors are placed and connected wirelessly in areas that can reach several hundred square kilometres. The problem is that a huge amount of bio-acoustic information is collected in environments as noisy as a jungle, and this makes it difficult to identify the species and their calls.

To solve this, engineers from the University of Seville have resorted to artificial intelligence. “We’ve segmented the sound into temporal windows, or audio frames, and have classified them by means of decision trees, a machine learning technique that is used in computing,” explains Amalia Luque Sendra, co-author of the work.

To perform the classification, the researchers based the work on MPEG-7 audio descriptors, a standard way of representing audiovisual information. The details are published in the journal Expert Systems with Applications.
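Here's my own hedged sketch of that pipeline: chop a recording into frames, compute a couple of simple spectral features per frame (crude stand-ins for the MPEG-7 descriptors the authors actually use), and train a decision tree. The 'calls' below are synthetic sine waves, not real recordings, and scikit-learn is my assumption, not necessarily the authors' toolkit,

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def frame_features(signal, rate, frame_len=1024):
    """Split a signal into frames and compute two features per frame."""
    feats = []
    for start in range(0, len(signal) - frame_len, frame_len):
        frame = signal[start:start + frame_len]
        spectrum = np.abs(np.fft.rfft(frame))
        freqs = np.fft.rfftfreq(frame_len, d=1.0 / rate)
        centroid = np.sum(freqs * spectrum) / np.sum(spectrum)  # spectral centroid
        energy = np.sum(frame ** 2)                             # frame energy
        feats.append([centroid, energy])
    return np.array(feats)

# Two synthetic "species": fake calls at 1.5 kHz and 3 kHz plus noise.
rate = 22050
t = np.arange(rate) / rate
call_a = np.sin(2 * np.pi * 1500 * t) + 0.1 * np.random.randn(rate)
call_b = np.sin(2 * np.pi * 3000 * t) + 0.1 * np.random.randn(rate)

X = np.vstack([frame_features(call_a, rate), frame_features(call_b, rate)])
y = np.array([0] * (len(X) // 2) + [1] * (len(X) - len(X) // 2))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = DecisionTreeClassifier(max_depth=3).fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

Real field recordings, with jungle-level background noise, make the classification far harder than this toy suggests.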

This technique has been put to the test with real sounds of amphibians recorded in the wild and provided by the National Museum of Natural Sciences. More specifically, 868 records with 369 mating calls sung by the male and 63 release calls issued by the female natterjack toad (Epidalea calamita), along with 419 mating calls and 17 distress calls of the common midwife toad (Alytes obstetricans).

“In this case we obtained a success rate close to 90% when classifying the sounds,” observes Luque Sendra, who recalls that, in addition to the types of calls, the number of individuals of certain amphibian species that are heard in a geographical region over time can also be used as an indicator of climate change.

“A temperature increase affects the calling patterns,” she says, “but since these in most cases have a sexual calling nature, they also affect the number of individuals. With our method, we still can’t directly determine the exact number of specimens in an area, but it is possible to get a first approximation.”

In addition to the image of the midwife toad, the researchers included this image to illustrate their work,

Caption: This is the architecture of a wireless sensor network. Credit: J. Luque et al./Sensors

Here’s a link to and a citation for the paper,

“Non-sequential automatic classification of anuran sounds for the estimation of climate-change indicators,” by Amalia Luque, Javier Romero-Lemos, Alejandro Carrasco, and Julio Barbancho. Expert Systems with Applications, Volume 95, 1 April 2018, Pages 248-260. DOI: https://doi.org/10.1016/j.eswa.2017.11.016 Available online 10 November 2017

This paper is open access.