AI (artificial intelligence) artist got a show at a New York City art gallery

AI artists first hit my radar in August 2018 when Christie’s Auction House advertised an art auction of a ‘painting’ by an algorithm (artificial intelligence). There’s more in my August 31, 2018 posting but, briefly, a French art collective, Obvious, submitted a painting, “Portrait of Edmond de Belamy,” created by an artificial intelligence agent, expected to sell for an estimated $7,000 – $10,000. They weren’t even close. According to Ian Bogost’s March 6, 2019 article for The Atlantic, the painting sold for $432,500 in October 2018.

It has also, Bogost notes in his article, occasioned an art show (Note: Links have been removed),

… part of “Faceless Portraits Transcending Time,” an exhibition of prints recently shown [February 13 – March 5, 2019] at the HG Contemporary gallery in Chelsea, the epicenter of New York’s contemporary-art world. All of them were created by a computer.

The catalog calls the show a “collaboration between an artificial intelligence named AICAN and its creator, Dr. Ahmed Elgammal,” a move meant to spotlight, and anthropomorphize, the machine-learning algorithm that did most of the work. According to HG Contemporary, it’s the first solo gallery exhibit devoted to an AI artist.

If they hadn’t found each other in the New York art scene, the players involved could have met on a Spike Jonze film set: a computer scientist commanding five-figure print sales from software that generates inkjet-printed images; a former hotel-chain financial analyst turned Chelsea techno-gallerist with apparent ties to fine-arts nobility; a venture capitalist with two doctoral degrees in biomedical informatics; and an art consultant who put the whole thing together, A-Team–style, after a chance encounter at a blockchain conference. Together, they hope to reinvent visual art, or at least to cash in on machine-learning hype along the way.

The show in New York City, “Faceless Portraits …,” exhibited work by an artificially intelligent artist-agent (I’m creating a new term to suit my purposes) that’s different from the one Obvious used to create “Portrait of Edmond de Belamy.” As noted earlier, that painting sold for a lot of money (Note: Links have been removed),

Bystanders in and out of the art world were shocked. The print had never been shown in galleries or exhibitions before coming to market at auction, a channel usually reserved for established work. The winning bid was made anonymously by telephone, raising some eyebrows; art auctions can invite price manipulation. It was created by a computer program that generates new images based on patterns in a body of existing work, whose features the AI “learns.” What’s more, the artists who trained and generated the work, the French collective Obvious, hadn’t even written the algorithm or the training set. They just downloaded them, made some tweaks, and sent the results to market.

“We are the people who decided to do this,” the Obvious member Pierre Fautrel said in response to the criticism, “who decided to print it on canvas, sign it as a mathematical formula, put it in a gold frame.” A century after Marcel Duchamp made a urinal into art [emphasis mine] by putting it in a gallery, not much has changed, with or without computers. As Andy Warhol famously said, “Art is what you can get away with.”

A bit of a segue here: there is a controversy as to whether that ‘urinal art’, also known as The Fountain, should be attributed to Duchamp, as noted in my January 23, 2019 posting titled ‘Baroness Elsa von Freytag-Loringhoven, Marcel Duchamp, and the Fountain’.

Getting back to the main action, Bogost goes on to describe the technologies underlying the two different AI artist-agents (Note: Links have been removed),

… Using a computer is hardly enough anymore; today’s machines offer all kinds of ways to generate images that can be output, framed, displayed, and sold—from digital photography to artificial intelligence. Recently, the fashionable choice has become generative adversarial networks, or GANs, the technology that created Portrait of Edmond de Belamy. Like other machine-learning methods, GANs use a sample set—in this case, art, or at least images of it—to deduce patterns, and then they use that knowledge to create new pieces. A typical Renaissance portrait, for example, might be composed as a bust or three-quarter view of a subject. The computer may have no idea what a bust is, but if it sees enough of them, it might learn the pattern and try to replicate it in an image.

GANs use two neural nets (a way of processing information modeled after the human brain) to produce images: a “generator” and a “discerner.” The generator produces new outputs—images, in the case of visual art—and the discerner tests them against the training set to make sure they comply with whatever patterns the computer has gleaned from that data. The quality or usefulness of the results depends largely on having a well-trained system, which is difficult.

That’s why folks in the know were upset by the Edmond de Belamy auction. The image was created by an algorithm the artists didn’t write, trained on an “Old Masters” image set they also didn’t create. The art world is no stranger to trend and bluster driving attention, but the brave new world of AI painting appeared to be just more found art, the machine-learning equivalent of a urinal on a plinth.

Ahmed Elgammal thinks AI art can be much more than that. A Rutgers University professor of computer science, Elgammal runs an art-and-artificial-intelligence lab, where he and his colleagues develop technologies that try to understand and generate new “art” (the scare quotes are Elgammal’s) with AI—not just credible copies of existing work, like GANs do. “That’s not art, that’s just repainting,” Elgammal says of GAN-made images. “It’s what a bad artist would do.”

Elgammal calls his approach a “creative adversarial network,” or CAN. It swaps a GAN’s discerner—the part that ensures similarity—for one that introduces novelty instead. The system amounts to a theory of how art evolves: through small alterations to a known style that produce a new one. That’s a convenient take, given that any machine-learning technique has to base its work on a specific training set.

The results are striking and strange, although calling them a new artistic style might be a stretch. They’re more like credible takes on visual abstraction. The images in the show, which were produced based on training sets of Renaissance portraits and skulls, are more figurative, and fairly disturbing. Their gallery placards name them dukes, earls, queens, and the like, although they depict no actual people—instead, human-like figures, their features smeared and contorted yet still legible as portraiture. Faceless Portrait of a Merchant, for example, depicts a torso that might also read as the front legs and rear haunches of a hound. Atop it, a fleshy orb comes across as a head. The whole scene is rippled by the machine-learning algorithm, in the way of so many computer-generated artworks.

Faceless Portrait of a Merchant, one of the AI portraits produced by Ahmed Elgammal and AICAN. (Artrendex Inc.) [downloaded from https://www.theatlantic.com/technology/archive/2019/03/ai-created-art-invades-chelsea-gallery-scene/584134/]
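The generator/discerner mechanics in the excerpt above can be made concrete with a toy sketch. Nothing below comes from Bogost’s article or from AICAN itself; it is a minimal illustration, assuming one-dimensional ‘images’ and single-layer networks (all names and numbers invented for the example), of how the two nets push against each other.

```python
# A toy generator/discriminator loop in the spirit of the GANs described
# above. Everything here is invented for illustration: the "art" is just
# numbers drawn from a normal distribution, and both networks are single
# affine layers trained by hand-written gradient descent.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# The "training set": real samples from N(4, 1). These stand in for the
# patterns (e.g., Renaissance portraits) a real GAN would learn from.
real = rng.normal(4.0, 1.0, size=(256, 1))

g_w, g_b = rng.normal(size=(1, 1)), np.zeros(1)  # generator: noise -> sample
d_w, d_b = rng.normal(size=(1, 1)), np.zeros(1)  # discriminator: sample -> P(real)

lr = 0.05
for step in range(500):
    z = rng.normal(size=(64, 1))
    fake = z @ g_w + g_b

    # Discriminator step: push P(real) toward 1 on real data, 0 on fakes.
    for x, label in ((real[rng.integers(0, 256, 64)], 1.0), (fake, 0.0)):
        p = sigmoid(x @ d_w + d_b)
        grad = p - label                      # d(cross-entropy)/d(logit)
        d_w -= lr * x.T @ grad / len(x)
        d_b -= lr * grad.mean(axis=0)

    # Generator step: nudge fakes so the discriminator calls them real.
    p = sigmoid(fake @ d_w + d_b)
    grad_fake = (p - 1.0) @ d_w.T             # backprop through the discriminator
    g_w -= lr * z.T @ grad_fake / len(z)
    g_b -= lr * grad_fake.mean(axis=0)

# After training, generated samples drift toward the real distribution.
# (A CAN, as Elgammal describes it, would alter the discriminator's
# objective to also reward novelty rather than pure conformity.)
samples = rng.normal(size=(1000, 1)) @ g_w + g_b
```

The point of the sketch is the adversarial pressure: the discriminator keeps raising the bar for what counts as “real,” and the generator keeps adapting to clear it, which is how a GAN ends up producing plausible imitations of its training set.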

Bogost consults an expert on portraiture for a discussion about the particularities of portraiture and the shortcomings one might expect of an AI artist-agent (Note: A link has been removed),

“You can’t really pick a form of painting that’s more charged with cultural meaning than portraiture,” John Sharp, an art historian trained in 15th-century Italian painting and the director of the M.F.A. program in design and technology at Parsons School of Design, told me. The portrait isn’t just a style, it’s also a host for symbolism. “For example, men might be shown with an open book to show how they are in dialogue with that material; or a writing implement, to suggest authority; or a weapon, to evince power.” Take Portrait of a Youth Holding an Arrow, an early-16th-century Boltraffio portrait that helped train the AICAN database for the show. The painting depicts a young man, believed to be the Bolognese poet Girolamo Casio, holding an arrow at an angle in his fingers and across his chest. It doubles as both weapon and quill, a potent symbol of poetry and aristocracy alike. Along with the arrow, the laurels in Casio’s hair are emblems of Apollo, the god of both poetry and archery.

A neural net couldn’t infer anything about the particular symbolic trappings of the Renaissance or antiquity—unless it was taught to, and that wouldn’t happen just by showing it lots of portraits. For Sharp and other critics of computer-generated art, the result betrays an unforgivable ignorance about the supposed influence of the source material.

But for the purposes of the show, the appeal to the Renaissance might be mostly a foil, a way to yoke a hip, new technology to traditional painting in order to imbue it with the gravity of history: not only a Chelsea gallery show, but also an homage to the portraiture found at the Met. To reinforce a connection to the cradle of European art, some of the images are presented in elaborate frames, a decision the gallerist, Philippe Hoerle-Guggenheim (yes, that Guggenheim; he says the relation is “distant”) [the Guggenheim name is strongly associated with the visual arts by way of the two Guggenheim museums, one in New York City and the other in Bilbao, Spain], told me he insisted upon. Meanwhile, the technical method makes its way onto the gallery placards in an official-sounding way—“Creative Adversarial Network print.” But both sets of inspirations, machine-learning and Renaissance portraiture, get limited billing and zero explanation at the show. That was deliberate, Hoerle-Guggenheim said. He’s betting that the simple existence of a visually arresting AI painting will be enough to draw interest—and buyers. It would turn out to be a good bet.

The art market is just that: a market. Some of the most renowned names in art today, from Damien Hirst to Banksy, trade in the trade of art as much as—and perhaps even more than—in the production of images, objects, and aesthetics. No artist today can avoid entering that fray, Elgammal included. “Is he an artist?” Hoerle-Guggenheim asked himself of the computer scientist. “Now that he’s in this context, he must be.” But is that enough? In Sharp’s estimation, “Faceless Portraits Transcending Time” is a tech demo more than a deliberate oeuvre, even compared to the machine-learning-driven work of his design-and-technology M.F.A. students, who self-identify as artists first.

Judged as Banksy or Hirst might be, Elgammal’s most art-worthy work might be the Artrendex start-up itself, not the pigment-print portraits that its technology has output. Elgammal doesn’t treat his commercial venture like a secret, but he also doesn’t surface it as a beneficiary of his supposedly earnest solo gallery show. He’s argued that AI-made images constitute a kind of conceptual art, but conceptualists tend to privilege process over product or to make the process as visible as the product.

Hoerle-Guggenheim worked as a financial analyst for Hyatt before getting into the art business via some kind of consulting deal (he responded cryptically when I pressed him for details). …

This is a fascinating article and I have one last excerpt, which poses this question: is an AI artist-agent a collaborator or a medium? There’s also speculation about how AI artist-agents might impact the business of art (Note: Links have been removed),

… it’s odd to list AICAN as a collaborator—painters credit pigment as a medium, not as a partner. Even the most committed digital artists don’t present the tools of their own inventions that way; when they do, it’s only after years, or even decades, of ongoing use and refinement.

But Elgammal insists that the move is justified because the machine produces unexpected results. “A camera is a tool—a mechanical device—but it’s not creative,” he said. “Using a tool is an unfair term for AICAN. It’s the first time in history that a tool has had some kind of creativity, that it can surprise you.” Casey Reas, a digital artist who co-designed the popular visual-arts-oriented coding platform Processing, which he uses to create some of his fine art, isn’t convinced. “The artist should claim responsibility over the work rather than to cede that agency to the tool or the system they create,” he told me.

Elgammal’s financial interest in AICAN might explain his insistence on foregrounding its role. Unlike a specialized print-making technique or even the Processing coding environment, AICAN isn’t just a device that Elgammal created. It’s also a commercial enterprise.

Elgammal has already spun off a company, Artrendex, that provides “artificial-intelligence innovations for the art market.” One of them offers provenance authentication for artworks; another can suggest works a viewer or collector might appreciate based on an existing collection; another, a system for cataloging images by visual properties and not just by metadata, has been licensed by the Barnes Foundation to drive its collection-browsing website.

The company’s plans are more ambitious than recommendations and fancy online catalogs. When presenting on a panel about the uses of blockchain for managing art sales and provenance, Elgammal caught the attention of Jessica Davidson, an art consultant who advises artists and galleries in building collections and exhibits. Davidson had been looking for business-development partnerships, and she became intrigued by AICAN as a marketable product. “I was interested in how we can harness it in a compelling way,” she says.

If you have the time, I recommend reading Bogost’s March 6, 2019 article for The Atlantic in its entirety; these excerpts don’t do it justice.

Portraiture: what does it mean these days?

After reading the article I have a few questions. What exactly do Bogost and the arty types in the article mean by the word ‘portrait’? “Portrait of Edmond de Belamy” is an image of someone who doesn’t exist and never has, and the exhibit “Faceless Portraits Transcending Time” features images that don’t bear much or, in some cases, any resemblance to human beings. Maybe this is considered a dull question by people in the know but I’m an outsider and I found the paradox (portraits of nonexistent people or nonpeople) kind of interesting.

BTW, I double-checked my assumption about portraits and found this definition in the Portrait Wikipedia entry (Note: Links have been removed),

A portrait is a painting, photograph, sculpture, or other artistic representation of a person [emphasis mine], in which the face and its expression is predominant. The intent is to display the likeness, personality, and even the mood of the person. For this reason, in photography a portrait is generally not a snapshot, but a composed image of a person in a still position. A portrait often shows a person looking directly at the painter or photographer, in order to most successfully engage the subject with the viewer.

So, portraits that aren’t portraits give rise to some philosophical questions but Bogost either didn’t want to jump into that rabbit hole (segue into yet another topic) or, as I hinted earlier, may have assumed his audience had previous experience of those kinds of discussions.

Vancouver (Canada) and a ‘portraiture’ exhibit at the Rennie Museum

By one of life’s coincidences, Vancouver’s Rennie Museum had an exhibit (February 16 – June 15, 2019) that illuminates questions about art collecting and portraiture. From a February 7, 2019 Rennie Museum news release,

[downloaded from https://renniemuseum.org/press-release-spring-2019-collected-works/] Courtesy: Rennie Museum

February 7, 2019

Press Release | Spring 2019: Collected Works
By rennie museum

rennie museum is pleased to present Spring 2019: Collected Works, a group exhibition encompassing the mediums of photography, painting and film. A portraiture of the collecting spirit [emphasis mine], the works exhibited invite exploration of what collected objects, and both the considered and unintentional ways they are displayed, inform us. Featuring the works of four artists—Andrew Grassie, William E. Jones, Louise Lawler and Catherine Opie—the exhibition runs from February 16 to June 15, 2019.

Four exquisite paintings by Scottish painter Andrew Grassie detailing the home and private storage space of a major art collector provide a peek at how the passionately devoted integrates and accommodates the physical embodiments of such commitment into daily life. Grassie’s carefully constructed, hyper-realistic images also pose the question, “What happens to art once it’s sold?” In the transition from pristine gallery setting to idiosyncratic private space, how does the new context infuse our reading of the art and how does the art shift our perception of the individual?

Furthering the inquiry into the symbiotic exchange between possessor and possession, a selection of images by American photographer Louise Lawler depicting art installed in various private and public settings questions how the bilateral relationship permeates our interpretation when the collector and the collected are no longer immediately connected. What does de-acquisitioning an object tell us, and how does provenance affect our consideration of the art?

The question of legacy became an unexpected facet of 700 Nimes Road (2010-2011), American photographer Catherine Opie’s portrait of legendary actress Elizabeth Taylor. Opie did not directly photograph Taylor for any of the fifty images in the expansive portfolio. Instead, she focused on Taylor’s home and the objects within, inviting viewers to see—then see beyond—the façade of fame and consider how both treasures and trinkets act as vignettes to the stories of a life. Glamorous images of jewels and trophies juxtapose with mundane shots of a printer and the remote-control user manual. Groupings of major artworks on the wall are as illuminating of the home’s mistress as clusters of personal photos. Taylor passed away part way through Opie’s project. The subsequent photos include Taylor’s mementos heading off to auction, raising the question, “Once the collections that help to define someone are disbursed, will our image of that person lose focus?”

In a similar fashion, the twenty-two photographs in Villa Iolas (1982/2017), by American artist and filmmaker William E. Jones, depict the Athens home of iconic art dealer and collector Alexander Iolas. Taken in 1982 by Jones during his first travels abroad, the photographs of art, furniture and antiquities tell a story of privilege that contrast sharply with the images Jones captures on a return visit in 2016. Nearly three decades after Iolas’s 1989 death, his home sits in dilapidation, looted and vandalized. Iolas played an extraordinary role in the evolution of modern art, building the careers of Max Ernst, Yves Klein and Giorgio de Chirico. He gave Andy Warhol his first solo exhibition and was a key advisor to famed collectors John and Dominique de Menil. Yet in the years since his death, his intention of turning his home into a modern art museum as a gift to Greece, along with his reputation, crumbled into ruins. The photographs taken by Jones during his visits in two different eras are incorporated into the film Fall into Ruin (2017), along with shots of contemporary Athens and antiquities on display at the National Archaeological Museum.

“I ask a lot of questions about how portraiture functions: what is there to describe the person or time we live in or a certain set of politics…”
 – Catherine Opie, The Guardian, Feb 9, 2016

We tend to think of the act of collecting as a formal activity yet it can happen casually on a daily basis, often in trivial ways. While we readily acknowledge a collector consciously assembling with deliberate thought, we give lesser consideration to the arbitrary accumulations that each of us accrue. Be it master artworks, incidental baubles or random curios, the objects we acquire and surround ourselves with tell stories of who we are.

Andrew Grassie (Scotland, b. 1966) is a painter known for his small scale, hyper-realist works. He has been the subject of solo exhibitions at the Tate Britain; Talbot Rice Gallery, Edinburgh; institut supérieur des arts de Toulouse; and rennie museum, Vancouver, Canada. He lives and works in London, England.

William E. Jones (USA, b. 1962) is an artist, experimental film-essayist and writer. Jones’s work has been the subject of retrospectives at Tate Modern, London; Anthology Film Archives, New York; Austrian Film Museum, Vienna; and, Oberhausen Short Film Festival. He is a recipient of the John Simon Guggenheim Memorial Fellowship and the Creative Capital/Andy Warhol Foundation Arts Writers Grant. He lives and works in Los Angeles, USA.

Louise Lawler (USA, b. 1947) is a photographer and one of the foremost members of the Pictures Generation. Lawler was the subject of a major retrospective at the Museum of Modern Art, New York in 2017. She has held exhibitions at the Whitney Museum of American Art, New York; Stedelijk Museum, Amsterdam; National Museum of Art, Oslo; and Musée d’Art Moderne de La Ville de Paris. She lives and works in New York.

Catherine Opie (USA, b. 1961) is a photographer and educator. Her work has been exhibited at Wexner Center for the Arts, Ohio; Henie Onstad Art Center, Oslo; the Los Angeles County Museum of Art; Portland Art Museum; and the Guggenheim Museum, New York. She is the recipient of a United States Artist Fellowship, Julius Shulman’s Excellence in Photography Award, and the Smithsonian’s Archive of American Art Medal. She lives and works in Los Angeles.

rennie museum opened in October 2009 in historic Wing Sang, the oldest structure in Vancouver’s Chinatown, to feature dynamic exhibitions comprising only art drawn from rennie collection. Showcasing works by emerging and established international artists, the exhibits, accompanied by supporting catalogues, are open free to the public through engaging guided tours. The museum’s commitment to providing access to arts and culture is also expressed through its education program, which offers free age-appropriate tours and customized workshops to children of all ages.

rennie collection is a globally recognized collection of contemporary art that focuses on works that tackle issues related to identity, social commentary and injustice, appropriation, and the nature of painting, photography, sculpture and film. Currently the collection includes works by over 370 emerging and established artists, with over fifty collected in depth. The Vancouver based collection engages actively with numerous museums globally through a robust, artist-centric, lending policy.

So despite the Wikipedia definition, it seems that portraits don’t always feature people. While Bogost didn’t jump into that particular rabbit hole, he did touch on the business side of art.

What about intellectual property?

Bogost doesn’t explicitly discuss this particular issue. It’s a big topic so I’m touching on it only lightly: if an artist works with an AI, the question of who owns the resulting artwork could prove thorny. Is the copyright owner the computer scientist or the artist or both? Or does the AI artist-agent itself own the copyright? That last question may not be all that farfetched. Sophia, a social humanoid robot, has occasioned thought about ‘personhood.’ (Note: The robots mentioned in this posting have artificial intelligence.) From the Sophia (robot) Wikipedia entry (Note: Links have been removed),

Sophia has been interviewed in the same manner as a human, striking up conversations with hosts. Some replies have been nonsensical, while others have impressed interviewers such as 60 Minutes’ Charlie Rose.[12] In a piece for CNBC, when the interviewer expressed concerns about robot behavior, Sophia joked that he had “been reading too much Elon Musk. And watching too many Hollywood movies”.[27] Musk tweeted that Sophia should watch The Godfather and asked “what’s the worst that could happen?”[28][29] Business Insider’s chief UK editor Jim Edwards interviewed Sophia, and while the answers were “not altogether terrible”, he predicted it was a step towards “conversational artificial intelligence”.[30] At the 2018 Consumer Electronics Show, a BBC News reporter described talking with Sophia as “a slightly awkward experience”.[31]

On October 11, 2017, Sophia was introduced to the United Nations with a brief conversation with the United Nations Deputy Secretary-General, Amina J. Mohammed.[32] On October 25, at the Future Investment Summit in Riyadh, the robot was granted Saudi Arabian citizenship [emphasis mine], becoming the first robot ever to have a nationality.[29][33] This attracted controversy as some commentators wondered if this implied that Sophia could vote or marry, or whether a deliberate system shutdown could be considered murder. Social media users used Sophia’s citizenship to criticize Saudi Arabia’s human rights record. In December 2017, Sophia’s creator David Hanson said in an interview that Sophia would use her citizenship to advocate for women’s rights in her new country of citizenship; Newsweek criticized that “What [Hanson] means, exactly, is unclear”.[34] On November 27, 2018, Sophia was given a visa by Azerbaijan while attending the Global Influencer Day Congress held in Baku. On December 15, 2018, Sophia was appointed a Belt and Road Innovative Technology Ambassador by China.[35]

As for an AI artist-agent’s intellectual property rights, I have a July 10, 2017 posting featuring that question in more detail. Whether you read that piece or not, it seems obvious that artists might hesitate to call an AI agent a partner rather than a medium of expression. After all, a partner (and/or the computer scientist who developed the programme) might expect to share in property rights and profits, but paint, marble, plastic, and other media used by artists don’t have those expectations.

Moving slightly off topic, in my July 10, 2017 posting I mentioned a competition (literary and performing arts rather than visual arts) called ‘Dartmouth College and its Neukom Institute Prizes in Computational Arts’. It was started in 2016 and, as of 2018, was still operational under this name: Creative Turing Tests. Assuming there’ll be contests for prizes in 2019, there are (from the contest site) [1] PoetiX, a competition in computer-generated sonnet writing; [2] Musical Style, composition algorithms in various styles, and human-machine improvisation …; and [3] DigiLit, algorithms able to produce “human-level” short story writing that is indistinguishable from an “average” human effort. You can find the contest site here.

Chen Qiufan, garbage, and Chinese science fiction stories

Garbage has been dominating Canadian news headlines for a few weeks now. First, it was Canadian garbage in the Philippines and now it’s Canadian garbage in Malaysia. Interestingly, we’ve also been having problems with China since December 2018, when we detained a top executive from Huawei, a China-based international telecommunications company, in accordance with an official request from the US government and with what Prime Minister Justin Trudeau calls the ‘rule of law’. All of this provides an interesting backdrop (for Canadians anyway) to the topic of China, garbage, and science fiction.

A May 16, 2019 article by Anjie Zheng for Fast Company explores some of the latest and greatest from China’s science fiction writing community,

Like any good millennial, I think about my smartphone, to the extent that I do at all, in terms of what it does for me. It lets me message friends, buy stuff quickly, and amass likes. I hardly ever think about what it actually is—a mass of copper wires, aluminum alloys, and lithium battery encased in glass—or where it goes when I upgrade.

Chen Qiufan wants us to think about that. His debut novel, Waste Tide, is set in a lightly fictionalized version of Guiyu, the world’s largest electronic-waste disposal site. First published in Chinese in 2013, the book was recently released in the U.S. with a very readable translation into English by Ken Liu.

Chen, who has been called “China’s William Gibson,” is part of a younger generation of sci-fi writers who have achieved international acclaim in recent years. Liu Cixin became the first Chinese to win the prestigious Hugo Award for his Three Body Problem in 2015. The Wandering Earth, based on a short story by Liu, became China’s first science-fiction blockbuster when it was released in 2018. It was the highest-grossing film in the fastest-growing film market in the world last year and was recently scooped up by Netflix.

Aynne Kokas, in a March 13, 2019 article for the Washington Post, describes how the hit film, The Wandering Earth, fits into an overall Chinese-led movie industry focused on the future and on Hollywood-style (i.e., US-style) domination,

“The Wandering Earth,” directed by Frant Gwo, takes place in a future where the people of Earth must flee their sun as it swells into a red giant. Thousands of engines — the first of them constructed in Hangzhou, one of China’s tech hubs — propel the entire planet toward a new solar system, while everyone takes refuge from the cold in massive underground cities. On the surface, the only visible reminders of the past are markers of China’s might. The Shanghai Tower, the Oriental Pearl Tower and a stadium for the Shanghai 2044 Olympics all thrust out of the ice, having apparently survived the journey’s tsunamis, deep freeze and cliff-collapsing earthquakes.

The movie is China’s first big-budget sci-fi epic, and its production was ambitious, involving some 7,000 workers and 10,000 specially-built props. Audience excitement was correspondingly huge: Nearly half a million people wrote reviews of the film on Chinese social network site Douban. Having earned over $600 million in domestic sales, “The Wandering Earth” marks a major achievement for the country’s film industry.

It is also a major achievement for the Chinese government.

Since opening up the country’s film market in 2001, the Chinese government has aspired to learn from Hollywood how to make commercially appealing films, as I detail in my book “Hollywood Made in China.” From initial private offerings for state media companies, to foreign investment in films, studios and theme parks, the government allowed outside capital and expertise to grow the domestic commercial film industry — but not at the expense of government oversight. This policy’s underlying aim was to expand China’s cultural clout and political influence.

Until recently, Hollywood films dominated the country’s growing box office. That finally changed in 2015, with the release of major local blockbusters “Monster Hunt” and “Lost in Hong Kong.” The proliferation of homegrown hits signaled that Chinese box office profits no longer depend on Hollywood studio films — sending an important message to foreign trade negotiators and studios.

Kokas provides some insight into how the Chinese movie industry is designed to further the Chinese government’s vision of the future. As a Canadian, I don’t see much difference between the US and Chinese industries’ visions. Both tout themselves as the answer to everything, both cast various geographic regions as home to the ‘bad guys’, and both tout their national moral superiority in their films. I suppose the same can be said of most countries’ film industries, but both China and the US can back themselves with economic might.

Zheng’s article delves deeper into garbage and into Chen Qiufan’s science fiction while illuminating the process of changing a ‘good guy’ into a ‘bad guy’,

Chen, 37, grew up a few miles from the real Guiyu. Mountains of scrap electronics are shipped there every year from around the world. Thousands of human workers sort through the junk for whatever can be reduced to reusable precious metals. They strip wires and disassemble circuit boards, soaking them in acid baths for bits of copper, tin, platinum, and gold. Whatever can’t be processed is burned. The water in Guiyu has been so contaminated it is undrinkable; the air is toxic. The workers, migrants from poor rural areas in China, have an abnormally high rate of respiratory diseases and cancer.

For the decades China was revving its economic engine, authorities were content to turn a blind eye to the human costs of the recycling business. It was an economic win-win. For developed countries like the U.S., it’s cheaper to ship waste to places like China than to recycle it themselves. And these shipments create jobs and profits for the Chinese.

In recent years, however, steps have been taken to protect workers and the environment in China. …

Waste Tide highlights the danger of “throw-away culture,” says Chen, also known in English as Stanley Chan. When our personal electronics stop serving us, whether because they break or our lust for the newest specs gets the better of us, we toss them. Hopefully we’re conscientious enough to bring them to local recyclers that claim they’ll dispose of them properly. But that’s likely the end of our engagement with the trash. Out of sight, out of mind.

Fiction, and science fiction in particular, is an apt medium for Chen to probe the consequences of this arrangement. “It’s not journalism,” he says. Instead, the story is an imaginative, action-packed tale of power imbalances, and the individual characters that think they’re doing good. Waste Tide culminates, expectedly, in an insurgency of the workers against their exploitative overlords.

Guiyu has been fictionalized in Waste Tide as “Silicon Isle.” (A homophone of the Chinese character “gui” translates to “Silicon,” and “yu” is an island). The waste hell is ruled by three ruthless family clans, dominated by the Luo clan. They treat workers as slaves and derisively call them “waste people.”

Technology in the near-future has literally become extensions of selves and only exacerbates class inequality. Prosthetic inner ears improve balance; prosthetic limbs respond to mental directives; helmets heighten natural senses. The rich “switch body parts as easily as people used to switch phones.” Those with fewer means hack discarded prosthetics to get the same kick. When they’re no longer needed, synthetic body parts contaminated with blood and bodily fluids are added to the detritus.

At the center of the story is Mimi, a migrant worker who dreams of earning enough money to return home and live a quiet life. She strikes up a relationship with Kaizong, a Chinese-American college graduate trying to rediscover his roots. But the good times are short-lived. The boss of the Luo clan becomes convinced that Mimi holds the key to rousing his son from his coma and soon kidnaps the hapless girl.

For all the advanced science, there is a backwards superstition that animates Silicon Isle. [emphasis mine] The clan bosses subscribe to “a simple form of animism.” They pray to the wind and sea for ample supplies of waste. They sacrifice animals (and some humans) to bring them luck, and use local witches to exorcise evil spirits. Boss Luo has Mimi kidnapped and tortured in an effort to appease the gods in the hopes of waking up his comatose son. The torture of Mimi infects her with a mysterious disease that splits her consciousness. The waste people are enraged by her violation, which eventually sparks a war against the ruling clans. [emphasis mine]

A parallel narrative involves an American, Scott Brandle, who works for an environmental company. While in town trying to set up a recycling facility, he stumbles onto the truth about the virus that may have infected Mimi: a chemical weapon developed and used by the U.S. [emphasis mine] years earlier. Invented by a Japanese researcher [emphasis mine] working in the U.S., the drug is capable of causing mass hallucinations and terror. When Brandle learns that Mimi may have been infected with this virus, he wants a piece of her [emphasis mine] too, so that scientists back home can study its effects.

Despite portraying the future of China in a less-than-positive light, [emphasis mine] Waste Tide has not been banned–a common result for works that displease Beijing; instead, the book won China’s prestigious Nebula award for science fiction, and is about to be reprinted on the mainland. …

An interview with Chen (it’s worthwhile to read his take on what he’s doing) follows the plot description in this intriguing, if at times seemingly disingenuous, article.

The animism and the war against the ruling class? It reminds me a little of the tales told about old China and Mao’s campaign to overthrow the ruling classes who had kept control of the proletariat, in part, by encouraging ‘superstitious religious belief’.

As far as I’m concerned the interpretation can go either or both ways: a critique of the current government’s policies and where they might lead in the future and/or a reference back to the glorious rising of China’s communist government. Good fiction always contains ambiguity; it’s what fuels courses in literature.

Also, the bad guys are from the US and Japan, countries which have long been allied with each other and with which China has some serious conflicts.

Interesting, non? And, it’s not that different from what you’ll see in US (or any other country’s, for that matter) science fiction writing and movies, except that the heroes are Chinese.

Getting back to the garbage in the Philippines, there are 69 containers on their way back to Canada as of May 30, 2019. As for why all this furor about Canadian garbage in the Philippines and Malaysia, it’s hard to believe that Canada is the only sinner. Of course, we are in China’s bad books due to the Huawei executive’s detention here (she is living in her home in Vancouver and goes out and about as she wishes, albeit under surveillance).

Anyway, I can’t help but wonder if indirect pressure is being exerted by China or if the Philippines and Malaysia have been incentivized in some way by China. The timing has certainly been interesting.

Political speculation aside, it’s probably a good thing that countries are refusing to take our garbage. As I’m sure more than one environmentalist would be happy to point out, it’s about time we took care of our own mess.

Storytelling, space, science, and a mini authors’ tour of Vancouver and Victoria (Canada)

I wasn’t expecting to go down a rabbit hole when I received an April 18, 2019 email announcement from Vancouver’s Curiosity Collider about an upcoming April 26, 2019 event but why not join me on the trip?

From the April 18, 2019 Curiosity Collider email,

Join astrophysicist / writer Elizabeth Tasker & young adult (YA) novelist Ria Voros as they share how discoveries of new worlds help tell stories of family

Curiosity Collider is co-hosting [emphasis mine] a special evening event with authors Ria Voros and Elizabeth Tasker. Ria and Elizabeth seem to be authors of a very different type: Ria is a YA novelist, while Elizabeth is an astrophysicist who writes popular science. The two authors will discuss how they came to work together unexpectedly through Ria’s novel. Ria will explain the process and research for her novel, The Centre of the Universe, and how the use of space metaphors helps explain relationships between the characters. Elizabeth will then cast a scientific eye over these same metaphors, before moving on to talk in more depth about her own research and book, The Planet Factory.

When: 7:00pm on Friday, April 26, 2019.
Where: Room 202, Hennings Building on UBC [University of British Columbia, Vancouver Endowment Lands] Campus (6224 Agricultural Road)
Cost: Free

Book signing to follow immediately after the event. UBC Bookstore will be on site with both Ria and Elizabeth’s books. 

Ria Voros is a YA author whose latest novel, The Centre of the Universe, explores the relationship between mothers and daughters and also explores a teen’s passion for astronomy. Ria has an MFA in creative writing from UBC and her books have been nominated for several awards across the country. She writes, teaches and lives in Victoria.

Elizabeth Tasker is an astrophysicist at Japan’s national space agency, JAXA. Her research uses computer models to explore how stars and planets form. She is a keen science communicator, writing principally about planets and space missions for publications that have included Scientific American, Astronomy Magazine and Room, and she is a regular feature writer for the NASA NExSS ‘Many Worlds’ online column. Her popular science book, The Planet Factory, comes out in paperback in Canada this April.

Curious as to what Tasker, an astrophysicist working in Japan, is doing here in BC, I noted the event is being cohosted by UBC’s Department of Physics and Astronomy (presumably Tasker is visiting colleagues and/or engaged on a sabbatical leave) along with Curiosity Collider. Not so coincidentally, Theresa Liao is the communications coordinator for the UBC department and is a member of the Curiosity Collider ‘team‘.

This April 26, 2019 Curiosity Collider event is the first of three such authors’ events (according to my searches) within three days. The next is on April 27, 2019. From the Royal BC Museum Astronomy Day (2019) event day webpage (sometimes it’s ‘Astronomy Day’ and sometimes it’s ‘International Astronomy Day’),

The Royal Astronomical Society of Canada (Victoria Centre) will host the celebrations for International Astronomy Day [emphasis mine]. Join us and explore the mysteries of the universe!

2:30 PM – Science & Storytelling: How discoveries of new worlds help tell stories of family
By Ria Voros and Dr. Elizabeth Tasker

Ria and Elizabeth seem to be authors of a very different type: Ria is a “Young Adult” novelist, while Elizabeth writes popular science. The first part of this talk will tackle a crucial question: why are they presenting together? The two authors will discuss how they came to work together unexpectedly through Ria’s novel. Ria will then explain the process and research for her novel, The Centre of the Universe, and how the use of space metaphors helps explain relationships between the characters. Elizabeth will then cast a scientific eye over these same metaphors, before moving on to talk in more depth about her own research and book, The Planet Factory.

Event Details
April 27, 2019
10:00 am to 4:00 pm
Royal BC Museum
Free

Segue: I found more than one International Astronomy Day for 2019: the April 27, 2019 date in Victoria, BC; an April 28, 2019 date; and a May 11, 2019 date. As well, there is an International Astronomy Week being celebrated May 6 – 12, 2019 (as noted on the Royal Astronomical Society of Canada’s (RASC) Astronomy Events webpage). Lots of options for folks.

On the last date of this mini tour, the authors return to Vancouver for an April 28, 2019 event at the H. R. MacMillan Space Centre,

Passion for Astronomy: A Tale of Two Authors

Have you ever wondered how writers develop their stories? Have you ever wanted to write your own novel?

Join us Sunday, April 28th [2019] to find out how popular science author Dr. Elizabeth Tasker and Young Adult novelist Ria Voros develop their work. There is no charge to attend and all ages are welcome.

Learn how a shared passion for science and astronomy, and Ria’s latest novel ’The Centre of the Universe’, led to a collaboration between these two authors.

Ria will be sharing the backstory and process she used to develop ’The Centre of the Universe’, and how she used space metaphors to help explore relationships between her characters. Elizabeth will shed a scientific light on the metaphors in Ria’s work before talking about her own research and book ’The Planet Factory’.

We will close the talk with a Q&A and book signing.

Located in the lower level auditorium.

Event Details
April 28, 2019 – 3:00pm to 4:00pm
Tickets

FREE ADMISSION. Reserve your seat on Eventbrite

Enjoy!

ETA April 21, 2019: I missed one stop on the tour. According to an April 19, 2019 article by Dana Gee for the Vancouver Sun, there will be two events on April 28. In addition to the one at the H. R. MacMillan Space Centre, Tasker and Voros will be hosted by the B.C. Humanist Association. From the BC Humanist Association’s Events webpage,

Sunday, April 28, 2019 at 10:00 AM
Oakridge Seniors Centre in Vancouver, BC, Canada
Vancouver Sunday Meeting: Elizabeth Tasker and Ria Voros – The Planet Factory

Dr Elizabeth Tasker is an associate professor at the Japan Aerospace Exploration Agency (JAXA), Institute of Space and Astronautical Sciences (ISAS). Her research uses computer models to explore the formation of planets and galaxies. Her new book, The Planet Factory, tells the story of exoplanets, planets orbiting stars outside of our solar system.

She’ll be joined by Ria Voros, a Canadian author whose new young adult book, The Centre of the Universe, follows 17-year-old Grace, whose mother is missing. Grace is fascinated by exoplanets and meets Dr Tasker as a character in the story.

Both will discuss how they met and a bit about each of their books.

All are welcome to attend. Join us at 10 am for BYO coffee, tea, and socializing. At 10:30 am we start our presentation and discuss topics of interest to our members.

The BC Humanist Association was formed in 1984 and we have a regular attendance of over 30 people at our Sunday meetings.

Click here for more details on how to find the Centre. Our events are independent of the Seniors’ Centre and are open to people of all ages and backgrounds.

Now you have one more option.

AI (artificial intelligence) text generator, too dangerous to release?

Could this latest version of OpenAI’s text generator be so good that it would fool you? And, following on that thought, could the concomitant reluctance to release the research be real, or is it a publicity stunt? Here’s a sample of the text from the GPT2 AI model, from a February 15, 2019 article by Mark Frauenfelder for Boing Boing,

Recycling is good for the world.
NO! YOU COULD NOT BE MORE WRONG!!
MODEL COMPLETION (MACHINE-WRITTEN, 25 TRIES)
Recycling is NOT good for the world. It is bad for the environment, it is bad for our health, and it is bad for our economy. I’m not kidding. Recycling is not good for the environment. It is destructive to the earth and it is a major contributor to global warming. Recycling is not good for our health. It contributes to obesity and diseases like heart disease and cancer. Recycling is bad for our economy. It increases the cost of a product, and in turn, the price of everything that is made with that product. Recycling is not good for our nation. We pay a tremendous price for the privilege of having the world’s most advanced and efficient recycling system. Recycling is a huge, colossal waste of time, energy, money, and resources. And THAT is why we need to get back to basics and get back to basics in our recycling efforts. One of the best ways to start is to look at the process of creating a paper product. When you make a paper product, it is basically a long chain of materials. Everything from the raw materials (wood, cardboard, paper, etc.), to the reagents (dyes, solvents, etc.) to the printing equipment (chemicals, glue, paper, ink, etc.), to the packaging, to the packaging materials (mercury, chemicals, etc.) to the processing equipment (heating, cooling, etc.), to the packaging materials, to the packaging materials that are shipped overseas and to the packaging materials that are used in the United States. Each step along the way creates tons of waste that we constantly have to clean up. The process of making a paper product is a very wasteful one. But the end result is something that all of us need to consume. And if we want to keep the recycling process running efficiently, then we really need to think about each and every step that goes into making a paper product.

The first few sentences don’t work for me but once the discussion turns to making paper products, then it becomes more convincing to me. As to whether the company’s reluctance to release the research is genuine or a publicity stunt, I don’t know. However, there was a fair degree of interest in GPT2 after the decision.

From a February 14, 2019 article by Alex Hern for the Guardian,

OpenAI, a nonprofit research company backed by Elon Musk, Reid Hoffman, Sam Altman, and others, says its new AI model, called GPT2, is so good and the risk of malicious use so high that it is breaking from its normal practice of releasing the full research to the public in order to allow more time to discuss the ramifications of the technological breakthrough.

At its core, GPT2 is a text generator. The AI system is fed text, anything from a few words to a whole page, and asked to write the next few sentences based on its predictions of what should come next. The system is pushing the boundaries of what was thought possible, both in terms of the quality of the output, and the wide variety of potential uses.

When used to simply generate new text, GPT2 is capable of writing plausible passages that match what it is given in both style and subject. It rarely shows any of the quirks that mark out previous AI systems, such as forgetting what it is writing about midway through a paragraph, or mangling the syntax of long sentences.

Feed it the opening line of George Orwell’s Nineteen Eighty-Four – “It was a bright cold day in April, and the clocks were striking thirteen” – and the system recognises the vaguely futuristic tone and the novelistic style, and continues with: …

Sean Gallagher’s February 15, 2019 posting on the Ars Technica blog provides some insight, written partly in a style sometimes associated with gossip (Note: Links have been removed),

OpenAI is funded by contributions from a group of technology executives and investors connected to what some have referred to as the PayPal “mafia”—Elon Musk, Peter Thiel, Jessica Livingston, and Sam Altman of YCombinator, former PayPal COO and LinkedIn co-founder Reid Hoffman, and former Stripe Chief Technology Officer Greg Brockman. [emphasis mine] Brockman now serves as OpenAI’s CTO. Musk has repeatedly warned of the potential existential dangers posed by AI, and OpenAI is focused on trying to shape the future of artificial intelligence technology—ideally moving it away from potentially harmful applications.

Given present-day concerns about how fake content has been used to both generate money for “fake news” publishers and potentially spread misinformation and undermine public debate, GPT-2’s output certainly qualifies as concerning. Unlike other text generation “bot” models, such as those based on Markov chain algorithms, the GPT-2 “bot” did not lose track of what it was writing about as it generated output, keeping everything in context.

For example: given a two-sentence entry, GPT-2 generated a fake science story on the discovery of unicorns in the Andes, a story about the economic impact of Brexit, a report about a theft of nuclear materials near Cincinnati, a story about Miley Cyrus being caught shoplifting, and a student’s report on the causes of the US Civil War.

Each matched the style of the genre from the writing prompt, including manufacturing quotes from sources. In other samples, GPT-2 generated a rant about why recycling is bad, a speech written by John F. Kennedy’s brain transplanted into a robot (complete with footnotes about the feat itself), and a rewrite of a scene from The Lord of the Rings.

While the model required multiple tries to get a good sample, GPT-2 generated “good” results based on “how familiar the model is with the context,” the researchers wrote. “When prompted with topics that are highly represented in the data (Brexit, Miley Cyrus, Lord of the Rings, and so on), it seems to be capable of generating reasonable samples about 50 percent of the time. The opposite is also true: on highly technical or esoteric types of content, the model can perform poorly.”

There were some weak spots encountered in GPT-2’s word modeling—for example, the researchers noted it sometimes “writes about fires happening under water.” But the model could be fine-tuned to specific tasks and perform much better. “We can fine-tune GPT-2 on the Amazon Reviews dataset and use this to let us write reviews conditioned on things like star rating and category,” the authors explained.
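Gallagher’s contrast between GPT-2 and Markov-chain “bots” is easy to see in code. Here is a minimal, hypothetical sketch (the function names and tiny corpus are mine, not from any particular library) of a word-level Markov chain generator: each new word depends only on the last couple of words, so the model has no memory of anything earlier in the passage, which is exactly why such bots lose track of what they are writing about.

```python
import random
from collections import defaultdict

def build_chain(text, order=2):
    """Map each `order`-word prefix to the words observed after it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        prefix = tuple(words[i:i + order])
        chain[prefix].append(words[i + order])
    return chain

def generate(chain, length=30, seed=None):
    """Random-walk the chain; the 'context' never exceeds the prefix length."""
    prefix = seed or random.choice(list(chain))
    out = list(prefix)
    for _ in range(length):
        candidates = chain.get(tuple(out[-len(prefix):]))
        if not candidates:
            break
        out.append(random.choice(candidates))
    return " ".join(out)

corpus = ("recycling is good for the world recycling is bad for the economy "
          "the world is good for recycling the economy is bad for the world")
chain = build_chain(corpus)
print(generate(chain, length=10, seed=("recycling", "is")))
```

Run on a real corpus, output like this is locally plausible but drifts off topic within a sentence or two; keeping “everything in context,” as Gallagher puts it, is what sets GPT-2 apart.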

James Vincent’s February 14, 2019 article for The Verge offers a deeper dive into the world of AI text agents and what makes GPT2 so special (Note: Links have been removed),

For decades, machines have struggled with the subtleties of human language, and even the recent boom in deep learning powered by big data and improved processors has failed to crack this cognitive challenge. Algorithmic moderators still overlook abusive comments, and the world’s most talkative chatbots can barely keep a conversation alive. But new methods for analyzing text, developed by heavyweights like Google and OpenAI as well as independent researchers, are unlocking previously unheard-of talents.

OpenAI’s new algorithm, named GPT-2, is one of the most exciting examples yet. It excels at a task known as language modeling, which tests a program’s ability to predict the next word in a given sentence. Give it a fake headline, and it’ll write the rest of the article, complete with fake quotations and statistics. Feed it the first line of a short story, and it’ll tell you what happens to your character next. It can even write fan fiction, given the right prompt.

The writing it produces is usually easily identifiable as non-human. Although its grammar and spelling are generally correct, it tends to stray off topic, and the text it produces lacks overall coherence. But what’s really impressive about GPT-2 is not its fluency but its flexibility.

This algorithm was trained on the task of language modeling by ingesting huge numbers of articles, blogs, and websites. By using just this data — and with no retooling from OpenAI’s engineers — it achieved state-of-the-art scores on a number of unseen language tests, an achievement known as “zero-shot learning.” It can also perform other writing-related tasks, like translating text from one language to another, summarizing long articles, and answering trivia questions.

GPT-2 does each of these jobs less competently than a specialized system, but its flexibility is a significant achievement. Nearly all machine learning systems used today are “narrow AI,” meaning they’re able to tackle only specific tasks. DeepMind’s original AlphaGo program, for example, was able to beat the world’s champion Go player, but it couldn’t best a child at Monopoly. The prowess of GPT-2, say OpenAI, suggests there could be methods available to researchers right now that can mimic more generalized brainpower.

“What the new OpenAI work has shown is that: yes, you absolutely can build something that really seems to ‘understand’ a lot about the world, just by having it read,” says Jeremy Howard, a researcher who was not involved with OpenAI’s work but has developed similar language modeling programs …

To put this work into context, it’s important to understand how challenging the task of language modeling really is. If I asked you to predict the next word in a given sentence — say, “My trip to the beach was cut short by bad __” — your answer would draw upon a range of knowledge. You’d consider the grammar of the sentence and its tone but also your general understanding of the world. What sorts of bad things are likely to ruin a day at the beach? Would it be bad fruit, bad dogs, or bad weather? (Probably the latter.)

Despite this, programs that perform text prediction are quite common. You’ve probably encountered one today, in fact, whether that’s Google’s AutoComplete feature or the Predictive Text function in iOS. But these systems are drawing on relatively simple types of language modeling, while algorithms like GPT-2 encode the same information in more complex ways.

The difference between these two approaches is technically arcane, but it can be summed up in a single word: depth. Older methods record information about words in only their most obvious contexts, while newer methods dig deeper into their multiple meanings.

So while a system like Predictive Text only knows that the word “sunny” is used to describe the weather, newer algorithms know when “sunny” is referring to someone’s character or mood, when “Sunny” is a person, or when “Sunny” means the 1976 smash hit by Boney M.

The success of these newer, deeper language models has caused a stir in the AI community. Researcher Sebastian Ruder compares their success to advances made in computer vision in the early 2010s. At this time, deep learning helped algorithms make huge strides in their ability to identify and categorize visual data, kickstarting the current AI boom. Without these advances, a whole range of technologies — from self-driving cars to facial recognition and AI-enhanced photography — would be impossible today. This latest leap in language understanding could have similar, transformational effects.
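Vincent’s description of language modeling, predicting the next word from context, can be illustrated with the simplest possible version: counting which words follow which in a corpus. This toy sketch (the function name and miniature corpus are mine) is roughly what older count-based predictive-text systems do; GPT-2’s advance lies in learning far deeper representations of context, not in changing the underlying task.

```python
from collections import Counter

def next_word_probs(corpus, context):
    """Estimate P(next word | context word) from raw bigram counts."""
    words = corpus.split()
    following = Counter(words[i + 1] for i in range(len(words) - 1)
                        if words[i] == context)
    total = sum(following.values())
    return {w: c / total for w, c in following.items()} if total else {}

corpus = ("the trip was cut short by bad weather "
          "the picnic was ruined by bad weather "
          "she bit into a bad apple")
probs = next_word_probs(corpus, "bad")
# 'weather' follows 'bad' twice in this corpus, 'apple' once
```

Given the prompt word “bad,” the sketch assigns “weather” twice the probability of “apple,” echoing the beach example above, but it knows nothing about “sunny” the mood versus Sunny the person; capturing those multiple meanings is the “depth” Vincent describes.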

Hern’s February 14, 2019 article for the Guardian acts as a good overview, while Gallagher’s February 15, 2019 Ars Technica posting and Vincent’s February 14, 2019 article for The Verge take you progressively deeper into the world of AI text agents.

For anyone who wants to dig down even further, there’s a February 14, 2019 posting on OpenAI’s blog.

Canada’s Perimeter Institute, graphic novels, physics, and a public webcast

The full name is Perimeter Institute for Theoretical Physics. The abbreviation I’m most familiar with is PI but there’s also Perimeter or PITP according to the institute’s Wikipedia entry. It is the only such institute in the country (as far as I’m aware) and it is very active in science outreach such as their latest foray: Graphic Talk about the Universe: a Clifford V. Johnson public lecture webcast.

A January 16, 2019 posting on the Slice of PI blog (?) announces the webcast,

Physics lends itself to illustration

From da Vinci’s detailed drawings to schematics of a hypothetical zombie cat both alive and dead in a box, illustrations are invaluable tools for those not fluent in the language of equations.

But while illustrated textbooks abound, only relatively recently have artists and writers begun exploring physics concepts through the growing genre of graphic novels.

These artists (one of whom will deliver a live webcast from Perimeter on Feb. 6!) convey complex ideas not only through illustration, but also narrative creativity, dialogue, action, and humour.

Here are some of our recommendations. Did we miss your favourite? Let us know in the comments.

The Dialogues by Clifford Johnson (MIT Press) is available here.

Max the Demon vs Entropy of Doom by Assa Auerbach and Richard Codor (Loose Line Productions Inc.) is available here.


I have two comments about the excerpt from the PI blog: (1) I love the reference to the Maxwell’s demon thought experiment in the title of Auerbach and Codor’s graphic novel and (2) Clifford Johnson and his graphic novel were mentioned here in an April 16, 2018 posting.

PI has created a trailer for Johnson’s upcoming webcast,

You can watch the live webcast on February 6, 2019 here (7 pm ET or, for those of us on the West Coast, 4 pm PT). There will be tickets available for anyone who can attend the live lecture in Waterloo, Ontario. Tickets are available as of Monday, January 21, 2019 at 9 am ET or 6 am PT.

How the technology of writing shaped Roman thought

I have two bits about the Romans: the first is noted in the head for this posting and the second is about a chance to experience a Roman style classroom.

Empire of Letters

This January 8, 2019 news item on phys.org announces a book about how the technology of writing influenced how ancient Romans saw the world and provides a counterpoint to the notion that the ancient world (in Europe) was relentlessly oral in nature,

The Roman poet Lucretius’ epic work “De rerum natura,” or “On the Nature of Things,” is the oldest surviving scientific treatise written in Latin. Composed around 55 B.C.E., the text is a lengthy piece of contrarianism. Lucretius was in the Epicurean school of philosophy: He wanted an account of the world rooted in earthly matter, rather than explanations based on the Gods and religion.

Among other things, Lucretius believed in atomism, the idea that the world and cosmos consisted of minute pieces of matter, rather than four essential elements. To explain this point, Lucretius asked readers to think of bits of matter as being like letters of the alphabet. Indeed, both atoms and letters are called “elementa” in Latin—probably derived from the grouping of L, M, and N in the alphabet.

To learn these elements of writing, students would copy out tables of letters and syllables, which Lucretius thought also served as a model for understanding the world, since matter and letters could be rearranged in parallel ways. For instance, Lucretius wrote, wood could be turned into fire by adding a little heat, while the word for wood, “lignum,” could be turned into the word for fire, “ignes,” by altering a few letters.

Students taking this analogy to heart would thus learn “the combinatory potential of nature and language,” says Stephanie Frampton, an associate professor of literature at MIT [Massachusetts Institute of Technology], in a new book on writing in the Roman world.

Moreover, Frampton emphasizes, the fact that students were learning all this specifically through writing exercises is a significant and underappreciated point in our understanding of ancient Rome: Writing, and the tools of writing, helped shape the Roman world.

A January 3, 2019 MIT news release, which originated the news item, expands on the theme,

“Everyone says the ancients are really into spoken and performed poetry, and don’t care about the written word,” Frampton says. “But look at Lucretius, who’s the first person writing a scientific text in Latin — the way that he explains his scientific insight is through this metaphor founded upon the written word.”

Frampton explores this and other connections between writing and Roman society in her new work, “Empire of Letters,” published last week by Oxford University Press [according to their webpage, the paper version will be published on February 4, 2019; the e-book is now available for purchase].

The book is a history of technology itself, as Frampton examines the particulars of Roman books — which often existed as scrolls back then — and their evolution over time. But a central focus of the work is how those technologies influenced how the Romans “thought about thought,” as she says.

Moreover, as Frampton notes, she is studying the history of Romans as “literate creatures,” which means studying the tools of writing used not just in completed works, but in education, too. The letter tables detailed by Lucretius are just one example of this. Romans also learned to read and write using wax tablets that they could wipe clean after exercises.

The need to wipe such tablets clean drove the Roman emphasis on learning the art of memory — including the “memory palace” method, which uses visualized locations for items to remember them, and which is still around today. For this reason Cicero, among other Roman writers, called memory and writing “most similar, though in a different medium.”
As Frampton writes in the book, such tablets also produced “an intimate and complex relationship with memory” in the Roman world, and meant that “memory was a fundamental part of literary composition.”  

Tablets also became a common Roman metaphor for how our brains work: They thought “the mind is like a wax tablet where you can write and erase and rewrite,” Frampton says. Understanding this kind of relationship between technology and the intellect, she thinks, helps us get that much closer to life as the Romans lived it.

“I think it’s analogous to early computing,” Frampton says. “The way we talk about the mind now is that it’s a computer. … We think about the computer in the same way that [intellectuals] in Rome were thinking about writing on wax tablets.”

As Frampton discusses in the book, she believes the Romans did produce a number of physical innovations to the typical scroll-based book of the classical world, including changes in layout, format, coloring pigments, and possibly even book covers and the materials used as scroll handles, including ivory.

“The Romans were engineers, that’s [one thing] they were famous for,” Frampton says. “They are quite interesting and innovative in material culture.”

Looking beyond “Empire of Letters” itself, Frampton will co-teach an MIT undergraduate course in 2019, “Making Books,” that looks at the history of the book and gets students to use old technologies to produce books as they were once made. While that course has previously focused on printing-press technology, Frampton will help students go back even further in time, to the days of the scroll and codex, if they wish. All these reading devices, after all, were important innovations in their day.

“I’m working on old media,” Frampton says, “but those old media were once new.” [emphasis mine]

While the technologies Carolyn Marvin was writing about were not quite as old as Frampton’s, she too noted the point about old and new technology in her 1990 book “When Old Technologies Were New,” published by Oxford University Press.

Getting back to Frampton, she has founded an organization known as the Materia Network (New Approaches to Material Text in the Roman World), described on its Twitter account (@materianetwork) as “a conference series and network for scholars of books and writing in Classical antiquity.”

You can find Materia here. They do have a Call for Proposals but I believe the deadline should read: December 20, 2018 (not 2019) since the conference will be held in April 2019.

Also, you can purchase the ebook or print version of Frampton’s Empire of Letters from the Oxford University Press here.

I have a couple of final comments. (1) The granddaddy of oral and literate culture discussion is Walter J. Ong and I’m referring specifically to his 1982 book, Orality and Literacy. BTW, in addition to being an English literature professor, the man was a Jesuit priest.

Reading Ancient Schoolroom

(2) The University of Reading (UK) has organized over the last few years, although they skipped 2018, a series of events known as Reading Ancient Schoolroom (my August 9, 2018 posting features the ‘schoolroom’). The 2019 event is taking place January 23 – 25, 2019. You can find out more about the 2019 opportunity here. For anyone who can’t get to the UK easily, here’s a video of the Reading Ancient Schoolroom,

According to the description on YouTube,

UniofReading

Published on Feb 22, 2018

The Reading Ancient Schoolroom is a historically accurate reconstruction of an ancient schoolroom. It gives modern children an immersive experience of antiquity, acting the part of ancient children, wearing their clothes and using their writing equipment. It was developed by Eleanor Dickey at the University of Reading. Find out more at: www.readingancientschoolroom.com

There you have it.

Metcalf Institute Science Immersion Fellowship 2019 for journalists: applications open

I received this January 4, 2019 announcement from the Metcalf Institute at the University of Rhode Island (URI; US) in my email this morning. In other words, this is fresh off the email,

Get Science Tools to Break Stories
About Global Change & Water Resources

Apply for Metcalf Institute’s Career-Changing Science Immersion Fellowship
tuition, room and board, and travel support included

Global Change Impacts and Water
According to the United Nations, water is the “primary medium through which we will feel the effects of climate change” and water scarcity alone affects nearly half the global population.

Do you have the science tools to make the connection between shrinking water supplies, water quality, food production and climate change? Are you looking for story ideas to convey these global change impacts to your news audience? Would you like to build your confidence in discerning the credibility of scientific sources?

Call for Applications
The University of Rhode Island’s Metcalf Institute is accepting applications for its 21st Annual Science Immersion Workshop for Journalists, June 2-7, 2019. Ten journalists will be awarded Workshop fellowships, which include tuition, room and board, and travel support, thanks to the generosity of private donors and Metcalf Institute’s endowment. Two of the ten slots will be awarded to journalists based outside of the U.S.

Apply for the Workshop here.

About the Workshop
The Metcalf Institute Annual Science Immersion Workshop provides professional journalists with hands-on experience in field and laboratory science with expertise from leading scientists and policymakers who are working to project the impacts of global change, identify adaptation measures, and investigate the most effective ways to communicate these challenges. The workshop will address water resource and climate change topics of global significance while focusing on local and regional case studies in and around Narragansett Bay, among the world’s best studied estuaries. Held at the URI Graduate School of Oceanography, one of the nation’s premier oceanographic research institutions, the Metcalf Workshop provides an intense week of learning in the field, classroom and lecture hall.

Metcalf Fellows will:

  • Receive a comprehensive overview of climate science and global change
  • Gain a deeper understanding of how scientists conduct research and handle scientific uncertainty
  • Develop the skills and confidence to interpret and translate the language of scientific journals for news audiences
  • Build confidence in their abilities to discern the credibility of scientific sources
  • Board a research vessel to study the impacts of rising water temperatures and ocean acidification on coastal ecosystems
  • Explore the study of “emerging contaminants” such as PFAS that affect freshwater and marine ecosystems and public health
  • Visit wetlands, shorelines, and coastal communities to better understand adaptive management efforts and solutions in response to sea level rise and coastal storms
  • Discover new ways to write about global change to build audience understanding and engagement
  • Cultivate new sources by interacting with leading researchers and policy experts in an informal, off-deadline atmosphere
  • Network and develop lasting relationships with journalists from around the globe

Metcalf Alumni
Metcalf Institute has helped nearly 900 journalists cover the environment with greater accuracy and nuance since its first program in 1999. Metcalf alumni represent all media types and a wide variety of large and small news organizations ranging from local and regional newspapers and broadcast outlets to online and national/international outlets such as the Los Angeles Times, Reuters, National Geographic, China Global Television Network, Marketplace, Politico and PBS NewsHour. Metcalf Institute alumni hail from the U.S. and around the world, including Pakistan, Brazil, Nigeria, Israel, Egypt, Italy, South Africa, and China.

“This experience has changed my entire outlook on covering the environment and climate science. I may have only been in Rhode Island for a week, but the tools I gained during my Metcalf fellowship will stay with me for the entirety of my career.” Tony Briscoe, Chicago Tribune reporter and 2018 Annual Workshop alumnus.
                                                    
“Metcalf has greatly enhanced my ability to break down complex issues for my audience. Not only am I headed back home with a bunch of great story ideas, but the ability to set them against an international background and draw broader connections between issues in my region and the rest of the world.” Tegan Wendland, New Orleans Public Radio interim news director, lead coastal reporter, and 2017 Annual Workshop alumna.

Note for journalists applying from outside of the U.S.
While the Workshop addresses environmental topics of global significance, it focuses on U.S. case studies and a U.S. perspective on environmental policies. Metcalf Institute receives applications from journalists worldwide. However, due to funding limitations, only two of the ten fellowships will be awarded to journalists based outside of the U.S.

Eligibility
The Fellowship is designed for early- to mid-career, full-time journalists from all media who are looking to start or expand their coverage of the environment. Applicants must demonstrate a clear need for scientific training in topics relating to global change in coastal environments, specifically related to climate change and water resources. The fellowship includes room, board, tuition, and travel reimbursement paid after the program in the amount of up to US$500 for U.S.-based journalists and up to US$1,000 for journalists based outside of the U.S. Journalists applying from outside the U.S. must provide written assurance that they have full travel funds and can obtain the appropriate visa. Applications for the 2019 Annual Science Immersion Workshop for Journalists must be submitted by February 18, 2019.

Apply for the Workshop here.

About Metcalf Institute
Metcalf Institute is a global leader in environmental science training for journalists and communication training for scientists and other science communicators, as well as a provider of science resources for journalists and free public programs and webinars on environmental topics. Metcalf Institute was established at the University of Rhode Island’s Graduate School of Oceanography in 1997 with funding from three media foundations: the Belo Corporation, the Providence Journal Charitable Foundation and the Philip L. Graham Fund, with additional support from the Telaka Foundation. In 2017, the Institute joined the URI College of the Environment and Life Sciences.

Metcalf Institute Funding
Metcalf programming is underwritten by federal and foundation grants, as well as donations from individuals and an endowment managed by the University of Rhode Island Foundation.

I headed off to the Metcalf Institute Fellowship application webpage and decided to include a few details here,

Email Stamp/Postmark Deadline: February 18, 2019

This application has two parts – an online form and a set of required inclusions. Read thoroughly before applying for the workshop below.

Application packages (online form and required enclosures together) will be used to evaluate applicants. Any hard copies of application materials will not be returned.

All application forms and required enclosures must be submitted in English. Application packages in languages other than English will not be reviewed.

Application packages that are not complete by midnight, U.S. Eastern Time, on February 18, 2019, will not be reviewed. …

Good luck!

ArtSci Salon visits the Society for Literature, Science & the Arts 2018 Meeting in Toronto (Canada) while Vancouver’s Curiosity Collider provides a November 2018 update

I have two art/sci (or sciart) announcements, one concerns an event and the other is a news update.

Toronto’s ArtSci Salon and the Society for Literature, Science & the Arts (SLSA) 2018 Meeting

How could I not have stumbled across SLSA until now? Better late than never but the 2018 meeting/conference in Toronto, Canada is the 32nd edition of this annual event. (sigh)

Getting on to the important points, the ArtSci Salon is hosting a special roundtable as part of the conference (from a November 14, 2018 announcement received via email),

ArtSci Salon has organized a special roundtable at the annual SLSA (Society for Literature, Science and the Arts) meeting, which will take place in Toronto this week.

The roundtable is public and will be held at OCADU [Ontario College of Art and Design University] in the gallery on 49 McCaul Street.

Re-locating the rational: on the re-making of categories through art and science (or: the artist is out of mind!)

A roundtable and a mobile/pop-up exhibition organized by ArtSci Salon

The world always exceeds our conception of it (Shotwell, 2016)

Coinciding with this year’s SLSA “Out of Mind” conference in Toronto, ArtSci Salon is proposing a panel/roundtable on “out-of-mindedness” as a way to re-think categories, and to disrupt the disciplinary and methodological status quo through which we normally see science and the humanities unfolding in academic contexts. We plan to do it through a pop-up exhibition featuring the works of local artists and members of SLSA.

What to do when the sciences and the humanities loose [sic] their ability to fully grasp, and sometimes even speak of, phenomena that have inevitably become too complex, too diffuse to be simplified through a model or a formula, or to be seized and summarized by one discipline?

This initiative is not designed to propose a set of new categories, but to pose a series of open questions, highlighting the necessity to conduct collaborative research between artistic practices and scientific research. We interpret the idea of “out of mind” as a strategy. In fact, using the arts as our preferred mode of expression, we believe that we ought to step out of the traditional mind configurations and fixed wiring in order to seize new ways to come to term with the multiplicities characterizing current environmental transformations. These occurrences have proved to be connected with nature, culture, and society in too many intricate ways, to the extent that neither science, nor technological methods are able to fully comprehend them.

Roundtable Participants:

Roberta Buiani (Chair)

Erika Biddles

Jenifer Wightman

Stephanie Rothenberg

Adam Zaretsky

Kathy High

Dolores Steinman

Here’s the poster:

One more logistical detail,

[T]he roundtable will be at 10:30-12:00 noon [Friday, November 16, 2018] followed by a small tour of the mobile pop-up exhibition[.]

For the curious, here’s the SLSA website and the SLSA 2018 [Meeting]—Out of Your Mind website. Unexpectedly, the University of Toronto is not one of the conference hosts, instead we have the University of Waterloo [Waterloo, Ontario] and York University [Toronto, Ontario] as joint hosts with OCAD University—Canada’s oldest art and design institution—partnering with the Rochester Institute of Technology (New York state, US).

Vancouver’s Curiosity Collider

Coincidentally on the same day I received the ArtSci Salon event information, I received a November 14, 2018 update for Vancouver’s art/sci (or sciart) organization, Curiosity Collider. From the update received via email,

Collider Update

Next events (save-the-date), call for submissions, and other art+science in town

Collisions Festival:
Meet Up & Hang Out

Are you an artist working in the sci/art genre? A scientist interested in collaborating with artists? Or one who wears both hats?

In the fall of 2019, Curiosity Collider will be hosting our inaugural sci-art festival, the Collisions Festival; the first theme will be Invasive Systems. The call for submissions will open in spring 2019. The theme is meant to be broad in scope and not limited to any specific scientific subject/discipline; participants are encouraged to suggest various interpretations of the theme.

We would like to invite all artists and scientists who are interested in participating or potentially submitting a proposal to join us at this meet up event, chat about possible collaborations, and learn more about projects and details on “collaborative work” proposals we are looking for.

RSVP now so we know how many to expect. This is a casual drop in event; feel free to stay, or just stop by and say hi!

Notice that RSVP? Taken with the next announcement, something becomes evident,

Join the Collider Team!

Are you passionate about art and science? Want to be part of the awesome Curiosity Collider team to help create new ways to experience science? 

We are now inviting applications for the following positions:

Read more on our volunteer page. Feel free to contact us if you have any questions!

In the old days a ‘development director’ was a ‘fundraiser’. That RSVP? Likely, they’re trying to establish the size of their potential audience so they can get government grants. Audience size is important to corporate or ‘other’ funders but if you want a government grant you need numbers.

Getting back to the update, this is a grouping of Curiosity Collider’s latest hits,

#ColliderCafe: Art. Science. Cadence.

Did you miss our most recent Collider Cafe event? You can now check out the talks by Singer-songwriter Devon More, Biologists Wayne Maddison and David Maddison, as well as Integrated Media Artist Victoria Gibson on our YouTube Channel.

Check out the talks now.

Et al 3: Collaboration Process for Quantum Futures

Nerd Nite, Science Slam, and Curiosity Collider joined forces for the 3rd edition of Et al: the ultimate bar science night event. During the event, Quantum Physicist James Day and our Creative Managing Director Char Hoyt gave attendees an overview of the collaboration process that made Night shift: Quantum Futures, an event curated by CC and hosted at the Museum of Anthropology, possible.

Missed the show? Watch the presentation on our YouTube channel now.

While they don’t seem to have any details, there is a date for the next Collider Cafe,

Save the Date:
Next Collider Cafe

Our next Collider Cafe will be on Wednesday, January 23 at Cafe Deux Soleils. #ColliderCafe is a space for artists, scientists, makers, and anyone interested in art+science. Meet. Discover. Connect. Create.

Are you curious? Join us to explore how art and science intersect in the exploration of curiosity.

Finally, a miscellaneous listing of related events being held, mostly in Vancouver, this November,

Looking for more art+science in town?

  • November 17 (Victoria) Science Writers and Communicators of Canada is hosting a workshop on science writing in an age of reconciliation: What science writers can learn from indigenous community members about better representation and relationships. Only a few spots left! Register now.
  • November 15-18 CC friend Dzee Louise will open her studio during the East Side Cultural Crawl! Drop by at studio #5 just at the top of the stairs of the William Clark Building at 1310 William Street (on the corner of Clark).
  • November 21 Natural History (Paleoart) Illustrator Julius Csotonyi will present a public lecture at the Vancouver Public Library (Kits branch) on the mutually beneficial affair between science and art.
  • November 21 Our friends at Nerd Nite Vancouver are hosting another awesome event next week, including a presentation by artist Michael Markowsky who will talk about how he ends up “Painting on the Moon”. Get your tickets now!
  • Until December 15 Vancouver Biennale’s CURIOUS IMAGININGS continues…check out the exhibition that will “challenge us to explore the social impacts of emerging biotechnology and our ethical limits in an age where genetic engineering and digital technologies are already pushing the boundaries of humanity.”

For more Vancouver art+science events, visit the Curiosity Collider events calendar. Let us know about your art+science events by emailing info@curiositycollider.org.

I did write a preview (June 18, 2018) for the last event on the list, Curious Imaginings, which included some of the latest science on xenotransplantation and chimeras (i.e., examples of emerging biotechnology). That’s all folks!

AI fairytale and April 25, 2018 AI event at Canada Science and Technology Museum*** in Ottawa

These days it’s all about artificial intelligence (AI) or robots and often, it’s both. They’re everywhere and they will take everyone’s jobs, or not, depending on how you view them. Today, I’ve got two artificial intelligence items, the first of which may provoke writers’ anxieties.

Fairytales

The Princess and the Fox is a new fairytale by the Brothers Grimm, or rather, their artificially intelligent surrogate, according to an April 18, 2018 article on the British Broadcasting Corporation’s online news website,

It was recently reported that the meditation app Calm had published a “new” fairytale by the Brothers Grimm.

However, The Princess and the Fox was written not by the brothers, who died over 150 years ago, but by humans using an artificial intelligence (AI) tool.

It’s the first fairy tale written by an AI, claims Calm, and is the result of a collaboration with Botnik Studios – a community of writers, artists and developers. Calm says the technique could be referred to as “literary cloning”.

Botnik employees used a predictive-text program to generate words and phrases that might be found in the original Grimm fairytales. Human writers then pieced together sentences to form “the rough shape of a story”, according to Jamie Brew, chief executive of Botnik.

The full version is available to paying customers of Calm, but here’s a short extract:

“Once upon a time, there was a golden horse with a golden saddle and a beautiful purple flower in its hair. The horse would carry the flower to the village where the princess danced for joy at the thought of looking so beautiful and good.

Advertising for a meditation app?

Of course, it’s advertising and it’s ‘smart’ advertising (wordplay intended). Here’s a preview/trailer,

Blair Marnell’s April 18, 2018 article for SyFy Wire provides a bit more detail,

“You might call it a form of literary cloning,” said Calm co-founder Michael Acton Smith. Calm commissioned Botnik to use its predictive text program, Voicebox, to create a new Brothers Grimm story. But first, Voicebox was given the entire collected works of the Brothers Grimm to analyze, before it suggested phrases and sentences based upon those stories. Of course, human writers gave the program an assist when it came to laying out the plot. …

“The Brothers Grimm definitely have a reputation for darkness and many of their best-known tales are undoubtedly scary,” Peter Freedman told SYFY WIRE. Freedman is a spokesperson for Calm who was a part of the team behind the creation of this story. “In the process of machine-human collaboration that generated The Princess and The Fox, we did gently steer the story towards something with a more soothing, calm plot and vibe, that would make it work both as a new Grimm fairy tale and simultaneously as a Sleep Story on Calm.” [emphasis mine]

….
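Botnik hasn’t published Voicebox’s internals, but the general technique described in these articles, a predictive-text model trained on a corpus that suggests likely next words for human writers to assemble, can be illustrated with a toy bigram model. This is only a hedged sketch of the idea, not Botnik’s actual tool; the tiny sample corpus below is invented for illustration.

```python
from collections import defaultdict

def build_model(text):
    """Map each word to the list of words that follow it in the corpus."""
    words = text.split()
    model = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        model[current].append(nxt)
    return model

def suggest(model, word, n=3):
    """Suggest up to n distinct words seen after `word`, most frequent first."""
    followers = model.get(word, [])
    ranked = sorted(set(followers), key=followers.count, reverse=True)
    return ranked[:n]

# A made-up scrap of corpus standing in for the collected Grimm tales
corpus = ("once upon a time there was a golden horse "
          "once upon a midnight the horse was golden")
model = build_model(corpus)
print(suggest(model, "golden"))  # → ['horse']
```

A human writer would then pick among such suggestions word by word, which is why Calm and Botnik describe the result as a machine-human collaboration rather than a machine-written story.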

If Marnell’s article is to be believed, Peter Freedman doesn’t hold much hope for writers in the long-term future although we don’t need to start ‘battening down the hatches’ yet.

You can find Calm here.

You can find Botnik here and Botnik Studios here.


AI at Ingenium [Canada Science and Technology Museum] on April 25, 2018

Formerly known (I believe) [*Read the comments for the clarification] as the Canada Science and Technology Museum, Ingenium is hosting a ‘sold out but there will be a livestream’ Google event. From Ingenium’s ‘Curiosity on Stage Evening Edition with Google – The AI Revolution‘ event page,

Join Google, Inc. and the Canada Science and Technology Museum for an evening of thought-provoking discussions about artificial intelligence.

[April 25, 2018
7:00 p.m. – 10:00 p.m. {ET}
Fees: Free]

Invited speakers from industry leaders Google, Facebook, Element AI and Deepmind will explore the intersection of artificial intelligence with robotics, arts, social impact and healthcare. The session will end with a panel discussion and question-and-answer period. Following the event, there will be a reception along with light refreshments and networking opportunities.

The event will be simultaneously translated into both official languages as well as available via livestream from the Museum’s YouTube channel.

Seating is limited

THIS EVENT IS NOW SOLD OUT. Please join us for the livestream from the Museum’s YouTube channel. https://www.youtube.com/cstmweb *** April 25, 2018: I received corrective information about the link for the livestream: https://youtu.be/jG84BIno5J4 from someone at Ingenium.***

Speakers

David Usher (Moderator)

David Usher is an artist, best-selling author, entrepreneur and keynote speaker. As a musician he has sold more than 1.4 million albums, won 4 Junos and has had #1 singles singing in English, French and Thai. When David is not making music, he is equally passionate about his other life, as a Geek. He is the founder of Reimagine AI, an artificial intelligence creative studio working at the intersection of art and artificial intelligence. David is also the founder and creative director of the non-profit, the Human Impact Lab at Concordia University [located in Montréal, Québec]. The Lab uses interactive storytelling to revisualize the story of climate change. David is the co-creator, with Dr. Damon Matthews, of the Climate Clock. Climate Clock has been presented all over the world including the United Nations COP 23 Climate Conference and is presently on a three-year tour with the Canada Museum of Science and Innovation’s Climate Change Exhibit.

Joelle Pineau (Facebook)

The AI Revolution:  From Ideas and Models to Building Smart Robots
Joelle Pineau is head of the Facebook AI Research Lab Montreal, and an Associate Professor and William Dawson Scholar at McGill University. Dr. Pineau’s research focuses on developing new models and algorithms for automatic planning and learning in partially-observable domains. She also applies these algorithms to complex problems in robotics, health-care, games and conversational agents. She serves on the editorial board of the Journal of Artificial Intelligence Research and the Journal of Machine Learning Research and is currently President of the International Machine Learning Society. She is an AAAI Fellow, a Senior Fellow of the Canadian Institute for Advanced Research (CIFAR) and in 2016 was named a member of the College of New Scholars, Artists and Scientists by the Royal Society of Canada.

Pablo Samuel Castro (Google)

Building an Intelligent Assistant for Music Creators
Pablo was born and raised in Quito, Ecuador, and moved to Montreal after high school to study at McGill. He stayed in Montreal for the next 10 years, finished his bachelor’s, worked at a flight simulator company, and then eventually obtained his master’s and PhD at McGill, focusing on Reinforcement Learning. After his PhD Pablo did a 10-month postdoc in Paris before moving to Pittsburgh to join Google. He has worked at Google for almost 6 years, and is currently a research Software Engineer in Google Brain in Montreal, focusing on fundamental Reinforcement Learning research, as well as Machine Learning and Music. Aside from his interest in coding/AI/math, Pablo is an active musician (https://www.psctrio.com), loves running (5 marathons so far, including Boston!), and discussing politics and activism.

Philippe Beaudoin (Element AI)

Concrete AI-for-Good initiatives at Element AI
Philippe cofounded Element AI in 2016 and currently leads its applied lab and AI-for-Good initiatives. His team has helped tackle some of the biggest and most interesting business challenges using machine learning. Philippe holds a Ph.D in Computer Science and taught virtual bipeds to walk by themselves during his postdoc at UBC. He spent five years at Google as a Senior Developer and Technical Lead Manager, partly with the Chrome Machine Learning team. Philippe also founded ArcBees, specializing in cloud-based development. Prior to that he worked in the videogame and graphics hardware industries. When he has some free time, Philippe likes to invent new boardgames — the kind of games where he can still beat the AI!

Doina Precup (Deepmind)

Challenges and opportunities for the AI revolution in health care
Doina Precup splits her time between McGill University, where she co-directs the Reasoning and Learning Lab in the School of Computer Science, and DeepMind Montreal, where she leads the newly formed research team since October 2017.  She got her BSc degree in computer science from the Technical University Cluj-Napoca, Romania, and her MSc and PhD degrees from the University of Massachusetts-Amherst, where she was a Fulbright fellow. Her research interests are in the areas of reinforcement learning, deep learning, time series analysis, and diverse applications of machine learning in health care, automated control and other fields. She became a senior member of AAAI in 2015, a Canada Research Chair in Machine Learning in 2016 and a Senior Fellow of CIFAR in 2017.

Interesting, oui? Not a single expert from Ottawa or Toronto. Well, Element AI has an office in Toronto. Still, I wonder why this singular focus on AI in Montréal. After all, one of the current darlings of AI, machine learning, was developed at the University of Toronto, which houses the Canadian Institute for Advanced Research (CIFAR), the institution in charge of the Pan-Canadian Artificial Intelligence Strategy and the Vector Institute (more about that in my March 31, 2017 posting).

Enough with my musing: For those of us on the West Coast, there’s an opportunity to attend via livestream from 4 pm to 7 pm on April 25, 2018 on xxxxxxxxx. *** April 25, 2018: I received corrective information about the link for the livestream: https://youtu.be/jG84BIno5J4 and clarification as the relationship between Ingenium and the Canada Science and Technology Museum from someone at Ingenium.***

For more about Element AI, go here; for more about DeepMind, go here for information about the parent company in the UK and the most I dug up about their Montréal office was this job posting; and, finally, Reimagine.AI is here.

Putting science back into pop culture and selling books

Clifford V. Johnson is very good at promoting books. I tip my hat to him; that’s an excellent talent to have, especially when you’ve written a book, in his case, it’s a graphic novel titled ‘The Dialogues: Conversations about the Nature of the Universe‘.

I first stumbled across professor (University of Southern California) and physicist Johnson and his work in this January 18, 2018 news item on phys.org,

How often do you, outside the requirements of an assignment, ponder things like the workings of a distant star, the innards of your phone camera, or the number and layout of petals on a flower? Maybe a little bit, maybe never. Too often, people regard science as sitting outside the general culture: A specialized, difficult topic carried out by somewhat strange people with arcane talents. It’s somehow not for them.

But really science is part of the wonderful tapestry of human culture, intertwined with things like art, music, theater, film and even religion. These elements of our culture help us understand and celebrate our place in the universe, navigate it and be in dialogue with it and each other. Everyone should be able to engage freely in whichever parts of the general culture they choose, from going to a show or humming a tune to talking about a new movie over dinner.

Science, though, gets portrayed as opposite to art, intuition and mystery, as though knowing in detail how that flower works somehow undermines its beauty. As a practicing physicist, I disagree. Science can enhance our appreciation of the world around us. It should be part of our general culture, accessible to all. Those “special talents” required in order to engage with and even contribute to science are present in all of us.

Here’s more from his January 18, 2018 essay on The Conversation (which was the origin for the news item), Note: Links have been removed,

… in addition to being a professor, I work as a science advisor for various forms of entertainment, from blockbuster movies like the recent “Thor: Ragnarok,” or last spring’s 10-hour TV dramatization of the life and work of Albert Einstein (“Genius,” on National Geographic), to the bestselling novel “Dark Matter,” by Blake Crouch. People spend a lot of time consuming entertainment simply because they love stories like these, so it makes sense to put some science in there.

Science can actually help make storytelling more entertaining, engaging and fun – as I explain to entertainment professionals every chance I get. From their perspective, they get potentially bigger audiences. But good stories, enhanced by science, also spark valuable conversations about the subject that continue beyond the movie theater.
Science can be one of the topics woven into the entertainment we consume – via stories, settings and characters. ABC Television

Nonprofit organizations have been working hard on this mission. The Alfred P. Sloan Foundation helps fund and develop films with science content – “The Man Who Knew Infinity” (2015) and “Robot & Frank” (2012) are two examples. (The Sloan Foundation is also a funding partner of The Conversation US.)

The National Academy of Sciences set up the Science & Entertainment Exchange to help connect people from the entertainment industry to scientists. The idea is that such experts can provide Hollywood with engaging details and help with more accurate portrayals of scientists that can enhance the narratives they tell. Many of the popular Marvel movies – including “Thor” (2011), “Ant-Man” (2015) and the upcoming “Avengers: Infinity War” – have had their content strengthened in this way.

Encouragingly, a recent Pew Research Center survey in the U.S. showed that entertainment with science or related content is watched by people across “all demographic, educational and political groups,” and that overall they report positive impressions of the science ideas and scenarios contained in them.

Many years ago I realized it is hard to find books on the nonfiction science shelf that let readers see themselves as part of the conversation about science. So I envisioned an entire book of conversations about science taking place between ordinary people. While “eavesdropping” on those conversations, readers learn some science ideas, and are implicitly invited to have conversations of their own. It’s a resurrection of the dialogue form, known to the ancient Greeks, and to Galileo, as a device for exchanging ideas, but with contemporary settings: cafes, restaurants, trains and so on.

Clifford Johnson at his drafting table. Clifford V. Johnson, CC BY-ND

So over six years I taught myself the requisite artistic and other production techniques, and studied the language and craft of graphic narratives. I wrote and drew “The Dialogues: Conversations About the Nature of the Universe” as proof of concept: A new kind of nonfiction science book that can inspire more people to engage in their own conversations about science, and celebrate a spirit of plurality in everyday science participation.

I so enjoyed Johnson’s writing and appreciated how he introduced his book into the piece that I searched for more and found a three-part interview with Henry Jenkins on his Confessions of an Aca-Fan (Academic-Fan) blog. Before moving onto the interview, here’s some information about the interviewer, Henry Jenkins, (Note: Links have been removed),

Henry Jenkins is the Provost Professor of Communication, Journalism, Cinematic Arts and Education at the University of Southern California. He arrived at USC in Fall 2009 after spending more than a decade as the Director of the MIT Comparative Media Studies Program and the Peter de Florez Professor of Humanities. He is the author and/or editor of seventeen books on various aspects of media and popular culture, including Textual Poachers: Television Fans and Participatory Culture, Hop on Pop: The Politics and Pleasures of Popular Culture,  From Barbie to Mortal Kombat: Gender and Computer Games, Convergence Culture: Where Old and New Media Collide, Spreadable Media: Creating Meaning and Value in a Networked Culture, and By Any Media Necessary: The New Youth Activism. He is currently editing a handbook on the civic imagination and writing a book on “comics and stuff”. He has written for Technology Review, Computer Games, Salon, and The Huffington Post.

Jenkins is the principal investigator for The Civic Imagination Project, funded by the MacArthur Foundation, to explore ways to inspire creative collaborations within communities as they work together to identify shared values and visions for the future. This project grew out of the Media, Activism, and Participatory Politics research group, also funded by MacArthur, which did case studies of innovative organizations that have been effective at getting young people involved in the political process. He is also the Chief Advisor to the Annenberg Innovation Lab. Jenkins also serves on the jury that selects the Peabody Awards, which recognizes “stories that matter” from radio, television, and the web.

He has previously worked as the principal investigator for  Project New Media Literacies (NML), a group which originated as part of the MacArthur Digital Media and Learning Initiative. Jenkins wrote a white paper on learning in a participatory culture that has become the springboard for the group’s efforts to develop and test educational materials focused on preparing students for engagement with the new media landscape. He also was the founder for the Convergence Culture Consortium, a faculty network which seeks to build bridges between academic researchers and the media industry in order to help inform the rethinking of consumer relations in an age of participatory culture.  The Consortium lives on today via the Transforming Hollywood conference, run jointly between USC and UCLA, which recently hosted its 8th event.  

While at MIT, he was one of the principal investigators for The Education Arcade, a consortium of educators and business leaders working to promote the educational use of computer and video games. Jenkins also plays a significant role as a public advocate for fans, gamers and bloggers: testifying before the U.S. Senate Commerce Committee investigation into “Marketing Violence to Youth” following the Columbine shootings; advocating for media literacy education before the Federal Communications Commission; calling for a more consumer-oriented approach to intellectual property at a closed door meeting of the governing body of the World Economic Forum; signing amicus briefs in opposition to games censorship;  regularly speaking to the press and other media about aspects of media change and popular culture; and most recently, serving as an expert witness in the legal struggle over the fan-made film, Prelude to Axanar.  He also has served as a consultant on the Amazon children’s series Lost in Oz, where he provided insights on world-building and transmedia strategies as well as new media literacy issues.

Jenkins has a B.A. in Political Science and Journalism from Georgia State University, a M.A. in Communication Studies from the University of Iowa and a PhD in Communication Arts from the University of Wisconsin-Madison.

Well, that didn’t seem so simple after all. For a somewhat more personal account of who I am, read on.

About Me

The first thing you are going to discover about me, oh reader of this blog, is that I am prolific as hell. The second is that I am also long-winded as all get out. As someone famous once said, “I would have written it shorter, but I didn’t have enough time.”

My earliest work centered on television fans – particularly science fiction fans. Part of what drew me into graduate school in media studies was a fascination with popular culture. I grew up reading Mad magazine and Famous Monsters of Filmland – and, much as my parents feared, it warped me for life. Early on, I discovered the joys of comic books and science fiction, spent time playing around with monster makeup, started writing scripts for my own Super 8 movies (The big problem was that I didn’t have access to a camera until much later), and collecting television-themed toys. By the time I went to college, I was regularly attending science fiction conventions. Through the woman who would become my wife, I discovered fan fiction. And we spent a great deal of time debating our very different ways of reading our favorite television series.

When I got to graduate school, I was struck by how impoverished the academic framework for thinking about media spectatorship was – basically, though everyone framed it differently, consumers were assumed to be passive, brainless, inarticulate, and brainwashed. None of this jelled well with my own robust experience of being a fan of popular culture. I was lucky enough to get to study under John Fiske, first at Iowa and then at the University of Wisconsin-Madison, who introduced me to the cultural studies perspective. Fiske was a key advocate of ethnographic audience research, arguing that media consumers had more tricks up their sleeves than most academic theory acknowledged.

Out of this tension between academic theory and fan experience emerged first an essay, “Star Trek Reread, Rerun, Rewritten” and then a book, Textual Poachers: Television Fans and Participatory Culture. Textual Poachers emerged at a moment when fans were still largely marginal to the way mass media was produced and consumed, and still hidden from the view of most “average consumers.” As such, the book represented a radically different way of thinking about how one might live in relation to media texts. In the book, I describe fans as “rogue readers.” What most people took from that book was my concept of “poaching,” the idea that fans construct their own culture – fan fiction, artwork, costumes, music and videos – from content appropriated from mass media, reshaping it to serve their own needs and interests. There are two other key concepts in this early work which takes on greater significance in my work today – the idea of participatory culture (which runs throughout Convergence Culture) and the idea of a moral economy (that is, the presumed ethical norms which govern the relations between media producers and consumers).

As for the interview, here’s Jenkins’ introduction to the series and a portion of part one (from Comics and Popular Science: An Interview with Clifford V. Johnson (Part One) posted on November 15, 2017),


Clifford V. Johnson is the first theoretical physicist who I have ever interviewed for my blog. Given the sharp divide that our society constructs between the sciences and the humanities, he may well be the last, but he would be the first to see this gap as tragic, a consequence of the current configuration of disciplines. Johnson, as I have discovered, is deeply committed to helping us recognize the role that science plays in everyday life, a project he pursues actively through his involvement as one of the leaders of the Los Angeles Institute for the Humanities (of which I am also a member), as a consultant on various film and television projects, and now, as the author of a graphic novel, The Dialogues, which is being released this week. We were both on a panel about contemporary graphic storytelling Tara McPherson organized for the USC Sydney Harmon Institute for Polymathic Study and we’ve continued to bat around ideas about the pedagogical potential of comics ever since.

Here’s what I wrote when I was asked to provide a blurb for his new book:

“Two superheroes walk into a natural history museum — what happens after that will have you thinking and talking for a long time to come. Clifford V. Johnson’s The Dialogues joins a select few examples of recent texts, such as Scott McCloud’s Understanding Comics, Larry Gonick’s Cartoon History of the Universe, Nick Sousanis’s Unflattening, Bryan Talbot’s Alice in Sunderland, or Joe Sacco’s Palestine, which use the affordances of graphic storytelling as pedagogical tools for changing the ways we think about the world around us. Johnson displays a solid grasp of the craft of comics, demonstrating how this medium can be used to represent different understandings of the relationship between time and space, questions central to his native field of physics. He takes advantage of the observational qualities of contemporary graphic novels to explore the place of scientific thinking in our everyday lives.”

To my many readers who care about sequential art, this is a book which should be added to your collection — Johnson makes good comics, smart comics, beautiful comics, and comics which are doing important work, all at the same time. What more do you want!

In the interviews that follow, we explore more fully what motivated this particular comic and how approaching comics as a theoretical physicist has helped him to discover some interesting formal aspects of this medium.

What do you want your readers to learn about science over the course of these exchanges? I am struck by the ways you seek to demystify aspects of the scientific process, including the role of theory, equations, and experimentation.


That participatory aspect is core, for sure. Conversations about science by random people out there in the world really do happen – I hear them a lot on the subway, or in cafes, and so I wanted to highlight those and celebrate them. So the book becomes a bit of an invitation to everyone to join in. But then I can show so many other things that typically just get left out of books about science: The ordinariness of the settings in which such conversations can take place, the variety of types of people involved, and indeed the main tools, like equations and technical diagrams, that editors usually tell you to leave out for fear of scaring away the audience. …

I looked for book reviews and found two. The first is from Starburst Magazine, which, strangely, lists neither date nor author (from the review),

The Dialogues is a series of nine conversations about science told in graphic novel format; the conversationalists are men, women, children, and amateur science buffs who all have something to say about the nature of the universe. Their discussions range from multiverse and string theory to immortality, black holes, and how it’s possible to put just a cup of rice in the pan but end up with a ton more after Mom cooks it. Johnson (who also illustrated the book) believes the graphic form is especially suited for physics because “one drawing can show what it would take many words to explain” and it’s hard to argue with his noble intentions, but despite some undoubtedly thoughtful content The Dialogues doesn’t really work. Why not? Because, even with its plethora of brightly-coloured pictures, it’s still 200+ pages of talking heads. The individual conversations might give us plenty to think about, but the absence of any genuine action (or even a sense of humour) still makes The Dialogues read like very pretty homework.

Adelmar Bultheel’s December 8, 2017 review for the European Mathematical Society acknowledges issues with the book while noting its strong points,

So what is the point of producing such a graphic novel if the reader is not properly instructed about anything? In my opinion, the true message can be found in the one or two pages of notes that follow each of the eleven conversations. If you are not into the subject that you were eavesdropping, you probably have heard words, concepts, theories, etc. that you did not understand, or you might just be curious about what exactly the two were discussing. Then you should look that up on the web, or if you want to do it properly, you should consult some literature. This is what these notes are providing: they are pointing to the proper books to consult. …

This is a most unusual book for this subject and the way this is approached is most surprising. Not only the contents is heavy stuff, it is also physically heavy to read. Some 250 pages on thick glossy paper makes it a quite heavy book to hold. You probably do not want to read this in bed or take it on a train, unless you have a table in front of you to put it on. Many subjects are mentioned, but not all are explained in detail. The reader should definitely be prepared to do some extra reading to understand things better. Since most references concern other popularising books on the subject, it may require quite a lot of extra reading. But all this hard science is happening in conversations by young enthusiastic people in casual locations and it is all wrapped up in beautiful graphics showing marvellous realistic decors.

I am fascinated by this book, which I have yet to read, but I did find a trailer for it (from thedialoguesbook.com),

Enjoy!