Category Archives: writing

How AI-designed fiction reading lists and self-publishing help nurture far-right and neo-Nazi novelists

Literary theorists Helen Young and Geoff M Boucher, both at Deakin University (Australia), have co-written a fascinating May 29, 2022 essay on The Conversation (republished on phys.org) analyzing some of the means (e.g., novels) by which neo-Nazi activity and far-right extremism are being nurtured, Note: Links have been removed,

Far-right extremists pose an increasing risk in Australia and around the world. In 2020, ASIO [Australian Security Intelligence Organisation] revealed that about 40% of its counter-terrorism work involved the far right.

The recent mass murder in Buffalo, U.S., and the attack in Christchurch, New Zealand, in 2019 are just two examples of many far-right extremist acts of terror.

Far-right extremists have complex and diverse methods for spreading their messages of hate. These can include through social media, video games, wellness culture, interest in medieval European history, and fiction [emphasis mine]. Novels by both extremist and non-extremist authors feature on far-right “reading lists” designed to draw people into their beliefs and normalize hate.

Here’s more about how the books get published and distributed, from the May 29, 2022 essay, Note: Links have been removed,

Publishing houses once refused to print such books, but changes in technology have made traditional publishers less important. With self-publishing and e-books, it is easy for extremists to produce and distribute their fiction.

In this article, we have only given the titles and authors of those books that are already notorious, to avoid publicizing other dangerous hate-filled fictions.

Why would far-right extremists write novels?

Reading fiction is different to reading non-fiction. Fiction offers readers imaginative scenarios that can seem to be truthful, even though they are not fact-based. It can encourage readers to empathize with the emotions, thoughts and ethics of characters, particularly when they recognize those characters as being “like” them.

A novel featuring characters who become radicalized to far-right extremism, or who undertake violent terrorist acts, can help make those things seem justified and normal.

Novels that promote political violence, such as The Turner Diaries, are also ways for extremists to share plans and give readers who hold extreme views ideas about how to commit terrorist acts. …

In the late 20th century, far-right extremists without Pierce’s notoriety [American neo-Nazi William L. Pierce published The Turner Diaries (1978)] found it impossible to get their books published. One complained about this on his blog in 1999, blaming feminists and Jewish people. Just a few years later, print-on-demand and digital self-publishing made it possible to circumvent this difficulty.

The same neo-Nazi self-published what he termed “a lifetime of writing” in the space of a few years in the early 2000s. The company he paid to produce his books—iUniverse.com—helped get them onto the sales lists of major booksellers Barnes and Noble and Amazon in the early 2000s, making a huge difference to how easily they circulated outside extremist circles.

It still produces print-on-demand hard copies, even though the author has died. The same author’s books also circulate in digital versions, including on Google Play and Kindle, making them easily accessible.

Distributing extremist novels digitally

Far-right extremists use social media to spread their beliefs, but other digital platforms are also useful for them.

Seemingly innocent sites that host a wide range of mainstream material, such as Google Books, Project Gutenberg, and the Internet Archive, are open to exploitation. Extremists use them to share, for example, material denying the Holocaust alongside historical Nazi newspapers.

Amazon’s Kindle self-publishing service has been called “a haven for white supremacists” because of how easy it is for them to circulate political tracts there. The far-right extremist who committed the Oslo terrorist attacks in 2011 recommended in his manifesto that his followers use Kindle to spread his message.

Our research has shown that novels by known far-right extremists have been published and circulated through Kindle as well as other digital self-publishing services.

AI and its algorithms also play a role, from the May 29, 2022 essay,

Radicalising recommendations

As we researched how novels by known violent extremists circulate, we noticed that the sales algorithms of mainstream platforms were suggesting others that we might also be interested in. Sales algorithms work by recommending items that customers who purchased one book have also viewed or bought.

Those recommendations directed us to an array of novels that, when we investigated them, proved to resonate with far-right ideologies.

A significant number of them were by authors with far-right political views. Some had ties to US militia movements and the gun-obsessed “prepper” subculture. Almost all of the books were self-published as e-books and print-on-demand editions.

Without the marketing and distribution channels of established publishing houses, these books rely on digital circulation for sales, including sale recommendation algorithms.

The trail of sales recommendations led us, with just two clicks, to the novels of mainstream authors. They also led us back again, from mainstream authors’ books to extremist novels. This is deeply troubling. It risks unsuspecting readers being introduced to the ideologies, world-views and sometimes powerful emotional narratives of far-right extremist novels designed to radicalise.
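For anyone wondering how those “customers also bought” trails get built, here’s a minimal sketch of item-to-item co-occurrence counting, the simplest version of the approach the researchers describe. The titles and purchase histories below are invented, and real platforms use far more elaborate models, but the underlying signal (books that appear in the same purchase histories get recommended together) is the same, which is why adjacency in purchase data is enough to build a trail.

```python
# Minimal sketch of an item-to-item "customers also bought" recommender.
# Purely illustrative: the titles and purchase histories are made up.
from collections import Counter, defaultdict
from itertools import combinations

# Each inner list is one (hypothetical) customer's purchase history.
purchases = [
    ["Thriller A", "Extremist Novel X"],
    ["Thriller A", "Thriller B"],
    ["Extremist Novel X", "Thriller B", "Thriller A"],
    ["Thriller B", "Dystopia C"],
]

# Count how often each pair of books appears in the same basket.
co_counts = defaultdict(Counter)
for basket in purchases:
    for a, b in combinations(set(basket), 2):
        co_counts[a][b] += 1
        co_counts[b][a] += 1

def recommend(title, n=3):
    """Return the n titles most often bought alongside `title`."""
    return [other for other, _ in co_counts[title].most_common(n)]

print(recommend("Thriller A"))
# With this toy data, the extremist title surfaces directly among the
# "also bought" suggestions for a mainstream thriller, which is the
# kind of adjacency the researchers describe.
```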

It’s not always easy to tell right away if you’re reading fiction promoting far-right ideologies, from the May 29, 2022 essay,

Recognising far-right messages

Some extremist novels follow the lead of The Turner Diaries and represent the start of a racist, openly genocidal war alongside a call to bring one about. Others are less obvious about their violent messages.

Some are not easily distinguished from mainstream novels – for example, from political thrillers and dystopian adventure stories like those of Tom Clancy or Matthew Reilly – so what is different about them? Openly neo-Nazi authors, like Pierce, often use racist, homophobic and misogynist slurs, but many do not. This may be to help make their books more palatable to general readers, or to avoid digital moderation based on specific words.

Knowing more about far-right extremism can help. Researchers generally say that there are three main things that connect the spectrum of far-right extremist politics: acceptance of social inequality, authoritarianism, and embracing violence as a tool for political change. Willingness to commit or endorse violence is a key factor separating extremism from other radical politics.

It is very unlikely that anyone would become radicalised to violent extremism just by reading novels. Novels can, however, reinforce political messages heard elsewhere (such as on social media) and help make those messages and acts of hate feel justified.

With the growing threat of far-right extremism and deliberate recruitment strategies of extremists targeting unexpected places, it is well worth being informed enough to recognise the hate-filled stories they tell.

I recommend reading the essay as my excerpts don’t do justice to the ideas being presented. As Young and Boucher note, it’s “… unlikely that anyone would become radicalised to violent extremism …” by reading novels, but far-right extremists and neo-Nazis write fiction because the tactic works at some level.

AI (artificial intelligence) and art ethics: a debate + a Botto (AI artist) October 2022 exhibition in the UK

Who is an artist? What is an artist? Can everyone be an artist? These are the kinds of questions you can expect with the rise of artificially intelligent artists/collaborators. Of course, these same questions have been asked many times before the rise of AI (artificial intelligence) agents/programs in the field of visual art. Each time the questions are raised is an opportunity to examine our beliefs from a different perspective. And, not to be forgotten, there are questions about money.

The shock

First, the ‘art’,

The winning work. Colorado State Fair 2022. Screengrab from Discord [downloaded from https://www.artnews.com/art-news/news/colorado-state-fair-ai-generated-artwork-controversy-1234638022/]

Shanti Escalante-De Mattei’s September 1, 2022 article for ArtNews.com provides an overview of the latest AI art controversy (Note: A link has been removed),

The debate around AI art went viral once again when a man won first place at the Colorado State Fair’s art competition in the digital category with a work he made using text-to-image AI generator Midjourney.

Twitter user and digital artist Genel Jumalon tweeted out a screenshot from a Discord channel in which user Sincarnate, aka game designer Jason Allen, celebrated his win at the fair. Jumalon wrote, “Someone entered an art competition with an AI-generated piece and won the first prize. Yeah that’s pretty fucking shitty.”

The comments on the post range from despair and anger as artists, both digital and traditional, worry that their livelihoods might be at stake after years of believing that creative work would be safe from AI-driven automation. [emphasis mine]

Rachel Metz’s September 3, 2022 article for CNN provides more details about how the work was generated (Note: Links have been removed),

Jason M. Allen was almost too nervous to enter his first art competition. Now, his award-winning image is sparking controversy about whether art can be generated by a computer, and what, exactly, it means to be an artist.

In August [2022], Allen, a game designer who lives in Pueblo West, Colorado, won first place in the emerging artist division’s “digital arts/digitally-manipulated photography” category at the Colorado State Fair Fine Arts Competition. His winning image, titled “Théâtre D’opéra Spatial” (French for “Space Opera Theater”), was made with Midjourney — an artificial intelligence system that can produce detailed images when fed written prompts. A $300 prize accompanied his win.

Allen’s winning image looks like a bright, surreal cross between a Renaissance and steampunk painting. It’s one of three such images he entered in the competition. In total, 11 people entered 18 pieces of art in the same category in the emerging artist division.

The definition for the category in which Allen competed states that digital art refers to works that use “digital technology as part of the creative or presentation process.” Allen stated that Midjourney was used to create his image when he entered the contest.

The newness of these tools, how they’re used to produce images, and, in some cases, the gatekeeping for access to some of the most powerful ones has led to debates about whether they can truly make art or assist humans in making art.

This came into sharp focus for Allen not long after his win. Allen had posted excitedly about his win on Midjourney’s Discord server on August 25 [2022], along with pictures of his three entries; it went viral on Twitter days later, with many artists angered by Allen’s win because of his use of AI to create the image, as a story by Vice’s Motherboard reported earlier this week.

“This sucks for the exact same reason we don’t let robots participate in the Olympics,” one Twitter user wrote.

“This is the literal definition of ‘pressed a few buttons to make a digital art piece’,” another Tweeted. “AI artwork is the ‘banana taped to the wall’ of the digital world now.”

Yet while Allen didn’t use a paintbrush to create his winning piece, there was plenty of work involved, he said.

“It’s not like you’re just smashing words together and winning competitions,” he said.

You can feed a phrase like “an oil painting of an angry strawberry” to Midjourney and receive several images from the AI system within seconds, but Allen’s process wasn’t that simple. To get the final three images he entered in the competition, he said, took more than 80 hours.

First, he said, he played around with phrasing that led Midjourney to generate images of women in frilly dresses and space helmets — he was trying to mash up Victorian-style costuming with space themes, he said. Over time, with many slight tweaks to his written prompt (such as to adjust lighting and color harmony), he created 900 iterations of what led to his final three images. He cleaned up those three images in Photoshop, such as by giving one of the female figures in his winning image a head with wavy, dark hair after Midjourney had rendered her headless. Then he ran the images through another software program called Gigapixel AI that can improve resolution and had the images printed on canvas at a local print shop.

Ars Technica has run a number of articles on the subject of art and AI; Benj Edwards, in an August 31, 2022 article, seems to have been one of the first to comment on Jason Allen’s win (Note 1: Links have been removed; Note 2: Look at how Edwards identifies Jason Allen as an artist),

A synthetic media artist named Jason Allen entered AI-generated artwork into the Colorado State Fair fine arts competition and announced last week that he won first place in the Digital Arts/Digitally Manipulated Photography category, Vice reported Wednesday [August 31, 2022?] based on a viral tweet.

Allen’s victory prompted lively discussions on Twitter, Reddit, and the Midjourney Discord server about the nature of art and what it means to be an artist. Some commenters think human artistry is doomed thanks to AI and that all artists are destined to be replaced by machines. Others think art will evolve and adapt with new technologies that come along, citing synthesizers in music. It’s a hot debate that Wired covered in July [2022].

It’s worth noting that the invention of the camera in the 1800s prompted similar criticism related to the medium of photography, since the camera seemingly did all the work compared to an artist that labored to craft an artwork by hand with a brush or pencil. Some feared that painters would forever become obsolete with the advent of color photography. In some applications, photography replaced more laborious illustration methods (such as engraving), but human fine art painters are still around today.

Benj Edwards in a September 12, 2022 article for Ars Technica examines how some art communities are responding (Note: Links have been removed),

Confronted with an overwhelming amount of artificial-intelligence-generated artwork flooding in, some online art communities have taken dramatic steps to ban or curb its presence on their sites, including Newgrounds, Inkblot Art, and Fur Affinity, according to Andy Baio of Waxy.org.

Baio, who has been following AI art ethics closely on his blog, first noticed the bans and reported about them on Friday [Sept. 9, 2022?]. …

The arrival of widely available image synthesis models such as Midjourney and Stable Diffusion has provoked an intense online battle between artists who view AI-assisted artwork as a form of theft (more on that below) and artists who enthusiastically embrace the new creative tools.

… a quickly evolving debate about how art communities (and art professionals) can adapt to software that can potentially produce unlimited works of beautiful art at a rate that no human working without the tools could match.

A few weeks ago, some artists began discovering their artwork in the Stable Diffusion data set, and they weren’t happy about it. Charlie Warzel wrote a detailed report about these reactions for The Atlantic last week [September 7, 2022]. With battle lines being drawn firmly in the sand and new AI creativity tools coming out steadily, this debate will likely continue for some time to come.

Filthy lucre becomes more prominent in the conversation

Lizzie O’Leary in a September 12, 2022 article for Fast Company presents a transcript of an interview (from the TBD podcast) she conducted with Drew Harwell (tech reporter covering A.I. for the Washington Post) about the ‘Jason Allen’ win,

I’m struck by how quickly these art A.I.s are advancing. DALL-E was released in January of last year and there were some pretty basic images. And then, a year later, DALL-E 2 is using complex, faster methods. Midjourney, the one Jason Allen used, has a feature that allows you to upscale and downscale images. Where is this sudden supply and demand for A.I. art coming from?

You could look back to five years ago when they had these text-to-image generators and the output would be really crude. You could sort of see what the A.I. was trying to get at, but we’ve only really been able to cross that photorealistic uncanny valley in the last year or so. And I think the things that have contributed to that are, one, better data. You’re seeing people invest a lot of money and brainpower and resources into adding more stuff into bigger data sets. We have whole groups that are taking every image they can get on the internet. Billions, billions of images from Pinterest and Amazon and Facebook. You have bigger data sets, so the A.I. is learning more. You also have better computing power, and those are the two ingredients to any good piece of A.I. So now you have A.I. that is not only trained to understand the world a little bit better, but it can now really quickly spit out a very finely detailed generated image.

Is there any way to know, when you look at a piece of A.I. art, what images it referenced to create what it’s doing? Or is it just so vast that you can’t kind of unspool it backward?

When you’re doing an image that’s totally generated out of nowhere, it’s taking bits of information from billions of images. It’s creating it in a much more sophisticated way so that it’s really hard to unspool.

Art generated by A.I. isn’t just a gee-whiz phenomenon, something that wins prizes, or even a fascinating subject for debate—it has valuable commercial uses, too. Some that are a little frightening if you’re, say, a graphic designer.

You’re already starting to see some of these images illustrating news articles, being used as logos for companies, being used in the form of stock art for small businesses and websites. Anything where somebody would’ve gone and paid an illustrator or graphic designer or artist to make something, they can now go to this A.I. and create something in a few seconds that is maybe not perfect, maybe would be beaten by a human in a head-to-head, but is good enough. From a commercial perspective, that’s scary, because we have an industry of people whose whole job is to create images, now running up against A.I.

And the A.I., again, in the last five years, the A.I. has gotten better and better. It’s still not perfect. I don’t think it’ll ever be perfect, whatever that looks like. It processes information in a different, maybe more literal, way than a human. I think human artists will still sort of have the upper hand in being able to imagine things a little more outside of the box. And yet, if you’re just looking for three people in a classroom or a pretty simple logo, you’re going to go to A.I. and you’re going to take potentially a job away from a freelancer whom you would’ve given it to 10 years ago.

I can see a use case here in marketing, in advertising. The A.I. doesn’t need health insurance, it doesn’t need paid vacation days, and I really do wonder about this idea that the A.I. could replace the jobs of visual artists. Do you think that is a legitimate fear, or is that overwrought at this moment?

I think it is a legitimate fear. When something can mirror your skill set, not 100 percent of the way, but enough of the way that it could replace you, that’s an issue. Do these A.I. creators have any kind of moral responsibility to not create it because it could put people out of jobs? I think that’s a debate, but I don’t think they see it that way. They see it like they’re just creating the new generation of digital camera, the new generation of Photoshop. But I think it is worth worrying about because even compared with cameras and Photoshop, the A.I. is a little bit more of the full package and it is so accessible and so hard to match in terms. It’s really going to be up to human artists to find some way to differentiate themselves from the A.I.

This is making me wonder about the humans underneath the data sets that the A.I. is trained on. The criticism is, of course, that these businesses are making money off thousands of artists’ work without their consent or knowledge and it undermines their work. Some people looked at the Stable Diffusion and they didn’t have access to its whole data set, but they found that Thomas Kinkade, the landscape painter, was the most referenced artist in the data set. Is the A.I. just piggybacking? And if it’s not Thomas Kinkade, if it’s someone who’s alive, are they piggybacking on that person’s work without that person getting paid?

Here’s a bit more on the topic of money and art in a September 19, 2022 article by John Herrman for New York Magazine. First, he starts with the literary arts, Note: Links have been removed,

Artificial-intelligence experts are excited about the progress of the past few years. You can tell! They’ve been telling reporters things like “Everything’s in bloom,” “Billions of lives will be affected,” and “I know a person when I talk to it — it doesn’t matter whether they have a brain made of meat in their head.”

We don’t have to take their word for it, though. Recently, AI-powered tools have been making themselves known directly to the public, flooding our social feeds with bizarre and shocking and often very funny machine-generated content. OpenAI’s GPT-3 took simple text prompts — to write a news article about AI or to imagine a rose ceremony from The Bachelor in Middle English — and produced convincing results.

Deepfakes graduated from a looming threat to something an enterprising teenager can put together for a TikTok, and chatbots are occasionally sending their creators into crisis.

More widespread, and probably most evocative of a creative artificial intelligence, is the new crop of image-creation tools, including DALL-E, Imagen, Craiyon, and Midjourney, which all do versions of the same thing. You ask them to render something. Then, with models trained on vast sets of images gathered from around the web and elsewhere, they try — “Bart Simpson in the style of Soviet statuary”; “goldendoodle megafauna in the streets of Chelsea”; “a spaghetti dinner in hell”; “a logo for a carpet-cleaning company, blue and red, round”; “the meaning of life.”

This flood of machine-generated media has already altered the discourse around AI for the better, probably, though it couldn’t have been much worse. In contrast with the glib intra-VC debate about avoiding human enslavement by a future superintelligence, discussions about image-generation technology have been driven by users and artists and focus on labor, intellectual property, AI bias, and the ethics of artistic borrowing and reproduction [emphasis mine]. Early controversies have cut to the chase: Is the guy who entered generated art into a fine-art contest in Colorado (and won!) an asshole? Artists and designers who already feel underappreciated or exploited in their industries — from concept artists in gaming and film and TV to freelance logo designers — are understandably concerned about automation. Some art communities and marketplaces have banned AI-generated images entirely.

Requests are effectively thrown into “a giant swirling whirlpool” of “10,000 graphics cards,” Holz [David Holz, Midjourney founder] said, after which users gradually watch them take shape, gaining sharpness but also changing form as Midjourney refines its work.

This hints at an externality beyond the worlds of art and design. “Almost all the money goes to paying for those machines,” Holz said. New users are given a small number of free image generations before they’re cut off and asked to pay; each request initiates a massive computational task, which means using a lot of electricity.

High compute costs [emphasis mine] — which are largely energy costs — are why other services have been cautious about adding new users. …

Another Midjourney user, Gila von Meissner, is a graphic designer and children’s-book author-illustrator from “the boondocks in north Germany.” Her agent is currently shopping around a book that combines generated images with her own art and characters. Like Pluckebaum [Brian Pluckebaum who works in automotive-semiconductor marketing and designs board games], she brought up the balance of power with publishers. “Picture books pay peanuts,” she said. “Most illustrators struggle financially.” Why not make the work easier and faster? “It’s my character, my edits on the AI backgrounds, my voice, and my story.” A process that took months now takes a week, she said. “Does that make it less original?”

User MoeHong, a graphic designer and typographer for the state of California, has been using Midjourney to make what he called generic illustrations (“backgrounds, people at work, kids at school, etc.”) for government websites, pamphlets, and literature: “I get some of the benefits of using custom art — not that we have a budget for commissions! — without the paying-an-artist part.” He said he has mostly replaced stock art, but he’s not entirely comfortable with the situation. “I have a number of friends who are commercial illustrators, and I’ve been very careful not to show them what I’ve made,” he said. He’s convinced that tools like this could eventually put people in his trade out of work. “But I’m already in my 50s,” he said, “and I hope I’ll be gone by the time that happens.”

Fan club

The last article I’m featuring here is a September 15, 2021 piece by Agnieszka Cichocka for DailyArt, which provides good, brief descriptions of algorithms, generative adversarial networks, machine learning, artificial neural networks, and more. She is an enthusiast (Note: Links have been removed),

I keep wondering if Leonardo da Vinci, who, in my opinion, was the most forward thinking artist of all time, would have ever imagined that art would one day be created by AI. He worked on numerous ideas and was constantly experimenting, and, although some were failures, he persistently tried new products, helping to move our world forward. Without such people, progress would not be possible. 

Machine Learning

As humans, we learn by acquiring knowledge through observations, senses, experiences, etc. This is similar to computers. Machine learning is a process in which a computer system learns how to perform a task better in two ways—either through exposure to environments that provide punishments and rewards (reinforcement learning) or by training with specific data sets (the system learns automatically and improves from previous experiences). Both methods help the systems improve their accuracy. Machines then use patterns and attempt to make an accurate analysis of things they have not seen before. To give an example, let’s say we feed the computer with thousands of photos of a dog. Consequently, it can learn what a dog looks like based on those. Later, even when faced with a picture it has never seen before, it can tell that the photo shows a dog.

If you want to see some creative machine learning experiments in art, check out ML x ART. This is a website with hundreds of artworks created using AI tools.
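Cichocka’s dog example is the classic supervised-learning recipe: show the system labelled examples, let it adjust itself until its guesses match the labels, then ask it about something it has never seen. Here’s a minimal sketch of that recipe in Python (PyTorch), with random tensors standing in for photos; it illustrates the idea only and isn’t the code behind any of the art tools discussed in this post.

```python
# Minimal sketch of the "show it thousands of dog photos" idea:
# supervised learning with labelled images. The images here are random
# tensors standing in for real photos; in practice you would load a
# labelled dataset (dog / not-dog).
import torch
from torch import nn

# Pretend dataset: 64 "photos" of 3x32x32 pixels, half labelled dog (1).
images = torch.randn(64, 3, 32, 32)
labels = torch.cat([torch.ones(32, dtype=torch.long),
                    torch.zeros(32, dtype=torch.long)])

# A tiny convolutional network, far smaller than anything used in practice.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(8 * 16 * 16, 2),   # two outputs: "dog" vs "not dog"
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# A few passes over the data: the model adjusts its weights so its
# guesses match the labels, which is the "learning" in machine learning.
for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()

# Later, a photo it has never seen before gets a dog / not-dog guess.
new_photo = torch.randn(1, 3, 32, 32)
print(model(new_photo).argmax(dim=1))  # 1 = dog, 0 = not dog
```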

Some thoughts

As the saying goes, “a picture is worth a thousand words,” and now it seems that pictures will be made from words, or so the example of Jason M. Allen feeding prompts to the AI system Midjourney suggests.

I suspect (as others have suggested) that, in the end, artists who use AI systems will be absorbed into the art world in much the same way as photographers, performance artists, conceptual artists, and video artists have been absorbed. There will be some displacements and discomfort as the questions I opened this posting with (Who is an artist? What is an artist? Can everyone be an artist?) are passionately discussed and considered. Underlying many of these questions is the issue of money.

The impact on people’s livelihoods is cheering or concerning depending on how the AI system is being used. Herrman’s September 19, 2022 article highlights two examples that focus on graphic designers: Gila von Meissner, an illustrator and designer who uses an AI system (with her own art and characters) to illustrate her children’s books faster and more cost-effectively, and MoeHong, a graphic designer for the state of California, who uses an AI system to make ‘customized generic art’ for which the state government doesn’t have to pay.

So far, the focus has been on Midjourney and other AI agents that have been created by developers for use by visual artists and writers. What happens when the visual artist or the writer is the developer? A September 12, 2022 article by Brandon Scott Roye for Cool Hunting approaches the question (Note: Links have been removed),

Mario Klingemann and Sasha Stiles on Semi-Autonomous AI Artists

An artist and engineer at the forefront of generating AI artwork, Mario Klingemann and first-generation Kalmyk-American poet, artist and researcher Sasha Stiles both approach AI from a more human, personal angle. Creators of semi-autonomous systems, both Klingemann and Stiles are the minds behind Botto and Technelegy, respectively. They are both artists in their own right, but their creations are too. Within web3, the identity of the “artist” who creates with visuals and the “writer” who creates with words is enjoying a foundational shift and expansion. Many have fashioned themselves a new title as “engineer.”

Based on their primary identities as an artist and poet, Klingemann and Stiles face the conundrum of becoming engineers who design the tools, rather than artists responsible for the final piece. They now have the ability to remove themselves from influencing inputs and outputs.

If you have time, I suggest reading Roye’s September 12, 2022 article as it provides some very interesting ideas although I don’t necessarily agree with them, e.g., “They now have the ability to remove themselves from influencing inputs and outputs.” Anyone who’s following the ethics discussion around AI knows that biases are built into the algorithms whether we like it or not. As for artists and writers calling themselves ‘engineers’, they may get a little resistance from the engineering community.

As users of open source software, Klingemann and Stiles should not have to worry too much about intellectual property. However, it seems copyright for the actual works and patents for the software could raise some interesting issues especially since money is involved.

In a March 10, 2022 article by Shraddha Nair for Stir World, Klingemann claims to have made over $1M from auctions of Botto’s artworks. It’s not clear to me where Botto obtains its library of images for future use (which may signal a potential problem); Stiles’ Technelegy creates poems from prompts using its library of her poems. (For the curious, I have an August 30, 2022 post “Should AI algorithms get patents for their inventions and is anyone talking about copyright for texts written by AI algorithms?” which explores some of the issues around patents.)

Who gets the patent and/or the copyright? Assuming you and I are employing machine learning to train our AI agents separately, could there be an argument that if my version of the AI is different than yours and proves more popular with other content creators/ artists that I should own/share the patent to the software and rights to whatever the software produces?

Getting back to Herrman’s comment about high compute costs and energy, we seem to have an insatiable appetite for energy and that is not only a high cost financially but also environmentally.

Botto exhibition

Here’s more about the exhibition of work by Klingemann’s AI artist, Botto (from an October 6, 2022 announcement received via email),

Mario Klingemann is a pioneering figurehead in the field of AI art, working deep in the field of Machine Learning. Governed by a community of 5,000 people, Klingemann developed Botto around an idea of creating an autonomous entity that is able to be creative and co-creative. Inspired by Goethe’s artificial man in Faust, Botto is a genderless AI entity that is guided by an international community and art historical trends. Botto creates 350 art pieces per week that are presented to its community. Members of the community give feedback on these art fragments by voting, expressing their individual preferences on what is aesthetically pleasing to them. Then collectively the votes are used as feedback for Botto’s generative algorithm, dictating what direction Botto should take in its next series of art pieces.

The creative capacity of its algorithm is far beyond the capacities of an individual to combine and find relationships within all the information available to the AI. Botto faces similar issues as a human artist, and it is programmed to self-reflect and ask, “I’ve created this type of work before. What can I show them that’s different this week?”

Once a week, Botto auctions the art fragment with the most votes on SuperRare. All proceeds from the auction go back to the community. The AI artist auctioned its first three pieces, Asymmetrical Liberation, Scene Precede, and Trickery Contagion, for more than $900,000, the most successful AI artist premiere. Today, Botto has produced upwards of 22 artworks and current sales have generated over $2 million in total [emphasis mine].

Botto went from over $1M in sales in March 2022 to over $2M by October 2022. It seems Botto is a very financially successful artist.

Botto: A Whole Year of Co-Creation

This exhibition (October 26 – 30, 2022) is being held in London, England at this location:

The Department Store, Brixton 248 Ferndale Road London SW9 8FR United Kingdom

Enjoy!

When poetry feels like colour, posture or birdsong plus some particle fiction

A June 10, 2022 Tallinn University (Estonia) press release (also on EurekAlert but published on June 16, 2022 on behalf of the Estonian Research Council) provides information on a fascinating PhD thesis examining poetry in a very new way,

In addition to searching for the meaning of poems, they can also often be described through the emotions that the reader feels while reading them. Kristiine Kikas, a doctoral student at the School of Humanities of Tallinn University, studied which other sensations arise whilst reading poetry and how they affect the understanding of poems.

The aim of the doctoral thesis was to study the palpability of language [emphasis mine], i.e. sensory saturation, which has not found sufficient analysis and application so far. “In my research, I see reading as an impersonal process, meaning the sensations that arise do not seem to belong to either the reader or the poetry, but to both at the same time,” Kikas describes the perspective of her thesis.

In general, the language of poetry is studied metaphorically, in order to try to understand what a word means either directly or figuratively. A different perspective called “affective perspective” usually studies the effects of pre-linguistic impulses or impulses not related to the meaning of the word on the reader. However, Kikas viewed language as a simultaneous proposition and flow of consciousness, i.e. a discussion moving from one statement to another as well as connections that seem to occur intuitively while reading. She sought to identify ways to approach verbal language, that is considered to trigger analytical thinking in particular, in a way that would help open up sensory saturation and put their observation in poetic analysis at the forefront along with other modes of studying poetry. To achieve her goals, Kikas applied Gilles Deleuze’s method of radical empiricism and compared several other approaches with it: semiotics, biology, anthropology, modern psychoanalysis and cognitive sciences. [emphases mine]

Kikas describes reading in her doctoral thesis as a constant presence in verbal language, which is sometimes more and sometimes less pronounced. This type of presence can be felt like colour, posture or birdsong [emphasis mine]. “Following the neuroscientific origins of metaphors, I used the human organism’s tendency to perceive language at the sensory-motor level in my close reading to help replay it using body memory. This trait allows us to physically experience the words we read,” explains Kikas. According to her, the sensations stored in the body evoked by words can be considered the oneness of the reader and the words, or the reader’s becoming the words. Kikas emphasises that this can only happen if the multiplicity of sensations and meanings that arise during reading are recognised.

“Although the study showed that the saturations associated with verbal language cannot be linked to a broader literary discourse without representational and analytical thinking, the conclusion is that noticing and acknowledging them is important in both experiencing and interpreting the poem,” summarises Kikas her doctoral thesis. As her research was only the first attempt in examining sensations in poetry, Kikas hopes to provide material for further discussion. Above all, she encourages readers in their attempts to understand poetry to notice and trust even the slightest sensations and impulses triggered while reading, as these are the beginning of even the most abstract meaning.

I was able to track down the thesis ‘Uncommonness in the Commonplace: Reading for Senseation in Poetry’ to here, where the title is in English but the rest of the entry is in Estonian. Unfortunately, it’s not possible to download the thesis, which I believe is written in English.

Particle fiction

This is a somewhat older thesis and is only loosely related in that it is about literary matters and there’s a science aspect to it too. Tania Hershman, “poet, writer, teacher and editor based in Manchester, UK,” adds this from the about page on her eponymous website, Note: I have moved the paragraphs into a different order,

… After making a living for 13 years as a science journalist, writing for publications such as WIRED and NewScientist, I gave it all up to write fiction, later also poetry and hybrid pieces, and am now based in Manchester in the north of England. I have a first degree in Maths and Physics, a diploma in journalism, an MSc in Philosophy of Science, an MA and a PhD in Creative Writing.

My hybrid book, And What If We Were All Allowed to Disappear, was published in a limited edition by Guillemot Press in March 2020. It is now sold out but can be read in electronic form as part of my PhD in Creative Writing, ‘Particle fictions: an experimental approach to creative writing and reading informed by particle physics’, available to be downloaded from Bath Spa University here: http://researchspace.bathspa.ac.uk/10693/.

You can download her PhD thesis (Particle fictions: an experimental approach to creative writing and reading informed by particle physics). This abstract offers a few highlights,

This two-part document comprises the work submitted for Tania Hershman’s practice-based PhD in Creative Writing in answer to her primary research question: Can particle fiction and particle physics interrogate each other? Her secondary research question examined the larger question of wholeness and wholes versus parts. The first of the two elements of the PhD is a book-length creative work of what Hershman has defined as “particle fiction” – a book made of parts which works as a whole – entitled ‘And What If We Were All Allowed to Disappear’: an experimental, hybrid work comprised of prose, poetry, elements that morph between the two forms, and images, and takes concepts from particle physics as inspiration. The second element of this PhD, the contextualising research, entitled ‘And What If We Were All Allowed To Separate And Come Together’, which is written in the style of fictocriticism, provides an overview of particle physics and the many other topics relating to wholeness and wholes versus parts – from philosophy to postmodernism and archaeology – that Hershman investigated in the course of her project. This essay also details the “experiments” Hershman carried out on works which she defined as particle fictions, in order to examine whether it was possible to generalise and formulate a “Standard Model of Particle Fiction” inspired by the Standard Model of Particle Physics, and to inform the creation of her own work of particle fiction.

Enjoy!

Beer and wine reviews, the American Chemical Society’s (ACS) AI editors, and the Turing Test

The Turing test, first known as the ‘Imitation Game’, was designed by scientist Alan Turing in 1950 to see if a machine’s behaviour (in this case, a ‘conversation’) could fool someone into believing it was human. It’s a basic test to help determine true artificial intelligence.

These days ‘artificial intelligence’ seems to be everywhere, although I’m not sure that all these algorithms would pass the Turing test. Some of the latest material I’ve seen suggests that writers and editors may have to rethink their roles in future. Let’s start with the beer and wine reviews.

Writing

An April 25, 2022 Dartmouth College news release by David Hirsch announces the AI reviewer, Note: Links have been removed,

In mid-2020, the computer science team of Keith Carlson, Allen Riddell and Dan Rockmore was stuck on a problem. It wasn’t a technical challenge. The computer code they had developed to write product reviews was working beautifully. But they were struggling with a practical question.

“Getting the code to write reviews was only the first part of the challenge,” says Carlson, Guarini ’21, a doctoral research fellow at the Tuck School of Business, “The remaining challenge was figuring out how and where it could be used.”

The original study took on two challenges: to design code that could write original, human-quality product reviews using a small set of product features and to see if the algorithm could be adapted to write “synthesis reviews” for products from a large number of existing reviews.

Review writing can be challenging because of the overwhelming number of products available. The team wanted to see if artificial intelligence was up to the task of writing opinionated text about vast product classes.

They focused on wine and beer reviews because of the extensive availability of material to train the algorithm. The relatively narrow vocabularies used to describe the products also make them open to the techniques of AI systems and natural language processing tools.

The project was kickstarted by Riddell, a former fellow at the Neukom Institute for Computational Science, and developed with Carlson under the guidance of Rockmore, the William H. Neukom 1964 Distinguished Professor of Computational Science.

The code couldn’t taste the products, but it did ingest reams of written material. After training the algorithm on hundreds of thousands of published wine and beer reviews, the team found that the code could complete both tasks.

One result read: “This is a sound Cabernet. It’s very dry and a little thin in blackberry fruit, which accentuates the acidity and tannins. Drink up.”

Another read: “Pretty dark for a rosé, and full-bodied, with cherry, raspberry, vanilla and spice flavors. It’s dry with good acidity.”

“But now what?” Carlson explains as a question that often gnaws at scientists. The team wondered, “Who else would care?”

“I didn’t want to quit there,” says Rockmore. “I was sure that this work could be interesting to a wider audience.”

Sensing that the paper could have relevance in marketing, the team walked the study to Tuck Drive to see what others would think.

“Brilliant,” Praveen Kopalle, the Signal Companies’ Professor of Management at Tuck School of Business, recalls thinking when first reviewing the technical study.

Kopalle knew that the research was important. It could even “disrupt” the online review industry, a huge marketplace of goods and services.

“The paper has a lot of marketing applications, particularly in the context of online reviews where we can create reviews or descriptions of products when they may not already exist,” adds Kopalle. “In fact, we can even think about summarizing reviews for products and services as well.”

With the addition of Prasad Vana, assistant professor of business administration at Tuck, the team was complete. Vana reframed the technical feat of creating review-writing code into that of a market-friendly tool that can assist consumers, marketers, and professional reviewers.

“This is a sound Cabernet. It’s very dry and a little thin in blackberry fruit, which accentuates the acidity and tannins. Drink up.” Attribution: Artificial Intelligence review from Dartmouth project

The resulting research, published in International Journal of Research in Marketing, surveyed independent participants to confirm that the AI system wrote human-like reviews in both challenges.

“Using artificial intelligence to write and synthesize reviews can create efficiencies on both sides of the marketplace,” said Vana. “The hope is that AI can benefit reviewers facing larger writing workloads and consumers who have to sort through so much content about products.”

The paper also dwells on the ethical concerns raised by computer-generated content. It notes that marketers could get better acceptance by falsely attributing the reviews to humans. To address this, the team advocates for transparency when computer-generated text is used.

They also address the issue of computers taking human jobs. Code should not replace professional product reviewers, the team insists in the paper. The technology is meant to make the tasks of producing and reading the material more efficient. [emphasis mine]

“It’s interesting to imagine how this could benefit restaurants that cannot afford sommeliers or independent sellers on online platforms who may sell hundreds of products,” says Vana.

According to Carlson, the paper’s first author, the project demonstrates the potential of AI, the power of innovative thinking, and the promise of cross-campus collaboration.

“It was wonderful to work with colleagues with different expertise to take a theoretical idea and bring it closer to the marketplace,” says Carlson. “Together we showed how our work could change marketing and how people could use it. That could only happen with collaboration.”

A revised April 29, 2022 version was published on EurekAlert and some of the differences are interesting (to me, if no one else). As you’ll see, there’s a less ‘friendly’ style, and the ‘jobs’ issue has been approached differently. Note: Links have been removed,

Artificial intelligence systems can be trained to write human-like product reviews that assist consumers, marketers and professional reviewers, according to a study from Dartmouth College, Dartmouth’s Tuck School of Business, and Indiana University.

The research, published in the International Journal of Research in Marketing, also identifies ethical challenges raised by the use of the computer-generated content.

“Review writing is challenging for humans and computers, in part, because of the overwhelming number of distinct products,” said Keith Carlson, a doctoral research fellow at the Tuck School of Business. “We wanted to see how artificial intelligence can be used to help people that produce and use these reviews.”

For the research, the Dartmouth team set two challenges. The first was to determine whether a machine can be taught to write original, human-quality reviews using only a small number of product features after being trained on a set of existing content. Secondly, the team set out to see if machine learning algorithms can be used to write syntheses of reviews of products for which many reviews already exist.

“Using artificial intelligence to write and synthesize reviews can create efficiencies on both sides of the marketplace,” said Prasad Vana, assistant professor of business administration at Tuck School of Business. “The hope is that AI can benefit reviewers facing larger writing workloads and consumers that have to sort through so much content about products.”

The researchers focused on wine and beer reviews because of the extensive availability of material to train the computer algorithms. Write-ups of these products also feature relatively focused vocabularies, an advantage when working with AI systems.

To determine whether a machine could write useful reviews from scratch, the researchers trained an algorithm on about 180,000 existing wine reviews. Metadata tags for factors such as product origin, grape variety, rating, and price were also used to train the machine-learning system.

When comparing the machine-generated reviews against human reviews for the same wines, the research team found agreement between the two versions. The results remained consistent even as the team challenged the algorithms by changing the amount of input data that was available for reference.

The machine-written material was then assessed by non-expert study participants to test if they could determine whether the reviews were written by humans or a machine. According to the research paper, the participants were unable to distinguish between the human and AI-generated reviews with any statistical significance. Furthermore, their intent to purchase a wine was similar across human versus machine generated reviews of the wine. 

Having found that artificial intelligence can write credible wine reviews, the research team turned to beer reviews to determine the effectiveness of using AI to write “review syntheses.” Rather than being trained to write new reviews, the algorithm was tasked with aggregating elements from existing reviews of the same product. This tested AI’s ability to identify and provide limited but relevant information about products based on a large volume of varying opinions.

“Writing an original review tests the computer’s expressive ability based on a relatively narrow set of data. Writing a synthesis review is a related but distinct task where the system is expected to produce a review that captures some of the key ideas present in an existing set of reviews for a product,” said Carlson, who conducted the research while a PhD candidate in computer science at Dartmouth.

To test the algorithm’s ability to write review syntheses, researchers trained it on 143,000 existing reviews of over 14,000 beers. As with the wine dataset, the text of each review was paired with metadata including the product name, alcohol content, style, and scores given by the original reviewers.

As with the wine reviews, the research used independent study participants to judge whether the machine-written summaries captured and summarized the opinions of numerous reviews in a useful, human-like manner.

According to the paper, the model was successful at taking the reviews of a product as input and generating a synthesis review for that product as output.

“Our modeling framework could be useful in any situation where detailed attributes of a product are available and a written summary of the product is required,” said Vana. “It’s interesting to imagine how this could benefit restaurants that cannot afford sommeliers or independent sellers on online platforms who may sell hundreds of products.”

Both challenges used a deep learning neural net based on transformer architecture to ingest, process and output review language.

According to the research team, the computer systems are not intended to replace professional writers and marketers, but rather to assist them in their work. A machine-written review, for instance, could serve as a time-saving first draft of a review that a human reviewer could then revise. [emphasis mine]

The research can also help consumers. Syntheses reviews—like those on beer in the study—can be expanded to the constellation of products and services in online marketplaces to assist people who have limited time to read through many product reviews.

In addition to the benefits of machine-written reviews, the research team highlights some of the ethical challenges presented by using computer algorithms to influence human consumer behavior.

Noting that marketers could get better acceptance of machine-generated reviews by falsely attributing them to humans, the team advocates for transparency when computer-generated reviews are offered.

“As with other technology, we have to be cautious about how this advancement is used,” said Carlson. “If used responsibly, AI-generated reviews can be both a productivity tool and can support the availability of useful consumer information.”

Researchers contributing to the study include Praveen Kopalle, Dartmouth’s Tuck School of Business; Allen Riddell, Indiana University, and Daniel Rockmore, Dartmouth College.

I wonder if the second news release was written by an AI agent.
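Neither news release shares code, but the general recipe they describe (a transformer language model trained on reviews that have been paired with metadata tags such as origin, variety, rating, and price) can be sketched with off-the-shelf tools. What follows is my own illustration of that recipe, not the Dartmouth team’s implementation; the metadata prompt format and the training details are assumptions.

```python
# A rough sketch of the pipeline described above: a pretrained transformer
# language model fine-tuned on reviews prefixed with their metadata.
# This is a reconstruction of the general technique, not the Dartmouth
# team's actual code; the prompt format is invented.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

# Hypothetical training examples: metadata tags followed by the review text.
examples = [
    "origin: Napa | variety: Cabernet Sauvignon | rating: 88 | price: $25\n"
    "This is a sound Cabernet. It's very dry and a little thin in blackberry fruit.",
    "origin: Provence | variety: Rosé | rating: 90 | price: $18\n"
    "Pretty dark for a rosé, and full-bodied, with cherry and spice flavors.",
]

model.train()
for text in examples:                  # in reality: ~180,000 reviews, many epochs
    batch = tok(text, return_tensors="pt")
    loss = model(**batch, labels=batch["input_ids"]).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# After training, a metadata prompt alone should yield a review-like completion.
model.eval()
prompt = "origin: Willamette Valley | variety: Pinot Noir | rating: 91 | price: $30\n"
ids = tok(prompt, return_tensors="pt").input_ids
out = model.generate(ids, max_new_tokens=60, do_sample=True, top_p=0.9,
                     pad_token_id=tok.eos_token_id)
print(tok.decode(out[0], skip_special_tokens=True))
```

In the actual study, roughly 180,000 wine reviews and 143,000 beer reviews were used for training, which is what lets the generated text read as fluently as the examples quoted above.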

Here’s a link to and a citation for the paper,

Complementing human effort in online reviews: A deep learning approach to automatic content generation and review synthesis by Keith Carlson, Praveen K. Kopalle, Allen Riddell, Daniel Rockmore, and Prasad Vana. International Journal of Research in Marketing. DOI: https://doi.org/10.1016/j.ijresmar.2022.02.004 Available online 12 February 2022. In Press, Corrected Proof.

This paper is behind a paywall.

Daniel (Dan) Rockmore was mentioned here in a May 6, 2016 posting about a competition he’d set up through Dartmouth College’s Neukom Institute. The competition, which doesn’t seem to have been run since 2018, was called Turing Tests in Creative Arts.

Editing

It seems the American Chemical Society (ACS) has decided to further automate some of its editing. From an April 28, 2022 Digital Science business announcement (also on EurekAlert) by David Ellis,

Writefull’s world-leading AI-based language services have been integrated into the American Chemical Society’s (ACS) Publications workflow.

In a partnership that began almost two years ago, ACS has now progressed to a full integration of Writefull’s application programming interfaces (APIs) for three key uses.

One of the world’s largest scientific societies, ACS publishes more than 300,000 research manuscripts in more than 60 scholarly journals per year.

Writefull’s proprietary AI technology is trained on millions of scientific papers using Deep Learning. It identifies potential language issues with written texts, offers solutions to those issues, and automatically assesses texts’ language quality. Thanks to Writefull’s APIs, its tech can be applied at all key points in the editorial workflows.

Writefull’s Manuscript Categorization API is now used by ACS before copyediting to automatically classify all accepted manuscripts by their language quality. Using ACS’s own classification criteria, the API assigns a level-of-edit grade to manuscripts at scale without editors having to open documents and review the text. After thorough benchmarking alongside human editors, Writefull reached more than 95% alignment in grading texts, significantly reducing the time ACS spends on manuscript evaluation.

The same Manuscript Categorization API is now part of ACS’s quality control program, to evaluate the language in manuscripts after copyediting.

Writefull’s Metadata API is also being used to automate aspects of manuscript review, ensuring that all elements of an article are complete prior to publication. The same API is used by Open Access publisher Hindawi as a pre-submission structural checks tool for authors.

Juan Castro, co-founder and CEO of Writefull, says: “Our partnership with the American Chemical Society over the past two years has been aimed at thoroughly vetting and shaping our services to meet ACS’s needs. Writefull’s AI-based language services empower publishers to increase their workflow efficiency and positively impact production costs, while also maintaining the quality and integrity of the manuscript.”

Digital Science is a technology company working to make research more efficient. We invest in, nurture and support innovative businesses and technologies that make all parts of the research process more open and effective. Our portfolio includes admired brands including Altmetric, Dimensions, Figshare, ReadCube, Symplectic, IFI CLAIMS, GRID, Overleaf, Ripeta and Writefull. We believe that together, we can help researchers make a difference. Visit www.digital-science.com and follow @digitalsci on Twitter.

Writefull is a technology startup that creates tools to help researchers improve their writing in English. The first version of the Writefull product allowed researchers to discover patterns in academic language, such as frequent word combinations and synonyms in context. The new version utilises Natural Language Processing and Deep Learning algorithms that will give researchers feedback on their full texts. Visit writefull.com and follow @writefullapp on Twitter.

The American Chemical Society (ACS) is a nonprofit organization chartered by the U.S. Congress. ACS’ mission is to advance the broader chemistry enterprise and its practitioners for the benefit of Earth and all its people. The Society is a global leader in promoting excellence in science education and providing access to chemistry-related information and research through its multiple research solutions, peer-reviewed journals, scientific conferences, eBooks and weekly news periodical Chemical & Engineering News. ACS journals are among the most cited, most trusted and most read within the scientific literature; however, ACS itself does not conduct chemical research. As a leader in scientific information solutions, its CAS division partners with global innovators to accelerate breakthroughs by curating, connecting and analyzing the world’s scientific knowledge. ACS’ main offices are in Washington, D.C., and Columbus, Ohio. Visit www.acs.org and follow @AmerChemSociety on Twitter.

So what?

An artificial intelligence (AI) agent being used for writing assignments is not new (see my July 16, 2014 posting titled, “Writing and AI or is a robot writing this blog?”). The argument that these agents will assist rather than replace (pick an occupation: writers, doctors, programmers, scientists, etc.) is almost always made by scientists explaining that AI agents will take over the boring work, giving you (the human) more opportunities to do interesting work. The AI-written beer and wine reviews described here support at least part of that argument, for the time being.

It’s true that an AI agent can’t taste beer or wine, but that could change, as this August 8, 2019 article by Alice Johnston for CNN hints (Note: Links have been removed),

An artificial “tongue” that can taste minute differences between varieties of Scotch whisky could be the key to identifying counterfeit alcohol, scientists say.

Engineers from the universities of Glasgow and Strathclyde in Scotland created a device made of gold and aluminum and measured how it absorbed light when submerged in different kinds of whisky.

Analysis of the results allowed the scientists to identify the samples from Glenfiddich, Glen Marnoch and Laphroaig with more than 99% accuracy.
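To give a sense of what that “analysis of the results” step can look like in code, here’s a minimal sketch of classifying drinks from absorbance-style spectra. It is not the Glasgow/Strathclyde team’s actual pipeline; the synthetic spectra and the nearest-neighbour classifier are assumptions made purely for illustration.

```python
# Purely illustrative: classify "whiskies" from simulated absorbance spectra.
# The real Glasgow/Strathclyde work used an engineered plasmonic "tongue";
# the synthetic data and the choice of classifier here are assumptions.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
labels = ["Glenfiddich", "Glen Marnoch", "Laphroaig"]

# Simulate 60 spectra per whisky: a label-specific baseline curve plus noise.
X, y = [], []
for i, label in enumerate(labels):
    baseline = np.sin(np.linspace(0, 3, 100) + i)  # stand-in for a real spectrum
    for _ in range(60):
        X.append(baseline + rng.normal(scale=0.05, size=100))
        y.append(label)
X, y = np.array(X), np.array(y)

# Train on most of the spectra, then check accuracy on held-out samples.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)
clf = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)
print(f"Accuracy on held-out spectra: {clf.score(X_test, y_test):.2%}")
```

On real data you would swap the simulated baselines for measured absorbance curves, but the train-then-score pattern stays the same.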

BTW, my earliest piece on artificial tongues is a July 28, 2011 posting, “Bio-inspired electronic tongue replaces sommelier?,” about research in Spain.

By contrast, this is the first time I can recall seeing anything about an artificial intelligence agent that edits, and Writefull’s use at the ACS falls quite neatly into the ‘doing all the boring work’ category and narrative.
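To make the ‘level-of-edit grading’ idea concrete, here’s a minimal sketch of how a publisher-side script might send manuscripts to a language-quality categorization service. Neither the Digital Science announcement nor ACS publishes the actual endpoint, field names, or grade labels, so everything below (the URL, the JSON fields, the grades) is hypothetical.

```python
# Hypothetical sketch of batch "level-of-edit" grading before copyediting.
# The endpoint URL, request/response fields, and grade labels are assumptions,
# not Writefull's documented API.
import requests

API_URL = "https://api.example-language-service.com/v1/categorize"  # placeholder
API_KEY = "YOUR_API_KEY"  # placeholder credential

def grade_manuscript(text: str) -> str:
    """Send manuscript text to the (hypothetical) service and return its grade."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"text": text},
        timeout=30,
    )
    response.raise_for_status()
    # Assume a response shaped like {"language_quality": "light-edit"}.
    return response.json()["language_quality"]

if __name__ == "__main__":
    manuscripts = {
        "ms-001": "The catalyst were prepared by mixing of the precursors ...",
        "ms-002": "We report a robust, scalable synthesis of the title compound ...",
    }
    for ms_id, body in manuscripts.items():
        print(ms_id, grade_manuscript(body))
```

The point of the real integration, as described in the announcement, is that this kind of call happens at scale inside the editorial workflow, so no editor has to open a document just to assign it a level of edit.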

Having looked at the definitions of the various forms of editing and their core skills, I’m guessing that AI will take over every aspect (from the Editors’ Association of Canada, Definitions of Editorial Skills webpage),

CORE SKILLS

Structural Editing

Assessing and shaping draft material to improve its organization and content. Changes may be suggested to or drafted for the writer. Structural editing may include:

revising, reordering, cutting, or expanding material

writing original material

determining whether permissions are necessary for third-party material

recasting material that would be better presented in another form, or revising material for a different medium (such as revising print copy for web copy)

clarifying plot, characterization, or thematic elements

Also known as substantive editing, manuscript editing, content editing, or developmental editing.

Stylistic Editing

Editing to clarify meaning, ensure coherence and flow, and refine the language. It includes:

eliminating jargon, clichés, and euphemisms

establishing or maintaining the language level appropriate for the intended audience, medium, and purpose

adjusting the length and structure of sentences and paragraphs

establishing or maintaining tone, mood, style, and authorial voice or level of formality

Also known as line editing (which may also include copy editing).

Copy Editing

Editing to ensure correctness, accuracy, consistency, and completeness. It includes:

editing for grammar, spelling, punctuation, and usage

checking for consistency and continuity of mechanics and facts, including anachronisms, character names, and relationships

editing tables, figures, and lists

notifying designers of any unusual production requirements

developing a style sheet or following one that is provided

correcting or querying general information that should be checked for accuracy 

It may also include:

marking levels of headings and the approximate placement of art

Canadianizing or other localizing

converting measurements

providing or changing the system of citations

editing indexes

obtaining or listing permissions needed

checking front matter, back matter, and cover copy

checking web links

Note that “copy editing” is often loosely used to include stylistic editing, structural editing, fact checking, or proofreading. Editors Canada uses it only as defined above.

Proofreading

Examining material after layout or in its final format to correct errors in textual and visual elements. The material may be read in isolation or against a previous version. It includes checking for:

adherence to design

minor mechanical errors (such as spelling mistakes or deviations from style sheet)

consistency and accuracy of elements in the material (such as cross-references, running heads, captions, web page heading tags, hyperlinks, and metadata)

It may also include:

distinguishing between printer’s, designer’s, or programmer’s errors and writer’s or editor’s alterations

copyfitting

flagging or checking locations of art

inserting page numbers or checking them against content and page references

Note that proofreading is checking a work after editing; it is not a substitute for editing.

I’m just as happy as anyone else to get rid of the ‘boring’ parts of my work, but those boring, repetitive tasks are how I learned in the first place, and I haven’t seen any discussion of their importance for learning.

East/West collaboration on scholarship and imagination about humanity’s long-term future— six new fellows at Berggruen Research Center at Peking University

According to a January 4, 2022 Berggruen Institute announcement (also received via email), they have appointed a new crop of fellows for their research center at Peking University,

The Berggruen Institute has announced six scientists and philosophers to serve as Fellows at the Berggruen Research Center at Peking University in Beijing, China. These eminent scholars will work together across disciplines to explore how the great transformations of our time may shift human experience and self-understanding in the decades and centuries to come.

The new Fellows are Chenjian Li, University Chair Professor at Peking University; Xianglong Zhang, professor of philosophy at Peking University; Xiaoli Liu, professor of philosophy at Renmin University of China; Jianqiao Ge, lecturer at the Academy for Advanced Interdisciplinary Studies (AAIS) at Peking University; Xiaoping Chen, Director of the Robotics Laboratory at the University of Science and Technology of China; and Haidan Chen, associate professor of medical ethics and law at the School of Health Humanities at Peking University.

“Amid the pandemic, climate change, and the rest of the severe challenges of today, our Fellows are surmounting linguistic and cultural barriers to imagine positive futures for all people,” said Bing Song, Director of the China Center and Vice President of the Berggruen Institute. “Dialogue and shared understanding are crucial if we are to understand what today’s breakthroughs in science and technology really mean for the human community and the planet we all share.”

The Fellows will investigate deep questions raised by new understandings and capabilities in science and technology, exploring their implications for philosophy and other areas of study.  Chenjian Li is considering the philosophical and ethical considerations of gene editing technology. Meanwhile, Haidan Chen is exploring the social implications of brain/computer interface technologies in China, while Xiaoli Liu is studying philosophical issues arising from the intersections among psychology, neuroscience, artificial intelligence, and art.

Jianqiao Ge’s project considers the impact of artificial intelligence on the human brain, given the relative recency of its evolution into current form. Xianglong Zhang’s work explores the interplay between literary culture and the development of technology. Finally, Xiaoping Chen is developing a new concept for describing innovation that draws from Daoist, Confucianist, and ancient Greek philosophical traditions.

Fellows at the China Center meet monthly with the Institute’s Los Angeles-based Fellows. These fora provide an opportunity for all Fellows to share and discuss their work. Through this cross-cultural dialogue, the Institute is helping to ensure a continued high-level exchange of ideas among China, the United States, and the rest of the world about some of the deepest and most fundamental questions humanity faces today.

“Changes in our capability and understanding of the physical world affect all of humanity, and questions about their implications must be pondered at a cross-cultural level,” said Bing. “Through multidisciplinary dialogue that crosses the gulf between East and West, our Fellows are pioneering new thought about what it means to be human.”

Haidan Chen is associate professor of medical ethics and law at the School of Health Humanities at Peking University. She was a visiting postgraduate researcher at the Institute for the Study of Science, Technology and Innovation (ISSTI), the University of Edinburgh; a visiting scholar at the Brocher Foundation, Switzerland; and a Fulbright visiting scholar at the Center for Biomedical Ethics, Stanford University. Her research interests embrace the ethical, legal, and social implications (ELSI) of genetics and genomics, and the governance of emerging technologies, in particular stem cells, biobanks, precision medicine, and brain science. Her publications appear in Social Science & Medicine, Bioethics, and other journals.

Xiaoping Chen is the director of the Robotics Laboratory at University of Science and Technology of China. He also currently serves as the director of the Robot Technical Standard Innovation Base, an executive member of the Global AI Council, Chair of the Chinese RoboCup Committee, and a member of the International RoboCup Federation’s Board of Trustees. He has received the USTC’s Distinguished Research Presidential Award and won Best Paper at IEEE ROBIO 2016. His projects have won the IJCAI’s Best Autonomous Robot and Best General-Purpose Robot awards as well as twelve world champions at RoboCup. He proposed an intelligent technology pathway for robots based on Open Knowledge and the Rong-Cha principle, which have been implemented and tested in the long-term research on KeJia and JiaJia intelligent robot systems.

Jianqiao Ge is a lecturer at the Academy for Advanced Interdisciplinary Studies (AAIS) at Peking University. Before that, she was a postdoctoral fellow at the University of Chicago and the Principal Investigator / Co-Investigator of more than 10 research grants supported by the Ministry of Science and Technology of China, the National Natural Science Foundation of China, and the Beijing Municipal Science & Technology Commission. She has published more than 20 peer-reviewed articles in leading academic journals such as PNAS and the Journal of Neuroscience, and has been awarded two national patents. In 2008, by scanning the human brain with functional MRI, Ge and her collaborator were among the first to confirm that the human brain engages distinct neurocognitive strategies to comprehend human intelligence and artificial intelligence. Ge received her Ph.D. in psychology, a B.S. in physics, a double B.S. in mathematics and applied mathematics, and a double B.S. in economics from Peking University.

Chenjian Li is the University Chair Professor of Peking University. He also serves on the China Advisory Board of Eli Lilly and Company, the China Advisory Board of Cornell University, and the Rhodes Scholar Selection Committee. He is an alumnus of Peking University’s Biology Department, Peking Union Medical College, and Purdue University. He was the former Vice Provost of Peking University, Executive Dean of Yuanpei College, and Associate Dean of the School of Life Sciences at Peking University. Prior to his return to China, he was an associate professor at Weill Medical College of Cornell University and the Aidekman Endowed Chair of Neurology at Mount Sinai School of Medicine. Dr. Li’s academic research focuses on the molecular and cellular mechanisms of neurological diseases, cancer drug development, and gene-editing and its philosophical and ethical considerations. Li also writes as a public intellectual on science and humanity, and his Chinese translation of Richard Feynman’s book What Do You Care What Other People Think? received the 2001 National Publisher’s Book Award.

Xiaoli Liu is professor of philosophy at Renmin University. She is also Director of the Chinese Society of Philosophy of Science. Her primary research interests are philosophy of mathematics, philosophy of science, and philosophy of cognitive science. Her main works are “Life of Reason: A Study of Gödel’s Thought,” “Challenges of Cognitive Science to Contemporary Philosophy,” and “Philosophical Issues in the Frontiers of Cognitive Science.” She edited “Symphony of Mind and Machine” and the book series “Mind and Cognition.” In 2003, she co-founded the “Mind and Machine workshop” with interdisciplinary scholars, which has held 18 consecutive annual meetings. Liu received her Ph.D. from Peking University and was a senior visiting scholar at Harvard University.

Xianglong Zhang is a professor of philosophy at Peking University. His research areas include Confucian philosophy, phenomenology, Western and Eastern comparative philosophy. His major works (in Chinese except where noted) include: Heidegger’s Thought and Chinese Tao of Heaven; Biography of Heidegger; From Phenomenology to Confucius; The Exposition and Comments of Contemporary Western Philosophy; The Exposition and Comments of Classic Western Philosophy; Thinking to Take Refuge: The Chinese Ancient Philosophies in the Globalization; Lectures on the History of Confucian Philosophy (four volumes); German Philosophy, German Culture and Chinese Philosophical Thinking; Home and Filial Piety: From the View between the Chinese and the Western.

About the Berggruen China Center
Breakthroughs in artificial intelligence and life science have led to the fourth scientific and technological revolution. The Berggruen China Center is a hub for East-West research and dialogue dedicated to the cross-cultural and interdisciplinary study of the transformations affecting humanity. Intellectual themes for research programs are focused on frontier sciences, technologies, and philosophy, as well as issues involving digital governance and globalization.

About the Berggruen Institute:
The Berggruen Institute’s mission is to develop foundational ideas and shape political, economic, and social institutions for the 21st century. Providing critical analysis using an outwardly expansive and purposeful network, we bring together some of the best minds and most authoritative voices from across cultural and political boundaries to explore fundamental questions of our time. Our objective is enduring impact on the progress and direction of societies around the world. To date, projects inaugurated at the Berggruen Institute have helped develop a youth jobs plan for Europe, fostered a more open and constructive dialogue between Chinese leadership and the West, strengthened the ballot initiative process in California, and launched Noema, a new publication that brings thought leaders from around the world together to share ideas. In addition, the Berggruen Prize, a $1 million award, is conferred annually by an independent jury to a thinker whose ideas are shaping human self-understanding to advance humankind.

You can find out more about the Berggruen China Center here and you can access a list along with biographies of all the Berggruen Institute fellows here.

Getting ready

I look forward to hearing about the projects from these thinkers.

Gene editing and ethics

I may have to reread some books in anticipation of Chenjian Li’s philosophical work and ethical considerations of gene editing technology. I wonder if there’ll be any reference to the He Jiankui affair.

(Briefly, for those who may not be familiar with the situation: He claimed to be the first to gene edit babies. In November 2018, news about the twins, Lulu and Nana, was a sensation and He was roundly criticized for his work. I have not seen any information about how many babies were gene edited for He’s research; there could be as many as six. My July 28, 2020 posting provided an update. I haven’t stumbled across anything substantive since then.)

There are two books I recommend should you be interested in gene editing, as told through the lens of the He Jiankui affair. If you can, read both as that will give you a more complete picture.

In no particular order: Kevin Davies’ 2020 book, “Editing Humanity: The CRISPR Revolution and the New Era of Genome Editing,” provides an extensive and accessible look at the science, the politics of scientific research, and some of the pressures on scientists of all countries. It is an excellent introduction from an insider. Here’s more from Davies’ biographical sketch,

Kevin Davies is the executive editor of The CRISPR Journal and the founding editor of Nature Genetics. He holds an MA in biochemistry from the University of Oxford and a PhD in molecular genetics from the University of London. He is the author of Cracking the Genome, The $1,000 Genome, and co-authored a new edition of DNA: The Story of the Genetic Revolution with Nobel Laureate James D. Watson and Andrew Berry. …

The other book is “The Mutant Project: Inside the Global Race to Genetically Modify Humans” (2020) by Eben Kirksey, an anthropologist who has an undergraduate degree in one of the sciences. He too provides scientific underpinning, but his focus is on the cultural and personal dimensions of the He Jiankui affair, on the culture of science research, irrespective of where it’s practiced, and on the culture associated with the DIY (do-it-yourself) Biology community. Here’s more from Kirksey’s biographical sketch,

EBEN KIRKSEY is an American anthropologist and Member of the Institute for Advanced Study in Princeton, New Jersey. He has been published in Wired, The Atlantic, The Guardian and The Sunday Times. He is sought out as an expert on science in society by the Associated Press, The Wall Street Journal, The New York Times, Democracy Now, Time and the BBC, among other media outlets. He speaks widely at the world’s leading academic institutions including Oxford, Yale, Columbia, UCLA, and the International Summit of Human Genome Editing, plus music festivals, art exhibits, and community events. Professor Kirksey holds a long-term position at Deakin University in Melbourne, Australia.

Brain/computer interfaces (BCI)

I’m happy to see that Haidan Chen will be exploring the social implications of brain/computer interface technologies in China. I haven’t seen much being done here in Canada but my December 23, 2021 posting, Your cyborg future (brain-computer interface) is closer than you think, highlights work being done at the Imperial College London (ICL),

“For some of these patients, these devices become such an integrated part of themselves that they refuse to have them removed at the end of the clinical trial,” said Rylie Green, one of the authors. “It has become increasingly evident that neurotechnologies have the potential to profoundly shape our own human experience and sense of self.”

You might also find my September 17, 2020 posting has some useful information. Check under the “Brain-computer interfaces, symbiosis, and ethical issues” subhead for another story about attachment to one’s brain implant and also the “Finally” subhead for more reading suggestions.

Artificial intelligence (AI), art, and the brain

I’ve lumped together three of the thinkers, Xiaoli Liu, Jianqiao Ge and Xianglong Zhang, as there is some overlap (in my mind, if nowhere else),

  • Liu’s work on philosophical issues as seen in the intersections of psychology, neuroscience, artificial intelligence, and art
  • Ge’s work on the evolution of the brain and the impact that artificial intelligence may have on it
  • Zhang’s work on the relationship between literary culture and the development of technology

A December 3, 2021 posting, True love with AI (artificial intelligence): The Nature of Things explores emotional and creative AI (long read), is both a review of a recent episode of the Canadian Broadcasting Corporation’s (CBC) science television series, The Nature of Things, and a dive into a number of issues as can be seen under subheads such as “AI and Creativity,” “Kazuo Ishiguro?” and “Evolution.”

You may also want to check out my December 27, 2021 posting, Ai-Da (robot artist) writes and performs poem honouring Dante’s 700th anniversary, for an eye opening experience. If nothing else, just watch the embedded video.

This suggestion relates most closely to Ge’s and Zhang’s work. If you haven’t already come across it, there’s Walter J. Ong’s 1982 book, “Orality and Literacy: The Technologizing of the Word.” From the introductory page of the 2002 edition (PDF),

This classic work explores the vast differences between oral and literate cultures and offers a brilliantly lucid account of the intellectual, literary and social effects of writing, print and electronic technology. In the course of his study, Walter J. Ong offers fascinating insights into oral genres across the globe and through time and examines the rise of abstract philosophical and scientific thinking. He considers the impact of orality-literacy studies not only on literary criticism and theory but on our very understanding of what it is to be a human being, conscious of self and other.

In 2013, a 30th anniversary edition of the book was released and is still in print.

Philosophical traditions

I’m very excited to learn more about Xiaoping Chen’s work describing innovation that draws from Daoist, Confucianist, and ancient Greek philosophical traditions.

Should any of my readers have suggestions for introductory readings on these philosophical traditions, please do use the Comments option for this blog. In fact, if you have suggestions for other readings on these topics, I would be very happy to learn of them.

Congratulations to the six Fellows at the Berggruen Research Center at Peking University in Beijing, China. I look forward to reading articles about your work in the Berggruen Institute’s Noema magazine and, possibly, attending your online events.

Science and stories: an online talk January 5, 2022 and a course starting on January 10, 2022

So far this year all I’ve been posting about are events and contests. Continuing on that theme, I have an event and, something new, a course.

Massey Dialogues on January 5, 2022, 1 – 2 pm PST

“The Art of Science-Telling: How Science Education Can Shape Society” is scheduled for today (Wednesday, January 5, 2022 at 1 pm PST or 4 pm EST). You can find the livestream here on YouTube,

Massey College

Join us for the first Massey Dialogues of 2022 from 4:00-5:00pm ET on the Art of Science-Telling: How Science Education Can Shape Society.

Farah Qaiser (Evidence for Democracy), Dr. Bonnie Schmidt (Let’s Talk Science) and Carolyn Tuohy (Senior Fellow) will discuss what nonprofits can do for science education and policy, moderated by Junior Fellow Keshna Sood.

The Dialogues are open to the public – we invite everyone to join and take part in what will be a very informative online discussion. Participants are invited to submit questions to the speakers in real time via the Chat function to the right of the screen.

——-

To ensure you never miss a Massey Event, subscribe to our YouTube channel: https://www.youtube.com/user/masseyco…

We also invite you to visit masseycollege.ca/calendar for upcoming events.

Follow us on social media:

twitter.com/masseycollege
instagram.com/massey_college
linkedin.com/school/massey-college
facebook.com/MasseyCollege

Support our work: masseycollege.ca/support-us

You can find out more about the Massey Dialogues here. As for the college, it’s affiliated with the University of Toronto as per the information on the College’s Governance webpage.

Simon Fraser University (SFU; Vancouver, Canada) and a science communication course

I stumbled across “Telling Science Stories” being offered for SFU’s Spring 2022 semester in my twitter feed. Apparently there’s still space for students in the course.

I was a little surprised by how hard it was to find basic information, such as when the course starts. Yes, I found that and more; here’s what I managed to dig up,

From the PUB 480/877 Telling Science Stories course description webpage,

In this hands-on course, students will learn the value of sharing research knowledge beyond the university walls, along with the skills necessary to become effective science storytellers.

Climate change, vaccines, artificial intelligence, genetic editing — these are just a few examples of the essential role scientific evidence can play in society. But connecting science and society is no simple task: it requires key publishing and communication skills, as well as an understanding of the values, goals, and needs of the publics who stand to benefit from this knowledge.

This course will provide students with core skills and knowledge needed to share compelling science stories with diverse audiences, in a variety of formats. Whether it’s through writing books, podcasting, or creating science art, students will learn why we communicate science, develop an understanding of the core principles of effective audience engagement, and gain skills in publishing professional science content for print, radio, and online formats. The instructor is herself a science writer and communicator; in addition, students will have the opportunity to learn from a wide range of guest lecturers, including authors, artists, podcasters, and more. While priority will be given to students enrolled in the Publishing Minor, this course is open to all students who are interested in the evolving relationship between science and society.

I’m not sure if an outsider (someone who’s not a member of the SFU student body) can attend but it doesn’t hurt to ask.

The course is being given by Alice Fleerackers, here’s more from her profile page on the ScholCommLab (Scholarly Communications Laboratory) website,

Alice Fleerackers is a researcher and lab manager at the ScholCommLab and a doctoral student at Simon Fraser University’s Interdisciplinary Studies program, where she works under the supervision of Dr. Juan Pablo Alperin to explore how health science is communicated online. Her doctoral research is supported by a Joseph-Armand Bombardier Canada Graduate Scholarship from SSHRC and a Michael Stevenson Graduate Scholarship from SFU.

In addition, Alice volunteers with a number of non-profit organizations in an effort to foster greater public understanding and engagement with science. She is a Research Officer at Art the Science, Academic Liaison of Science Borealis, Board Member of the Science Writers and Communicators of Canada (SWCC), and a member of the Scientific Committee for the Public Communication of Science and Technology Network (PCST). She is also a freelance health and science writer whose work has appeared in the Globe and Mail, National Post, and Nautilus, among other outlets. Find her on Twitter at @FleerackersA.

Logistics such as when and where the course is being held (from the course outline webpage),

Telling Science Stories

Class Number: 4706

Delivery Method: In Person

Course Times + Location: Tu, Th 10:30 AM – 12:20 PM
HCC 2540, Vancouver

Instructor: Alice Fleerackers
afleerac@sfu.ca

According to the Spring 2022 Calendar Academic Dates webpage, the course starts on Monday, January 10, 2022, and I believe the room number (HCC 2540) means the course will be held at SFU’s downtown Vancouver site at Harbour Centre, 515 West Hastings Street.

Given that SFU claims to be “Canada’s leading engaged university,” they do a remarkably poor job of actually engaging with anyone who’s not a member of the community, i.e., an outsider.

FrogHeart casts an eye back to 2021 then looks forward to 2022 and contronyms

Casting an eye back isn’t one of my strong points. Thankfully I can’t be forced into making a top 10 list of some kind. Should someone be deeply disappointed (tongue in cheek) that I failed to mention one of the big 2021 stories featured here, please leave a note in the Comments for this blog and I’ll do my best to add it.

Note: I very rarely feature space exploration unless there’s a nanotechnology or other emerging technology angle to it. There are a lot of people who do a much better job of covering space exploration than I can. (If you’re interested in an overview from a Canadian on the international race to space, you can start with this December 29, 2021 posting “Looking back at a booming year in space” by Bob McDonald of CBC’s [Canadian Broadcasting Corporation] Quirks & Quarks science radio programme.)

Now, onto FrogHeart’s latest year.

2021

One of the standout stories in 2020/21 here and many, many places was the rise of the biotechnology community in British Columbia and elsewhere in Canada. Lipid nanoparticles used in COVID-19 vaccines became far better known than they ever had before and AbCellera took the business world by storm as its founder became a COVID billionaire.

Here is a sampling of the BC biotechnology/COVID-19 stories featured here,

  • “Avo Media, Science Telephone, and a Canadian COVID-19 billionaire scientist” December 30, 2020 posting
  • “Why is Precision Nanosystems Inc. in the local (Vancouver, Canada) newspaper?” January 22, 2021 posting Note: The company is best known for its work on lipid nanoparticles
  • “mRNA, COVID-19 vaccines, treating genetic diseases before birth, and the scientist who started it all” March 5, 2021 posting Note: This posting also notes a Canadian connection in relation to mRNA in the subsection titled “Entrepreneurs rush in”
  • “Getting erased from the mRNA/COVID-19 story” August 20, 2021 posting Note: This features a fascinating story from Nathan Vardi (for Forbes) of professional jealousies, competitiveness, and a failure to recognize opportunity when she comes visiting.
  • “Who’s running the life science companies’ public relations campaign in British Columbia (Vancouver, Canada)?” August 23, 2021 posting Note: This explores the biotech companies, the network, and provincial and federal funding, as well as, municipal (City of Vancouver) support and more.

Sadly, I did not have time to feature this September 14, 2021 article (The tangled history of mRNA vaccines; Hundreds of scientists had worked on mRNA vaccines for decades before the coronavirus pandemic brought a breakthrough.) by Elie Dolgin for Nature magazine.

Dolgin starts the story in 1987 and covers many players who were new to me, although I did recognize some of the more recent and Canadian players such as Pieter Cullis and Ian MacLachlan. *ETA January 3, 2022: Cullis and MacLachlan are both mentioned in my ‘Getting erased …’ August 20, 2021 posting.* Fun fact: Pieter Cullis was just named an Officer of the Order of Canada (from the Governor General’s December 29, 2021 news release),

Pieter Cullis, O.C.
Vancouver, British Columbia

For his contributions to the advancement of biomedical research and drug development, and for his mentorship of the next generation of scientists and entrepreneurs.

Back to this roundup, I got interested in greener lithium mining, given its importance for batteries in electric vehicles and elsewhere,

2021 seems to have been the year when the science community started podcasting in a big way. Either the podcast was started this year or I stumbled across it this year (meaning it’s likely a podcast that is getting publicized because they had a good first year and they want more listeners for their second year),

  • “New podcast—Mission: Interplanetary and Event Rap: a one-stop custom rap shop Kickstarter” April 30, 2021 posting
  • “Superstar engineers and fantastic fiction writers podcast series” June 28, 2021 posting
  • “Periodically Political: a Canadian podcast from Elect STEM” August 16, 2021 posting
  • “Unlocking Science: a new podcast series launches on November 16, 2021” November 16, 2021 posting
  • “Lost Women of Science” December 2, 2021 posting
  • “Nerdin’ About and Science Diction: a couple of science podcasts” Note: Not posted but maybe one day. Meanwhile, here they are:
    • Nerdin’ About describes itself as, “… a podcast where passionate nerds tell us about their research, their interests, and what they’ve been Nerdin’ About lately. A spin-off of Nerd Nite Vancouver, a community lecture series held in a bar, Nerdin’ About is here to explore these questions with you. Hosted by rat researcher Kaylee Byers (she/her) and astronomy educator Michael Unger (he/him). Elise Lane (she/her) is our Mixing Engineer. Music by Jay Arner. Artwork by Armin Mortazavi.”
    • Science Diction is a podcast offshoot of Science Friday (SciFri), a US National Public Radio (NPR) programme. “… Hosted by SciFri producer and self-proclaimed word nerd Johanna Mayer, each episode of Science Diction digs into the origin of a single word or phrase, and, with the help of historians, authors, etymologists, and scientists, reveals a surprising science connection. Did you know the origin of the word meme has more to do with evolutionary biology than lolcats? Or that the element cobalt takes its name from a very cheeky goblin from German folklore? …”
  • “Podcast episode from the Imperial College London features women’s hearts, psychedelic worldviews, and nanotechnology for children” Note: Not posted but maybe one day.
  • “Alberta-based podcast explores AI (Artificial Intelligence)” Note 1: You’ll find season one and two on the page I’ve linked to; just keep scrolling. Note 2: Not posted but maybe one day.
  • “Own the Science Podcast/À vous la science balado” Note: Not posted but maybe one day.

Integrating the body with machines is an ongoing interest of mine; these particular 2021 postings stood out, but there are other postings (click on the Human Enhancement category or search the tag ‘machine/flesh’).

I wrote a few major (long) pieces this year,

  • “Interior Infinite: carnival & chaos, a June 26 – September 5, 2021 show at Polygon Art Gallery (North Vancouver, Canada)” July 26, 2021 posting Note: While this isn’t an art/sci posting it does touch on a topic near and dear to my heart, writers. In particular, the literary theorist, Mikhail Mikhailovich Bakhtin.
  • “The metaverse or not” October 22, 2021 posting Note: What can I say? The marketing hype got to me.
  • “True love with AI (artificial intelligence): The Nature of Things explores emotional and creative AI (long read)” December 3, 2021 posting

2022 and contronyms

I don’t make psychic predictions. As far as I’m concerned, 2022 will be a continuation of 2021, albeit with a few surprises.

My focus on nanotechnology and emerging technologies will remain. I expect artificial intelligence, CRISPR and gene editing (in general), quantum computing (technical work and commercialization), and neuromorphic computing will continue to make news. As for anything else, well, it wouldn’t be a surprise if you knew it was coming.

With regard to this blog, I keep thinking about cutting back so I can focus on other projects. Whether I finally follow through this year is a mystery to me.

Because words and writing are important to me, I’d like to end the year with this piece, which I found in early December 2021: “25 Words That Are Their Own Opposites” by Judith Herman on getpocket.com, originally written for Mental Floss and published June 15, 2018,

Here’s an ambiguous sentence for you: “Because of the agency’s oversight, the corporation’s behavior was sanctioned.” Does that mean, “Because the agency oversaw the company’s behavior, they imposed a penalty for some transgression,” or does it mean, “Because the agency was inattentive, they overlooked the misbehavior and gave it their approval by default”? We’ve stumbled into the looking-glass world of contronyms—words that are their own antonyms.

1. Sanction (via French, from Latin sanctio(n-), from sancire ‘ratify,’) can mean “give official permission or approval for (an action)” or conversely, “impose a penalty on.”

2. Oversight is the noun form of two verbs with contrary meanings, “oversee” and “overlook.” Oversee, from Old English ofersēon (“look at from above”) means “supervise” (medieval Latin for the same thing: super-, “over” plus videre, “to see.”) Overlook usually means the opposite: “to fail to see or observe; to pass over without noticing; to disregard, ignore.”

3. Left can mean either remaining or departed. If the gentlemen have withdrawn to the drawing room for after-dinner cigars, who’s left? (The gentlemen have left and the ladies are left.)

4. Dust, along with the next two words, is a noun turned into a verb meaning either to add or to remove the thing in question. Only the context will tell you which it is. When you dust are you applying dust or removing it? It depends whether you’re dusting the crops or the furniture.

The contronym (also spelled “contranym”) goes by many names, including auto-antonym, antagonym, enantiodrome, self-antonym, antilogy and Janus word (from the Roman god of beginnings and endings, often depicted with two faces looking in opposite directions). …

Herman made liberal use, which she acknowledged, of Mark Nichol’s article/list, “75 Contronyms (Words with Contradictory Meanings)” on Daily Writing Tips (Note: Based on the comments, Nichol’s list appears to have been posted sometime in 2011),

3. Bill: A payment, or an invoice for payment

4. Bolt: To secure, or to flee

46. Quantum: Significantly large, or a minuscule part

47. Quiddity: Essence, or a trifling point of contention

68. Trim: To decorate, or to remove excess from

69. Trip: A journey, or a stumble

Happy 2022!

Jean-Pierre Luminet awarded UNESCO’s Kalinga prize for Popularizing Science

Before getting to the news about Jean-Pierre Luminet, astrophysicist, poet, sculptor, and more, there’s the prize itself.

Established in 1951, a scant six years after UNESCO (United Nations Educational, Scientific and Cultural Organization) was founded in 1945, the Kalinga Prize for the Popularization of Science is the organization’s oldest prize. Here’s more from the UNESCO Kalinga Prize for the Popularization of Science webpage,

The UNESCO Kalinga Prize for the Popularization of Science is an international award to reward exceptional contributions made by individuals in communicating science to society and promoting the popularization of science. It is awarded to persons who have had a distinguished career as writer, editor, lecturer, radio, television, or web programme director, or film producer in helping interpret science, research and technology to the public. UNESCO Kalinga Prize winners know the potential power of science, technology, and research in improving public welfare, enriching the cultural heritage of nations and providing solutions to societal problems on the local, regional and global level.

The UNESCO Kalinga Prize for the Popularization of Science is UNESCO’s oldest prize, created in 1951 following a donation from Mr Bijoyanand Patnaik, Founder and President of the Kalinga Foundation Trust in India. Today, the Prize is funded by the Kalinga Foundation Trust, the Government of the State of Orissa, India, and the Government of India (Department of Science and Technology).

Jean-Pierre Luminet

From the November 4, 2021 UNESCO press release (also received via email),

French scientist and author Jean-Pierre Luminet will be awarded the 2021 UNESCO Kalinga Prize for the Popularization of Science. The prize-giving ceremony will take place online on 5 November as part of the celebration of World Science Day for Peace and Development.

An independent international jury selected Jean-Pierre Luminet recognizing his longstanding commitment to the popularization of science. Mr Luminet is a distinguished astrophysicist and cosmologist who has been promoting the values of scientific research through a wide variety of media: he has created popular science books and novels, beautifully illustrated exhibition catalogues, poetry, audiovisual materials for children and documentaries, notably “Du Big Bang au vivant” with Hubert Reeves. He is also an artist, engraver and sculptor and has collaborated with composers on musicals inspired by the sounds of the Universe.

His publications are model examples for communicating science to the public. Their scientific content is precise, rigorous and always state-of-the-art. He has written seven “scientific novels”, including “Le Secret de Copernic”, published in 2006. His recent book “Le destin de l’univers : trous noirs et énergie sombre”, about black holes and dark energy, was written for the general public and was praised for its outstanding scientific, historical, and literary qualities. Jean-Pierre Luminet’s work has been translated into many languages including Chinese and Korean.

There is a page for Luminet in both the French language and English language wikipedias. If you have the language skills, you might want to check out the French language essay as I found it to be more stylishly written.

Compare,

De par ses activités de poète, essayiste, romancier et scénariste, dans une œuvre voulant lier science, histoire, musique et art, il est également Officier des Arts et des Lettres.

With,

… Luminet has written fifteen science books,[4] seven historical novels,[4] TV documentaries,[5] and six poetry collections. He is an artist, an engraver, a sculptor, and a musician.

My rough translation of the French,

As a poet, essayist, novelist, and screenwriter, in a body of work that brings together science, history, music, and art, he is truly someone who has enriched the French cultural inheritance (which is what it means to be an Officer of Arts and Letters or Officier des Arts et des Lettres; see the English language entry for Ordre des Arts et des Lettres).

In any event, congratulations to M. Luminet.

Speed up your reading with an interactive typeface

A May 12, 2021 news item on ScienceDaily brings news of a technology that makes reading easier,

AdaptiFont has recently been presented at CHI, the leading Conference on Human Factors in Computing.

Language is without doubt the most pervasive medium for exchanging knowledge between humans. However, spoken language or abstract text need to be made visible in order to be read, be it in print or on screen.

How does the way a text looks affect its readability, that is, how it is being read, processed, and understood? A team at TU Darmstadt’s Centre for Cognitive Science investigated this question at the intersection of perceptual science, cognitive science, and linguistics. Electronic text is even more complex. Texts are read on different devices under different external conditions. And although any digital text is formatted initially, users might resize it on screen, change brightness and contrast of the display, or even select a different font when reading text on the web.

A May 12, 2021 Technische Universität Darmstadt (Technical University of Darmstadt; Germany) press release (also on EurekAlert) provides more detail,

The team of researchers from TU Darmstadt now developed a system that leaves font design to the user’s visual system. First, they needed to come up with a way of synthesizing new fonts. This was achieved by using a machine learning algorithm, which learned the structure of fonts analysing 25 popular and classic typefaces. The system is capable of creating an infinite number of new fonts that are any intermediate form of others – for example, visually halfway between Helvetica and Times New Roman.

Since some fonts may make it more difficult to read the text, they may slow the reader down. Other fonts may help the user read more fluently. Measuring reading speed, a second algorithm can now generate more typefaces that increase the reading speed.

In a laboratory experiment, in which users read texts over one hour, the research team showed that their algorithm indeed generates new fonts that increase individual users’ reading speed. Interestingly, all readers had their own personalized font that made reading especially easy for them. However, this individual favorite typeface does not necessarily fit in all situations. “AdaptiFont therefore can be understood as a system which creates fonts for an individual dynamically and continuously while reading, which maximizes the reading speed at the time of use. This may depend on the content of the text, whether you are tired, or perhaps are using different display devices,” explains Professor Constantin A. Rothkopf, Centre for Cognitive Science and head of the Institute of Psychology of Information Processing at TU Darmstadt.

The AdaptiFont system was recently presented to the scientific community at the Conference on Human Factors in Computing Systems (CHI). A patent application has been filed. Future possible applications are with all electronic devices on which text is read.

There’s a 5 minute video featuring the work, with narration from a researcher who speaks very quickly,

Here’s a link to and a citation for the paper,

AdaptiFont: Increasing Individuals’ Reading Speed with a Generative Font Model and Bayesian Optimization by Florian Kadner, Yannik Keller, Constantin Rothkopf. CHI ’21: Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems May 2021 Article No.: 585 Pages 1-11 DOI: https://doi.org/10.1145/3411764.3445140 Published: 06 May 2021

This paper is open access.

Artificial intelligence is not mentioned, but it’s hard to believe that the software’s adaptive learning is anything other than a form of AI.
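The paper’s title does point to Bayesian optimization, so, as a rough thought experiment, the adaptive loop might look something like the sketch below. The two-dimensional font space, the simulated reading-speed measurement, and the use of the scikit-optimize library are my assumptions for illustration only, not the authors’ implementation.

```python
# Rough, hypothetical sketch of "search a font space for maximum reading speed"
# with Bayesian optimization. The font parameterization and the fake
# reading-speed measurement below are illustrative assumptions only.
import numpy as np
from skopt import gp_minimize
from skopt.space import Real

rng = np.random.default_rng(0)

def reading_speed(font_params):
    """Stand-in for measuring a reader.

    In AdaptiFont, a point in the learned font space is rendered as an actual
    typeface and the user's reading speed is measured; here we fake that with
    a smooth function (peaked at an arbitrary 'ideal' font) plus noise.
    """
    x, y = font_params
    return 250 - 30 * ((x - 0.4) ** 2 + (y - 0.7) ** 2) + rng.normal(scale=2)

# gp_minimize minimizes its objective, so we minimize negative reading speed.
result = gp_minimize(
    lambda p: -reading_speed(p),
    dimensions=[Real(0.0, 1.0, name="font_axis_1"), Real(0.0, 1.0, name="font_axis_2")],
    n_calls=30,
    random_state=0,
)
print("Best font parameters found:", result.x)
print("Estimated reading speed (words per minute):", -result.fun)
```

In the real system the loop would run continuously while the person reads, so the ‘best’ font keeps shifting with fatigue, content, and display, which is exactly the point Rothkopf makes above.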