Tag Archives: Janelle Shane

Speaking in Color, an AI-powered paint tool

This June 16, 2022 article by Jeff Beer for Fast Company took me in an unexpected direction but first, there’s this from Beer’s story,

If an architect wanted to create a building that matched the color of a New York City summer sunset, they’d have to pore over potentially hundreds of color cards designed for industry to get anything close, and still it’d be a tall order to find that exact match. But a new AI-powered, voice-controlled tool from Sherwin-Williams aims to change that.

The paint brand recently launched Speaking in Color, a tool that allows users to tell it about certain places, objects, or shades in order to arrive at that perfect color. You start with a broad description like, say, “New York City summer sunset,” and then fine tune from there once it responds with photos and other options with more in-depth preferences like “darker red,” “make it moodier,” or “add a sliver of sun,” until it’s done.

Developed with agency Wunderman Thompson, it’s a React web app that uses natural language to find your preferred color using both third-party and proprietary code. The tool’s custom algorithm allows you to tweak colors in a way that translates statements like “make it dimmer,” “add warmth,” or “more like the 1980s” into mathematical adjustments.
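As an aside, here's my own rough sketch of what translating phrases into "mathematical adjustments" might look like. To be clear, this is only a guess at the general approach; the phrases, adjustment values, and function names below are mine, not Sherwin-Williams' or Wunderman Thompson's,

```python
# Hypothetical sketch: mapping a few natural language tweaks to RGB adjustments.
# The phrases and numbers are illustrative guesses, not the actual Speaking in Color code.

def clamp(value):
    """Keep a colour channel within the valid 0-255 range."""
    return max(0, min(255, value))

def adjust_color(rgb, instruction):
    """Apply a crude, rule-based adjustment to an (R, G, B) tuple."""
    r, g, b = rgb
    instruction = instruction.lower()
    if "dimmer" in instruction or "darker" in instruction:
        r, g, b = (int(c * 0.85) for c in (r, g, b))   # reduce overall brightness
    elif "warmth" in instruction or "warmer" in instruction:
        r, b = r + 15, b - 15                          # shift toward red/orange
    elif "moodier" in instruction:
        r, g, b = (int(c * 0.7) for c in (r, g, b))    # darken more aggressively
    return tuple(clamp(c) for c in (r, g, b))

# Start from a sunset-ish orange and 'fine tune from there'.
color = (244, 140, 66)
for step in ["darker red", "add warmth", "make it moodier"]:
    color = adjust_color(color, step)
    print(step, "->", color)
```

A real system would presumably use a language model and a far richer colour space than these hard-coded rules, but the basic idea of turning words into arithmetic on colour values is the same.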

It seems to me Wunderman Thompson needs to rethink its Sherwin-Williams Speaking in Color promotional video (it’s embedded with Beer’s June 16, 2022 article, or you can find it here; scroll down about 50% of the way). You’ll note the color prompts are not spoken; they’re typed in as text, e.g., ‘crystal-clear Caribbean ocean’. So much for ‘speaking in color’. Still, the article aroused my curiosity, which is how I found this May 19, 2017 article by Annalee Newitz for Ars Technica highlighting another color/AI project (Note: A link has been removed),

At some point, we’ve all wondered about the incredibly strange names for paint colors. Research scientist and neural network goofball Janelle Shane took the wondering a step further. Shane decided to train a neural network to generate new paint colors, complete with appropriate names. The results are possibly the greatest work of artificial intelligence I’ve seen to date.

Writes Shane on her Tumblr, “For this experiment, I gave the neural network a list of about 7,700 Sherwin-Williams paint colors along with their RGB values. (RGB = red, green, and blue color values.) Could the neural network learn to invent new paint colors and give them attractive names?”

Shane told Ars that she chose a neural network algorithm called char-rnn, which predicts the next character in a sequence. So basically the algorithm was working on two tasks: coming up with sequences of letters to form color names, and coming up with sequences of numbers that map to an RGB value. As she checked in on the algorithm’s progress, she found that it was able to create colors long before it could actually name them reliably.

The longer it processed the dataset, the closer the algorithm got to making legit color names, though they were still mostly surreal: “Soreer Gray” is a kind of greenish color, and “Sane Green” is a purplish blue. When Shane cranked up “creativity” on the algorithm’s output, it gave her a violet color called “Dondarf” and a Kelly green called “Bylfgoam Glosd.” After churning through several more iterations of this process, Shane was able to get the algorithm to recognize some basic colors like red and gray, “though not reliably,” because she also gets a sky blue called “Gray Pubic” and a dark green called “Stoomy Brown.”
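If you’re curious what “predicting the next character in a sequence” means in practice, here’s a toy illustration I put together. It is not char-rnn (that’s a recurrent neural network); this is a far simpler character-level Markov chain, and the three training lines are invented, but it shows the same basic idea of learning colour names and RGB values one character at a time,

```python
# A toy character-level model in the spirit of char-rnn: it learns which character
# tends to follow which, then samples new "colour name + RGB" lines one character at a time.
# This is a simple Markov chain, not Shane's actual recurrent neural network,
# and the tiny training set below is invented for illustration.
import random
from collections import defaultdict

training_lines = [
    "Stormy Blue 64,86,110",
    "Sea Salt 205,210,202",
    "Dusty Rose 190,136,142",
]

# Count which character follows each character across the training text.
transitions = defaultdict(list)
for line in training_lines:
    text = line + "\n"                 # newline marks the end of a colour entry
    for current, following in zip(text, text[1:]):
        transitions[current].append(following)

def sample(start="S", max_length=40):
    """Generate one new line, character by character."""
    output = start
    while len(output) < max_length:
        choices = transitions.get(output[-1])
        if not choices:
            break
        next_char = random.choice(choices)
        if next_char == "\n":
            break
        output += next_char
    return output

print(sample())
```

With only three training lines the output is mostly gibberish, which is roughly the point: the early, under-trained stages of Shane’s real network produced names like the ones quoted above for much the same reason.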

Shane has since written a book about artificial intelligence (You Look Like a Thing and I Love You; How Artificial Intelligence Works and Why It’s Making the World a Weirder Place [2019]) and continues her investigations of AI. You can find her website and blog here and her Wikipedia entry here.

Should AI algorithms get patents for their inventions and is anyone talking about copyright for texts written by AI algorithms?

A couple of Australian academics have written a comment for the journal Nature, which bears the intriguing subtitle: “The patent system assumes that inventors are human. Inventions devised by machines require their own intellectual property law and an international treaty.” (For the curious, I’ve linked to a few of my previous posts touching on intellectual property [IP], specifically the patent’s fraternal twin, copyright, at the end of this piece.)

Before linking to the comment, here’s the May 27, 2022 University of New South Wales (UNSW) press release (also on EurekAlert but published May 30, 2022), which provides an overview of their thinking on the subject (Note: Links have been removed),

It’s not surprising these days to see new inventions that either incorporate or have benefitted from artificial intelligence (AI) in some way, but what about inventions dreamt up by AI – do we award a patent to a machine?

This is the quandary facing lawmakers around the world with a live test case in the works that its supporters say is the first true example of an AI system named as the sole inventor.

In commentary published in the journal Nature, two leading academics from UNSW Sydney examine the implications of patents being awarded to an AI entity.

Intellectual Property (IP) law specialist Associate Professor Alexandra George and AI expert, Laureate Fellow and Scientia Professor Toby Walsh argue that patent law as it stands is inadequate to deal with such cases and requires legislators to amend laws around IP and patents – laws that have been operating under the same assumptions for hundreds of years.

The case in question revolves around a machine called DABUS (Device for the Autonomous Bootstrapping of Unified Sentience) created by Dr Stephen Thaler, who is president and chief executive of US-based AI firm Imagination Engines. Dr Thaler has named DABUS as the inventor of two products – a food container with a fractal surface that helps with insulation and stacking, and a flashing light for attracting attention in emergencies.

For a short time in Australia, DABUS looked like it might be recognised as the inventor because, in late July 2021, a trial judge accepted Dr Thaler’s appeal against IP Australia’s rejection of the patent application five months earlier. But after the Commissioner of Patents appealed the decision to the Full Court of the Federal Court of Australia, the five-judge panel upheld the appeal, agreeing with the Commissioner that an AI system couldn’t be named the inventor.

A/Prof. George says the attempt to have DABUS awarded a patent for the two inventions instantly creates challenges for existing laws, which have only ever considered humans or entities comprised of humans as inventors and patent-holders.

“Even if we do accept that an AI system is the true inventor, the first big problem is ownership. How do you work out who the owner is? An owner needs to be a legal person, and an AI is not recognised as a legal person,” she says.

Ownership is crucial to IP law. Without it there would be little incentive for others to invest in the new inventions to make them a reality.

“Another problem with ownership when it comes to AI-conceived inventions, is even if you could transfer ownership from the AI inventor to a person: is it the original software writer of the AI? Is it a person who has bought the AI and trained it for their own purposes? Or is it the people whose copyrighted material has been fed into the AI to give it all that information?” asks A/Prof. George.

For obvious reasons

Prof. Walsh says what makes AI systems so different to humans is their capacity to learn and store so much more information than an expert ever could. One of the requirements of inventions and patents is that the product or idea is novel, not obvious and is useful.

“There are certain assumptions built into the law that an invention should not be obvious to a knowledgeable person in the field,” Prof. Walsh says.

“Well, what might be obvious to an AI won’t be obvious to a human because AI might have ingested all the human knowledge on this topic, way more than a human could, so the nature of what is obvious changes.”

Prof. Walsh says this isn’t the first time that AI has been instrumental in coming up with new inventions. In the area of drug development, a new antibiotic was created in 2019 – Halicin – that used deep learning to find a chemical compound that was effective against drug-resistant strains of bacteria.

“Halicin was originally meant to treat diabetes, but its effectiveness as an antibiotic was only discovered by AI that was directed to examine a vast catalogue of drugs that could be repurposed as antibiotics. So there’s a mixture of human and machine coming into this discovery.”

Prof. Walsh says in the case of DABUS, it’s not entirely clear whether the system is truly responsible for the inventions.

“There’s lots of involvement of Dr Thaler in these inventions, first in setting up the problem, then guiding the search for the solution to the problem, and then interpreting the result,” Prof. Walsh says.

“But it’s certainly the case that without the system, you wouldn’t have come up with the inventions.”

Change the laws

Either way, both authors argue that governing bodies around the world will need to modernise the legal structures that determine whether or not AI systems can be awarded IP protection. They recommend the introduction of a new ‘sui generis’ form of IP law – which they’ve dubbed ‘AI-IP’ – that would be specifically tailored to the circumstances of AI-generated inventiveness. This, they argue, would be more effective than trying to retrofit and shoehorn AI-inventiveness into existing patent laws.

Looking forward, after examining the legal questions around AI and patent law, the authors are currently working on answering the technical question of how AI is going to be inventing in the future.

Dr Thaler has sought ‘special leave to appeal’ the case concerning DABUS to the High Court of Australia. It remains to be seen whether the High Court will agree to hear it. Meanwhile, the case continues to be fought in multiple other jurisdictions around the world.

Here’s a link to and a citation for the paper,

Artificial intelligence is breaking patent law by Alexandra George & Toby Walsh. Comment, Nature Vol. 605 (26 May 2022), pp. 616-618; published online 24 May 2022. DOI: 10.1038/d41586-022-01391-x. ISSN 0028-0836 (print); ISSN 1476-4687 (online).

This paper appears to be open access.

The Journey

DABUS has gotten a patent in one jurisdiction, from an August 8, 2021 article on brandedequity.com,

The patent application listing DABUS as the inventor was filed in patent offices around the world, including the US, Europe, Australia, and South Africa. But only South Africa granted the patent (Australia followed suit a few days later after a court judgment gave the go-ahead [and rejected it several months later]).

Natural person?

This September 27, 2021 article by Miguel Bibe for Inventa covers some of the same ground, adding some discussion of the ‘natural person’ problem,

The patent is for “a food container based on fractal geometry”, and was accepted by the CIPC [Companies and Intellectual Property Commission] on June 24, 2021. The notice of issuance was published in the July 2021 “Patent Journal”.  

South Africa does not have a substantive patent examination system and, instead, requires applicants to merely complete a filing for their inventions. This means that South Africa patent laws do not provide a definition for “inventor” and the office only proceeds with a formal examination in order to confirm if the paperwork was filled correctly.

… according to a press release issued by the University of Surrey: “While patent law in many jurisdictions is very specific in how it defines an inventor, the DABUS team is arguing that the status quo is not fit for purpose in the Fourth Industrial Revolution.”

On the other hand, this may not be considered as a victory for the DABUS team since several doubts and questions remain as to who should be considered the inventor of the patent. Current IP laws in many jurisdictions follow the traditional term of “inventor” as being a “natural person”, and there is no legal precedent in the world for inventions created by a machine.

August 2022 update

Mike Masnick in an August 15, 2022 posting on Techdirt provides the latest information on Stephen Thaler’s efforts to have patents and copyrights awarded to his AI entity, DABUS,

Stephen Thaler is a man on a mission. It’s not a very good mission, but it’s a mission. He created something called DABUS (Device for the Autonomous Bootstrapping of Unified Sentience) and claims that it’s creating things, for which he has tried to file for patents and copyrights around the globe, with his mission being to have DABUS named as the inventor or author. This is dumb for many reasons. The purpose of copyright and patents are to incentivize the creation of these things, by providing to the inventor or author a limited time monopoly, allowing them to, in theory, use that monopoly to make some money, thereby making the entire inventing/authoring process worthwhile. An AI doesn’t need such an incentive. And this is why patents and copyright only are given to persons and not animals or AI.

… Thaler’s somewhat quixotic quest continues to fail. The EU Patent Office rejected his application. The Australian patent office similarly rejected his request. In that case, a court sided with Thaler after he sued the Australian patent office, and said that his AI could be named as an inventor, but thankfully an appeals court set aside that ruling a few months ago. In the US, Thaler/DABUS keeps on losing as well. Last fall, he lost in court as he tried to overturn the USPTO ruling, and then earlier this year, the US Copyright Office also rejected his copyright attempt (something it has done a few times before). In June, he sued the Copyright Office over this, which seems like a long shot.

And now, he’s also lost his appeal of the ruling in the patent case. CAFC, the Court of Appeals for the Federal Circuit — the appeals court that handles all patent appeals — has rejected Thaler’s request just like basically every other patent and copyright office, and nearly all courts.

If you have the time, the August 15, 2022 posting is an interesting read.

Consciousness and ethical AI

Just to make things more fraught, an engineer at Google has claimed that one of their AI chatbots has consciousness. From a June 16, 2022 article (in Canada’s National Post [previewed on epaper]) by Patrick McGee,

Google has ignited a social media firestorm on the nature of consciousness after placing an engineer on paid leave over his belief that the tech group’s chatbot has become “sentient.”

Blake Lemoine, a senior software engineer in Google’s Responsible AI unit, did not receive much attention when he wrote a Medium post saying he “may be fired soon for doing AI ethics work.”

But a Saturday [June 11, 2022] profile in the Washington Post characterized Lemoine as “the Google engineer who thinks the company’s AI has come to life.”

This is not the first time that Google has run into a problem with ethics and AI. Famously, Timnit Gebru, who co-led (with Margaret Mitchell) Google’s ethics and AI unit, departed in 2020. Gebru said (and maintains to this day) she was fired. They said she was ?; they never did make a final statement, although after an investigation Gebru did receive an apology. You *can* read more about Gebru and the issues she brought to light in her Wikipedia entry. Coincidentally (or not), Margaret Mitchell was terminated/fired from Google in February 2021 after criticizing the company over Gebru’s ‘firing’. See a February 19, 2021 article by Megan Rose Dickey for TechCrunch for details about Margaret Mitchell’s termination, which the company has admitted was a firing.

Getting back to intellectual property and AI.

What about copyright?

There’s no mention of copyright in the earliest material I have here about the ‘creative’ arts and artificial intelligence, “Writing and AI or is a robot writing this blog?” posted July 16, 2014. More recently, there’s “Beer and wine reviews, the American Chemical Society’s (ACS) AI editors, and the Turing Test” posted May 20, 2022. The type of writing featured is not literary or what’s typically considered creative writing.

On the more creative front, there’s “True love with AI (artificial intelligence): The Nature of Things explores emotional and creative AI (long read)” posted on December 3, 2021. The literary/creative portion of the post can be found under the ‘AI and creativity’ subhead, approximately 30% of the way down, where I mention Douglas Coupland. Again, there’s no mention of copyright.

It’s with the visual arts that copyright gets mentioned. The first one I can find here is “Robot artists—should they get copyright protection” posted on July 10, 2017.

Fun fact: Andres Guadamuz, who was mentioned in my posting, took to his own blog where he gave my blog a shout out while implying that I wasn’t thoughtful. The gist of his August 8, 2017 posting was that he was misunderstood by many people, which led to the title for his post, “Should academics try to engage the public?” Thankfully, he soldiers on trying to educate us with his TechnoLlama blog.

Lastly, there’s this August 16, 2019 posting “AI (artificial intelligence) artist got a show at a New York City art gallery” where you can scroll down to the ‘What about intellectual property?’ subhead about 80% of the way.

You look like a thing …

I am recommending a book for anyone who’d like to learn a little more about how artificial intelligence (AI) works, “You look like a thing and I love you; How Artificial Intelligence Works and Why It’s Making the World a Weirder Place” by Janelle Shane (2019).

It does not require an understanding of programming/coding/algorithms/etc.; Shane makes the subject as accessible as possible and gives you insight into why the term ‘artificial stupidity’ is more applicable than you might think. You can find Shane’s website here and you can find her 10 minute TED talk here.

*’can’ added to sentence on May 12, 2023.

Ghosts, mechanical turks, and pseudo-AI (artificial intelligence)—Is it all a con game?

There’s been more than one artificial intelligence (AI) story featured here on this blog, but the ones featured in this posting are the first I’ve stumbled across that suggest the hype is even more exaggerated than the most cynical might have thought. (BTW, the 2019 material comes later as I have taken a chronological approach to this posting.)

It seems a lot of companies touting their AI algorithms and capabilities are relying on human beings to do the work. From a July 6, 2018 article by Olivia Solon for the Guardian (Note: A link has been removed),

It’s hard to build a service powered by artificial intelligence. So hard, in fact, that some startups have worked out it’s cheaper and easier to get humans to behave like robots than it is to get machines to behave like humans.

“Using a human to do the job lets you skip over a load of technical and business development challenges. It doesn’t scale, obviously, but it allows you to build something and skip the hard part early on,” said Gregory Koberger, CEO of ReadMe, who says he has come across a lot of “pseudo-AIs”.

“It’s essentially prototyping the AI with human beings,” he said.

In 2017, the business expense management app Expensify admitted that it had been using humans to transcribe at least some of the receipts it claimed to process using its “smartscan technology”. Scans of the receipts were being posted to Amazon’s Mechanical Turk crowdsourced labour tool, where low-paid workers were reading and transcribing them.

“I wonder if Expensify SmartScan users know MTurk workers enter their receipts,” said Rochelle LaPlante, a “Turker” and advocate for gig economy workers on Twitter. “I’m looking at someone’s Uber receipt with their full name, pick-up and drop-off addresses.”

Even Facebook, which has invested heavily in AI, relied on humans for its virtual assistant for Messenger, M.

In some cases, humans are used to train the AI system and improve its accuracy. …

The Turk

Fooling people with machines that seem intelligent is not new according to a Sept. 10, 2018 article by Seth Stevenson for Slate.com (Note: Links have been removed),

It’s 1783, and Paris is gripped by the prospect of a chess match. One of the contestants is François-André Philidor, who is considered the greatest chess player in Paris, and possibly the world. Everyone is so excited because Philidor is about to go head-to-head with the other biggest sensation in the chess world at the time.

But his opponent isn’t a man. And it’s not a woman, either. It’s a machine.

This story may sound a lot like Garry Kasparov taking on Deep Blue, IBM’s chess-playing supercomputer. But that was only a couple of decades ago, and this chess match in Paris happened more than 200 years ago. It doesn’t seem like a robot that can play chess would even be possible in the 1780s. This machine playing against Philidor was making an incredible technological leap—playing chess, and not only that, but beating humans at chess.

In the end, it didn’t quite beat Philidor, but the chess master called it one of his toughest matches ever. It was so hard for Philidor to get a read on his opponent, which was a carved wooden figure—slightly larger than life—wearing elaborate garments and offering a cold, mean stare.

It seems like the minds of the era would have been completely blown by a robot that could nearly beat a human chess champion. Some people back then worried that it was black magic, but many folks took the development in stride. …

Debates about the hottest topic in technology today—artificial intelligence—didn’t start in the 1940s, with people like Alan Turing and the first computers. It turns out that the arguments about AI go back much further than you might imagine. The story of the 18th-century chess machine turns out to be one of those curious tales from history that can help us understand technology today, and where it might go tomorrow.

[In future episodes of our podcast, Secret History of the Future] we’re going to look at the first cyberattack, which happened in the 1830s, and find out how the Victorians invented virtual reality.

Philidor’s opponent was known as The Turk or Mechanical Turk and that ‘machine’ was in fact a masterful hoax as The Turk held a hidden compartment from which a human being directed his moves.

People pretending to be AI agents

It seems that today’s AI has something in common with the 18th century Mechanical Turk: there are often humans lurking in the background making things work. From a Sept. 4, 2018 article by Janelle Shane for Slate.com (Note: Links have been removed),

Every day, people are paid to pretend to be bots.

In a strange twist on “robots are coming for my job,” some tech companies that boast about their artificial intelligence have found that at small scales, humans are a cheaper, easier, and more competent alternative to building an A.I. that can do the task.

Sometimes there is no A.I. at all. The “A.I.” is a mockup powered entirely by humans, in a “fake it till you make it” approach used to gauge investor interest or customer behavior. Other times, a real A.I. is combined with human employees ready to step in if the bot shows signs of struggling. These approaches are called “pseudo-A.I.” or sometimes, more optimistically, “hybrid A.I.”

Although some companies see the use of humans for “A.I.” tasks as a temporary bridge, others are embracing pseudo-A.I. as a customer service strategy that combines A.I. scalability with human competence. They’re advertising these as “hybrid A.I.” chatbots, and if they work as planned, you will never know if you were talking to a computer or a human. Every remote interaction could turn into a form of the Turing test. So how can you tell if you’re dealing with a bot pretending to be a human or a human pretending to be a bot?

One of the ways you can’t tell anymore is by looking for human imperfections like grammar mistakes or hesitations. In the past, chatbots had prewritten bits of dialogue that they could mix and match according to built-in rules. Bot speech was synonymous with precise formality. In early Turing tests, spelling mistakes were often a giveaway that the hidden speaker was a human. Today, however, many chatbots are powered by machine learning. Instead of using a programmer’s rules, these algorithms learn by example. And many training data sets come from services like Amazon’s Mechanical Turk, which lets programmers hire humans from around the world to generate examples of tasks like asking and answering questions. These data sets are usually full of casual speech, regionalisms, or other irregularities, so that’s what the algorithms learn. It’s not uncommon these days to get algorithmically generated image captions that read like text messages. And sometimes programmers deliberately add these things in, since most people don’t expect imperfections of an algorithm. In May, Google’s A.I. assistant made headlines for its ability to convincingly imitate the “ums” and “uhs” of a human speaker.

Limited computing power is the main reason that bots are usually good at just one thing at a time. Whenever programmers try to train machine learning algorithms to handle additional tasks, they usually get algorithms that can do many tasks rather badly. In other words, today’s algorithms are artificial narrow intelligence, or A.N.I., rather than artificial general intelligence, or A.G.I. For now, and for many years in the future, any algorithm or chatbot that claims A.G.I-level performance—the ability to deal sensibly with a wide range of topics—is likely to have humans behind the curtain.

Another bot giveaway is a very poor memory. …

Bringing AI to life: ghosts

Sidney Fussell’s April 15, 2019 article for The Atlantic provides more detail about the human/AI interface as found in some Amazon products such as Alexa (a voice-control system),

… Alexa-enabled speakers can and do interpret speech, but Amazon relies on human guidance to make Alexa, well, more human—to help the software understand different accents, recognize celebrity names, and respond to more complex commands. This is true of many artificial intelligence–enabled products. They’re prototypes. They can only approximate their promised functions while humans help with what Harvard researchers have called “the paradox of automation’s last mile.” Advancements in AI, the researchers write, create temporary jobs such as tagging images or annotating clips, even as the technology is meant to supplant human labor. In the case of the Echo, gig workers are paid to improve its voice-recognition software—but then, when it’s advanced enough, it will be used to replace the hostess in a hotel lobby.

A 2016 paper by researchers at Stanford University used a computer vision system to infer, with 88 percent accuracy, the political affiliation of 22 million people based on what car they drive and where they live. Traditional polling would require a full staff, a hefty budget, and months of work. The system completed the task in two weeks. But first, it had to know what a car was. The researchers paid workers through Amazon’s Mechanical Turk [emphasis mine] platform to manually tag thousands of images of cars, so the system would learn to differentiate between shapes, styles, and colors.

It may be a rude awakening for Amazon Echo owners, but AI systems require enormous amounts of categorized data, before, during, and after product launch. …

Isn’t it interesting that Amazon also has a crowdsourcing marketplace for its own products? Calling it ‘Mechanical Turk’ after a famous 18th century hoax would suggest a dark sense of humour somewhere in the corporation. (You can find out more about the Amazon Mechanical Turk on this Amazon website and in its Wikipedia entry.)

Anthropologist Mary L. Gray has coined the phrase ‘ghost work’ for the work that humans perform but for which AI gets the credit. Angela Chan’s May 13, 2019 article for The Verge features Gray as she promotes her latest book, written with Siddharth Suri, ‘Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass’ (Note: A link has been removed),

“Ghost work” is anthropologist Mary L. Gray’s term for the invisible labor that powers our technology platforms. When Gray, a senior researcher at Microsoft Research, first arrived at the company, she learned that building artificial intelligence requires people to manage and clean up data to feed to the training algorithms. “I basically started asking the engineers and computer scientists around me, ‘Who are the people you pay to do this task work of labeling images and classification tasks and cleaning up databases?’” says Gray. Some people said they didn’t know. Others said they didn’t want to know and were concerned that if they looked too closely they might find unsavory working conditions.

So Gray decided to find out for herself. Who are the people, often invisible, who pick up the tasks necessary for these platforms to run? Why do they do this work, and why do they leave? What are their working conditions?

The interview that follows is interesting although it doesn’t seem to me that the question about working conditions is answered in any great detail. However, there is this rather interesting policy suggestion,

If companies want to happily use contract work because they need to constantly churn through new ideas and new aptitudes, the only way to make that a good thing for both sides of that enterprise is for people to be able to jump into that pool. And people do that when they have health care and other provisions. This is the business case for universal health care, for universal education as a public good. It’s going to benefit all enterprise.

I want to get across to people that, in a lot of ways, we’re describing work conditions. We’re not describing a particular type of work. We’re describing today’s conditions for project-based task-driven work. This can happen to everybody’s jobs, and I hate that that might be the motivation because we should have cared all along, as this has been happening to plenty of people. For me, the message of this book is: let’s make this not just manageable, but sustainable and enjoyable. Stop making our lives wrap around work, and start making work serve our lives.

Puts a different spin on AI and work, doesn’t it?

Machine learning, neural networks, and knitting

In a recent (Tuesday, March 6, 2018) live stream ‘conversation’ (‘Science in Canada; Investing in Canadian Innovation’ now published on YouTube) between Canadian Prime Minister, Justin Trudeau, and US science communicator, Bill Nye, at the University of Ottawa, they discussed, amongst many other topics, what AI (artificial intelligence) can and can’t do. They seemed to agree that AI can’t be creative, i.e., write poetry, create works of art, make jokes, etc., a conclusion which is (in my opinion) both true and not true.

There are times when I think the joke may be on us (humans). Take for example this March 6, 2018 story by Alexis Madrigal for The Atlantic magazine (Note: Links have been removed),

SkyKnit: How an AI Took Over an Adult Knitting Community

Ribald knitters teamed up with a neural-network creator to generate new types of tentacled, cozy shapes.

Janelle Shane is a humorist [Note: She describes herself as a “Research Scientist in optics. Plays with neural networks. …” in her Twitter bio.] who creates and mines her material from neural networks, the form of machine learning that has come to dominate the field of artificial intelligence over the last half-decade.

Perhaps you’ve seen the candy-heart slogans she generated for Valentine’s Day: DEAR ME, MY MY, LOVE BOT, CUTE KISS, MY BEAR, and LOVE BUN.

Or her new paint-color names: Parp Green, Shy Bather, Farty Red, and Bull Cream.

Or her neural-net-generated Halloween costumes: Punk Tree, Disco Monster, Spartan Gandalf, Starfleet Shark, and A Masked Box.

Her latest project, still ongoing, pushes the joke into a new, physical realm. Prodded by a knitter on the knitting forum Ravelry, Shane trained a type of neural network on a series of over 500 sets of knitting instructions. Then, she generated new instructions, which members of the Ravelry community have actually attempted to knit.

“The knitting project has been a particularly fun one so far just because it ended up being a dialogue between this computer program and these knitters that went over my head in a lot of ways,” Shane told me. “The computer would spit out a whole bunch of instructions that I couldn’t read and the knitters would say, this is the funniest thing I’ve ever read.”

It appears that the project evolved,

The human-machine collaboration created configurations of yarn that you probably wouldn’t give to your in-laws for Christmas, but they were interesting. The user citikas was the first to post a try at one of the earliest patterns, “reverss shawl.” It was strange, but it did have some charisma.

Shane nicknamed the whole effort “Project Hilarious Disaster.” The community called it SkyKnit.

I’m not sure what’s meant by “community” as mentioned in the previous excerpt. Are we talking about humans only, AI only, or both humans and AI?

Here’s some of what underlies SkyKnit (Note: Links have been removed),

The different networks all attempt to model the data they’ve been fed by tuning a vast, funky flowchart. After you’ve created a statistical model that describes your real data, you can also roll the dice and generate new, never-before-seen data of the same kind.

How this works—like, the math behind it—is very hard to visualize because values inside the model can have hundreds of dimensions and we are humble three-dimensional creatures moving through time. But as the neural-network enthusiast Robin Sloan puts it, “So what? It turns out imaginary spaces are useful even if you can’t, in fact, imagine them.”

Out of that ferment, a new kind of art has emerged. Its practitioners use neural networks not to attain practical results, but to see what’s lurking in these vast, opaque systems. What did the machines learn about the world as they attempted to understand the data they’d been fed? Famously, Google released DeepDream, which produced trippy visualizations that also demonstrated how that type of neural network processed the textures and objects in its source imagery.

Madrigal’s article is well worth reading if you have the time. You can also supplement Madrigal’s piece with an August 9, 2017 article about Janelle Shane’s algorithmic experiments by Jacob Brogan for slate.com.

I found some SkyKnit examples on Ravelry including this one from the Dollybird Workshop,

© Chatelaine

SkyKnit fancy addite rifopshent
by SkyKnit

Published in: Dollybird Workshop, SkyKnit
Craft: Knitting
Category: Stitch pattern
Published: February 2018
Yarn weight: Fingering (14 wpi)
Gauge: 24 stitches and 30 rows = 4 inches in stockinette stitch
Needle size: US 4 – 3.5 mm

This pattern is available as a free Ravelry download

SkyKnit is a type of machine learning algorithm called an artificial neural network. Its creator, Janelle Shane of AIweirdness.com, gave it 88,000 lines of knitting instructions from Stitch-Maps.com and Ravelry, and it taught itself how to make new patterns. Join the discussion!

SkyKnit seems to have created something that has parallel columns, and is reversible. Perhaps a scarf?

Test-knitting & image courtesy of Chatelaine

Patterns may include notes from testknitters; yarn, needles, and gauge are totally at your discretion.

About the designer
SkyKnit’s favorites include lace, tentacles, and totally not the elimination of the human race.
For more information, see: http://aiweirdness.com/

Shane’s website, aiweirdness.com, is where she posts musings such as this (from a March 2, [?] 2018 posting), Note: A link has been removed,

If you’ve been on the internet today, you’ve probably interacted with a neural network. They’re a type of machine learning algorithm that’s used for everything from language translation to finance modeling. One of their specialties is image recognition. Several companies – including Google, Microsoft, IBM, and Facebook – have their own algorithms for labeling photos. But image recognition algorithms can make really bizarre mistakes.

[Image from Shane’s posting, showing the caption and tags added by Microsoft Azure’s computer vision API]

Microsoft Azure’s computer vision API [application programming interface] added the above caption and tags. But there are no sheep in the image above. None. I zoomed all the way in and inspected every speck.

….
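For anyone wondering what “added the above caption and tags” looks like from the programmer’s side, here’s a rough sketch of calling an image-analysis service over REST. The endpoint path, API version, and header name reflect my reading of Azure’s Computer Vision documentation, so treat them as assumptions and check the current docs before relying on any of it,

```python
# Hedged sketch: asking a cloud image-recognition service to caption and tag a photo.
# The endpoint path, API version, and header name are assumptions based on my reading
# of Azure's Computer Vision REST API; verify against current documentation.
import requests

ENDPOINT = "https://YOUR-RESOURCE.cognitiveservices.azure.com"  # placeholder resource name
API_KEY = "YOUR-KEY"                                            # placeholder key

def caption_and_tag(image_url):
    """Request a caption and tags for an image hosted at image_url."""
    response = requests.post(
        f"{ENDPOINT}/vision/v3.2/analyze",
        params={"visualFeatures": "Description,Tags"},
        headers={"Ocp-Apim-Subscription-Key": API_KEY},
        json={"url": image_url},
    )
    response.raise_for_status()
    analysis = response.json()
    caption = analysis["description"]["captions"][0]["text"]
    tags = [tag["name"] for tag in analysis["tags"]]
    return caption, tags

# Example (hypothetical URL): the model may confidently report sheep
# in a completely sheep-free landscape.
# print(caption_and_tag("https://example.com/green-hillside.jpg"))
```

The point of Shane’s example, of course, is that however tidy the API call, the labels that come back can still be confidently wrong.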

I have become quite interested in Shane’s self-descriptions, such as this one from the aiweirdness.com website,


About

I train neural networks, a type of machine learning algorithm, to write unintentional humor as they struggle to imitate human datasets. Well, I intend the humor. The neural networks are just doing their best to understand what’s going on. Currently located on the occupied land of the Arapahoe Nation.
https://wandering.shop/@janellecshane

As for the joke being on us, I can’t help remembering the Facebook bots that developed their own language (Facebotlish), which were featured in my June 30, 2017 posting. There’s a certain eeriness to it all, which seems an appropriate response in a year celebrating the 200th anniversary of Mary Shelley’s 1818 book, Frankenstein; or, The Modern Prometheus. I’m closing with a video clip from the 1931 movie,

Happy Weekend!