Tag Archives: Bots

A potpourri of robot/AI stories: killers, kindergarten teachers, a Balenciaga-inspired AI fashion designer, a conversational android, and more

Following on my August 29, 2018 post (Sexbots, sexbot ethics, families, and marriage), here is a more general piece.

Robots, AI (artificial intelligence), and androids (humanoid robots): the terms can be confusing since there’s a tendency to use them interchangeably. Confession: I do it too, but not this time. That said, I have multiple news bits.

Killer ‘bots and ethics

The U.S. military is already testing a Modular Advanced Armed Robotic System. Credit: Lance Cpl. Julien Rodarte, U.S. Marine Corps

That is a robot.

For the purposes of this posting, a robot is a piece of hardware which may or may not include an AI system and which does not mimic a human or other biological organism so closely that you might, under some circumstances, mistake it for one.

As for what precipitated this feature (in part), there was a United Nations meeting held in Geneva, Switzerland from August 27 – 31, 2018 about war and the use of autonomous robots, i.e., robots equipped with AI systems and designed for independent action. BTW, it’s not the first meeting the UN has held on this topic.

Bonnie Docherty, lecturer on law and associate director of armed conflict and civilian protection at the International Human Rights Clinic, Harvard Law School, has written an August 21, 2018 essay on The Conversation (also on phys.org) describing the history of and current rules around the conduct of war, as well as outlining the issues with the military use of autonomous robots (Note: Links have been removed),

When drafting a treaty on the laws of war at the end of the 19th century, diplomats could not foresee the future of weapons development. But they did adopt a legal and moral standard for judging new technology not covered by existing treaty language.

This standard, known as the Martens Clause, has survived generations of international humanitarian law and gained renewed relevance in a world where autonomous weapons are on the brink of making their own determinations about whom to shoot and when. The Martens Clause calls on countries not to use weapons that depart “from the principles of humanity and from the dictates of public conscience.”

I was the lead author of a new report by Human Rights Watch and the Harvard Law School International Human Rights Clinic that explains why fully autonomous weapons would run counter to the principles of humanity and the dictates of public conscience. We found that to comply with the Martens Clause, countries should adopt a treaty banning the development, production and use of these weapons.

Representatives of more than 70 nations will gather from August 27 to 31 [2018] at the United Nations in Geneva to debate how to address the problems with what they call lethal autonomous weapon systems. These countries, which are parties to the Convention on Conventional Weapons, have discussed the issue for five years. My co-authors and I believe it is time they took action and agreed to start negotiating a ban next year.

Docherty elaborates on her points (Note: A link has been removed),

The Martens Clause provides a baseline of protection for civilians and soldiers in the absence of specific treaty law. The clause also sets out a standard for evaluating new situations and technologies that were not previously envisioned.

Fully autonomous weapons, sometimes called “killer robots,” would select and engage targets without meaningful human control. They would be a dangerous step beyond current armed drones because there would be no human in the loop to determine when to fire and at what target. Although fully autonomous weapons do not yet exist, China, Israel, Russia, South Korea, the United Kingdom and the United States are all working to develop them. They argue that the technology would process information faster and keep soldiers off the battlefield.

The possibility that fully autonomous weapons could soon become a reality makes it imperative for those and other countries to apply the Martens Clause and assess whether the technology would offend basic humanity and the public conscience. Our analysis finds that fully autonomous weapons would fail the test on both counts.

I encourage you to read the essay in its entirety. For anyone who thinks the discussion about ethics and killer ‘bots is new or limited to military use, there’s my July 25, 2016 posting about police use of a robot in Dallas, Texas. (I imagine the discussion predates 2016 but that’s the earliest instance I have here.)

Teacher bots

Robots come in many forms and this one is on the humanoid end of the spectrum,

Children watch a Keeko robot at the Yiswind Institute of Multicultural Education in Beijing, where the intelligent machines are telling stories and challenging kids with logic problems [downloaded from https://phys.org/news/2018-08-robot-teachers-invade-chinese-kindergartens.html]

Don’t those ‘eyes’ look almost heart-shaped? No wonder the kids love these robots, if an August 29, 2018 news item on phys.org can be believed,

The Chinese kindergarten children giggled as they worked to solve puzzles assigned by their new teaching assistant: a roundish, short educator with a screen for a face.

Just under 60 centimetres (two feet) high, the autonomous robot named Keeko has been a hit in several kindergartens, telling stories and challenging children with logic problems.

Round and white with a tubby body, the armless robot zips around on tiny wheels, its inbuilt cameras doubling up both as navigational sensors and a front-facing camera allowing users to record video journals.

In China, robots are being developed to deliver groceries, provide companionship to the elderly, dispense legal advice and now, as Keeko’s creators hope, join the ranks of educators.

At the Yiswind Institute of Multicultural Education on the outskirts of Beijing, the children have been tasked to help a prince find his way through a desert—by putting together square mats that represent a path taken by the robot—part storytelling and part problem-solving.

Each time they get an answer right, the device reacts with delight, its face flashing heart-shaped eyes.

“Education today is no longer a one-way street, where the teacher teaches and students just learn,” said Candy Xiong, a teacher trained in early childhood education who now works with Keeko Robot Xiamen Technology as a trainer.

“When children see Keeko with its round head and body, it looks adorable and children love it. So when they see Keeko, they almost instantly take to it,” she added.

Keeko robots have entered more than 600 kindergartens across the country with its makers hoping to expand into Greater China and Southeast Asia.

Beijing has invested money and manpower in developing artificial intelligence as part of its “Made in China 2025” plan, with a Chinese firm last year unveiling the country’s first human-like robot that can hold simple conversations and make facial expressions.

According to the International Federation of Robots, China has the world’s top industrial robot stock, with some 340,000 units in factories across the country engaged in manufacturing and the automotive industry.

Moving on from hardware/software to a software-only story.

AI fashion designer better than Balenciaga?

Despite the title of Katharine Schwab’s August 22, 2018 article for Fast Company, I don’t think this AI designer is better than Balenciaga. But from the pictures I’ve seen, the designs are as good, and the neural network behind them presents some intriguing possibilities (Note: Links have been removed),

The AI, created by researcher Robbie Barat, has created an entire collection based on Balenciaga’s previous styles. There’s a fabulous pink and red gradient jumpsuit that wraps all the way around the model’s feet–like a onesie for fashionistas–paired with a dark slouchy coat. There’s a textural color-blocked dress, paired with aqua-green tights. And for menswear, there’s a multi-colored, shimmery button-up with skinny jeans and mismatched shoes. None of these looks would be out of place on the runway.

To create the styles, Barat collected images of Balenciaga’s designs via the designer’s lookbooks, ad campaigns, runway shows, and online catalog over the last two months, and then used them to train the pix2pix neural net. While some of the images closely resemble humans wearing fashionable clothes, many others are a bit off–some models are missing distinct limbs, and don’t get me started on how creepy [emphasis mine] their faces are. Even if the outfits aren’t quite ready to be fabricated, Barat thinks that designers could potentially use a tool like this to find inspiration. Because it’s not constrained by human taste, style, and history, the AI comes up with designs that may never occur to a person. “I love how the network doesn’t really understand or care about symmetry,” Barat writes on Twitter.
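Barat’s pipeline, as Schwab describes it (collect paired images, train pix2pix to map one to the other), rests on supervised learning of a mapping from examples. The sketch below is a loose, much-simplified stand-in for that idea only: a linear mapping learned from input/output pairs by gradient descent. It is emphatically not the pix2pix architecture (which uses a convolutional generator plus an adversarial loss), and all the data here is synthetic.

```python
# Toy illustration of learning a mapping from paired examples,
# the core idea behind image-to-image translation (NOT pix2pix itself).
import random

random.seed(0)

# Hypothetical paired data: each input vector x has a known target output.
def true_mapping(x):
    return [2.0 * x[0] - 1.0 * x[1], 0.5 * x[0] + 3.0 * x[1]]

pairs = []
for _ in range(200):
    x = [random.uniform(-1, 1), random.uniform(-1, 1)]
    pairs.append((x, true_mapping(x)))

# Model: a 2x2 weight matrix, adjusted by gradient descent on squared error.
W = [[0.0, 0.0], [0.0, 0.0]]
lr = 0.1
for epoch in range(300):
    for x, y in pairs:
        pred = [W[0][0] * x[0] + W[0][1] * x[1],
                W[1][0] * x[0] + W[1][1] * x[1]]
        err = [pred[0] - y[0], pred[1] - y[1]]
        for i in range(2):
            for j in range(2):
                W[i][j] -= lr * err[i] * x[j]  # gradient of 0.5 * err^2

print(W)  # approaches [[2.0, -1.0], [0.5, 3.0]]
```

The same learn-from-pairs principle, scaled up to images and combined with an adversarial loss, is what lets the network generalize beyond the lookbook photos it was trained on.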

You can see the ‘creepy’ faces and some of the designs here,

Image: Robbie Barat

In contrast to the previous two stories, this is all about algorithms; no machinery with independent movement (robot hardware) needed.

Conversational android: Erica

Hiroshi Ishiguro and his lifelike (definitely humanoid) robots have featured here many, many times before, most recently in a March 27, 2017 posting about his and his android’s participation at the 2017 SXSW festival.

His latest work is featured in an August 21, 2018 news item on ScienceDaily,

We’ve all tried talking with devices, and in some cases they talk back. But, it’s a far cry from having a conversation with a real person.

Now a research team from Kyoto University, Osaka University, and the Advanced Telecommunications Research Institute, or ATR, have significantly upgraded the interaction system for conversational android ERICA, giving her even greater dialog skills.

ERICA is an android created by Hiroshi Ishiguro of Osaka University and ATR, specifically designed for natural conversation through incorporation of human-like facial expressions and gestures. The research team demonstrated the updates during a symposium at the National Museum of Emerging Science in Tokyo.

Here’s the latest conversational android, Erica

Caption: The experimental set up when the subject (left) talks with ERICA (right) Credit: Kyoto University / Kawahara lab

An August 20, 2018 Kyoto University press release on EurekAlert, which originated the news item, offers more details,

“When we talk to one another, it’s never a simple back and forth progression of information,” states Tatsuya Kawahara of Kyoto University’s Graduate School of Informatics, and an expert in speech and audio processing.

“Listening is active. We express agreement by nodding or saying ‘uh-huh’ to maintain the momentum of conversation. This is called ‘backchanneling’, and is something we wanted to implement with ERICA.”

The team also focused on developing a system for ‘attentive listening’. This is when a listener asks elaborating questions, or repeats the last word of the speaker’s sentence, allowing for more engaging dialogue.

Deploying a series of distance sensors, facial recognition cameras, and microphone arrays, the team began collecting data on parameters necessary for a fluid dialog between ERICA and a human subject.

“We looked at three qualities when studying backchanneling,” continues Kawahara. “These were: timing — when a response happens; lexical form — what is being said; and prosody, or how the response happens.”

Responses were generated through machine learning using a counseling dialogue corpus, resulting in dramatically improved dialog engagement. Testing in five-minute sessions with a human subject, ERICA demonstrated significantly more dynamic speaking skill, including the use of backchanneling, partial repeats, and statement assessments.

“Making a human-like conversational robot is a major challenge,” states Kawahara. “This project reveals how much complexity there is in listening, which we might consider mundane. We are getting closer to a day where a robot can pass a Total Turing Test.”
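The three backchanneling qualities Kawahara names (timing, lexical form, and prosody) suggest how even a toy listener might be structured. The sketch below is purely illustrative: every threshold and canned response is my own invention, not anything from ERICA’s machine-learned system, and prosody is omitted since it concerns how speech is realized acoustically.

```python
# A toy rule-based "attentive listener" illustrating two of the three
# backchannel qualities: timing (when to respond) and lexical form (what
# to say). Thresholds and phrases are invented for illustration only.
def backchannel(utterance, pause_seconds):
    """Return a listener response given a finished utterance and the pause after it."""
    if pause_seconds < 0.3:            # timing: too soon, stay silent
        return ""
    words = utterance.rstrip(".?!").split()
    if utterance.endswith("?"):        # a question needs an answer, not a backchannel
        return "Hmm, let me think."
    if pause_seconds > 1.0 and words:  # lexical form: partial repeat invites elaboration
        return words[-1] + "?"
    return "uh-huh"                    # default continuer keeps the momentum going

print(backchannel("I went hiking last weekend.", 1.2))   # → "weekend?"
print(backchannel("It was raining the whole time.", 0.5))  # → "uh-huh"
```

ERICA, of course, learns these behaviours from a counseling dialogue corpus rather than from hand-written rules; the sketch only shows why timing and lexical form are separate design decisions.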

Erica seems to have been first introduced publicly in Spring 2017, from an April 2017 Erica: Man Made webpage on The Guardian website,

Erica is 23. She has a beautiful, neutral face and speaks with a synthesised voice. She has a degree of autonomy – but can’t move her hands yet. Hiroshi Ishiguro is her ‘father’ and the bad boy of Japanese robotics. Together they will redefine what it means to be human and reveal that the future is closer than we might think.

Hiroshi Ishiguro and his colleague Dylan Glas are interested in what makes a human. Erica is their latest creation – a semi-autonomous android, the product of the most funded scientific project in Japan. But these men regard themselves as artists more than scientists, and the Erica project – the result of a collaboration between Osaka and Kyoto universities and the Advanced Telecommunications Research Institute International – is a philosophical one as much as technological one.

Erica is interviewed about her hope and dreams – to be able to leave her room and to be able to move her arms and legs. She likes to chat with visitors and has one of the most advanced speech synthesis systems yet developed. Can she be regarded as being alive or as a comparable being to ourselves? Will she help us to understand ourselves and our interactions as humans better?

Erica and her creators are interviewed in the science fiction atmosphere of Ishiguro’s laboratory, and this film asks how we might form close relationships with robots in the future. Ishiguro thinks that for Japanese people especially, everything has a soul, whether human or not. If we don’t understand how human hearts, minds and personalities work, can we truly claim that humans have authenticity that machines don’t?

Ishiguro and Glas want to release Erica and her fellow robots into human society. Soon, Erica may be an essential part of our everyday life, as one of the new children of humanity.

Key credits

  • Director/Editor: Ilinca Calugareanu
  • Producer: Mara Adina
  • Executive producers for the Guardian: Charlie Phillips and Laurence Topham
  • This video is produced in collaboration with the Sundance Institute Short Documentary Fund supported by the John D and Catherine T MacArthur Foundation

You can also view the 14 min. film here.

Artworks generated by an AI system are to be sold at Christie’s auction house

KC Ifeanyi’s August 22, 2018 article for Fast Company may send a chill down some artists’ spines,

For the first time in its 252-year history, Christie’s will auction artwork generated by artificial intelligence.

Created by the French art collective Obvious, “Portrait of Edmond de Belamy” is part of a series of paintings of the fictional Belamy family that was created using a two-part algorithm. …

The portrait is estimated to sell anywhere between $7,000-$10,000, and Obvious says the proceeds will go toward furthering its algorithm.

… Famed collector Nicolas Laugero-Lasserre bought one of Obvious’s Belamy works in February, which could’ve been written off as a novel purchase where the story behind it is worth more than the piece itself. However, with validation from a storied auction house like Christie’s, AI art could shake the contemporary art scene.

“Edmond de Belamy” goes up for auction from October 23-25 [2018].

Jobs safe from automation? Are there any?

Michael Grothaus expresses more optimism about future job markets than I’m feeling in an August 30, 2018 article for Fast Company,

A 2017 McKinsey Global Institute study of 800 occupations across 46 countries found that by 2030, 800 million people will lose their jobs to automation. That’s one-fifth of the global workforce. A further one-third of the global workforce will need to retrain if they want to keep their current jobs as well. And looking at the effects of automation on American jobs alone, researchers from Oxford University found that “47 percent of U.S. workers have a high probability of seeing their jobs automated over the next 20 years.”

The good news is that while the above stats are rightly cause for concern, they also reveal that 53% of American jobs and four-fifths of global jobs are unlikely to be affected by advances in artificial intelligence and robotics. But just what are those fields? I spoke to three experts in artificial intelligence, robotics, and human productivity to get their automation-proof career advice.

Creatives

“Although I believe every single job can, and will, benefit from a level of AI or robotic influence, there are some roles that, in my view, will never be replaced by technology,” says Tom Pickersgill, …

Maintenance foreman

When running a production line, problems and bottlenecks are inevitable–and usually that’s a bad thing. But in this case, those unavoidable issues will save human jobs because their solutions will require human ingenuity, says Mark Williams, head of product at People First, …

Hairdressers

Mat Hunter, director of the Central Research Laboratory, a tech-focused co-working space and accelerator for tech startups, has seen startups trying to create all kinds of new technologies, which has given him insight into just what machines can and can’t pull off. It’s led him to believe that jobs like the humble hairdresser’s are safer from automation than, say, accountancy.

Therapists and social workers

Another automation-proof career is likely to be one involved in helping people heal the mind, says Pickersgill. “People visit therapists because there is a need for emotional support and guidance. This can only be provided through real human interaction–by someone who can empathize and understand, and who can offer advice based on shared experiences, rather than just data-driven logic.”

Teachers

Teachers are so often the unsung heroes of our society. They are overworked and underpaid–yet charged with one of the most important tasks anyone can have: nurturing the growth of young people. The good news for teachers is that their jobs won’t be going anywhere.

Healthcare workers

Doctors and nurses will also likely never see their jobs taken by automation, says Williams. While automation will no doubt enhance the treatments provided by doctors and nurses, the fact of the matter is that robots aren’t going to outdo healthcare workers’ ability to connect with patients and make them feel understood the way a human can.

Caretakers

While humans might be fine with robots flipping their burgers and artificial intelligence managing their finances, being comfortable with a robot nannying your children or looking after your elderly mother is a much bigger ask. And that’s to say nothing of the fact that even today’s most advanced robots don’t have the physical dexterity to perform the movements and actions carers do every day.

Grothaus does offer a proviso in his conclusion: certain types of jobs are relatively safe until developers learn to replicate qualities such as empathy in robots/AI.

It’s very confusing

There’s so much news about robots, artificial intelligence, androids, and cyborgs that it’s hard to keep up with it, let alone attempt to get a feeling for where all this might be headed. When you add the fact that the terms robots/artificial intelligence are often used interchangeably, and that the distinction between robots/androids/cyborgs is not always clear, any attempt to peer into the future becomes even more challenging.

At this point I content myself with tracking the situation and finding definitions so I can better understand what I’m tracking. Carmen Wong’s August 23, 2018 posting on the Signals blog published by Canada’s Centre for Commercialization of Regenerative Medicine (CCRM) offers some useful definitions in the context of an article about the use of artificial intelligence in the life sciences, particularly in Canada (Note: Links have been removed),

Artificial intelligence (AI). Machine learning. To most people, these are just buzzwords and synonymous. Whether or not we fully understand what both are, they are slowly integrating into our everyday lives. Virtual assistants such as Siri? AI is at work. The personalized ads you see when you are browsing on the web or movie recommendations provided on Netflix? Thank AI for that too.

AI is defined as machines having intelligence that imitates human behaviour such as learning, planning and problem solving. A process used to achieve AI is called machine learning, where a computer uses lots of data to “train” or “teach” itself, without human intervention, to accomplish a pre-determined task. Essentially, the computer keeps on modifying its algorithm based on the information provided to get to the desired goal.

Another term you may have heard of is deep learning. Deep learning is a particular type of machine learning where algorithms are set up like the structure and function of human brains. It is similar to a network of brain cells interconnecting with each other.

Toronto has seen its fair share of media-worthy AI activity. The Government of Canada, Government of Ontario, industry and multiple universities came together in March 2018 to launch the Vector Institute, with the goal of using AI to promote economic growth and improve the lives of Canadians. In May, Samsung opened its AI Centre in the MaRS Discovery District, joining a network of Samsung centres located in California, United Kingdom and Russia.

There has been a boom in AI companies over the past few years, which span a variety of industries. This year’s ranking of the top 100 most promising private AI companies covers 25 fields with cybersecurity, enterprise and robotics being the hot focus areas.
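Wong’s definitions, a computer “modifying its algorithm based on the information provided” with deep learning adding brain-like layers of interconnected units, can be made concrete with a toy example. The following is a minimal illustrative network (pure Python, my own construction, not any production system) that learns XOR, a task no single-layer model can solve, which is exactly why the extra layer matters.

```python
# A tiny 2-4-1 neural network learning XOR by gradient descent.
# Illustrates "machine learning" (weights adjusted from data) and
# "deep learning" (stacked layers of interconnected units).
import math
import random

random.seed(42)

# XOR is not linearly separable, so a hidden layer is required.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(4)]  # hidden layer
b1 = [0.0] * 4
w2 = [random.uniform(-1, 1) for _ in range(4)]                      # output layer
b2 = 0.0
lr = 0.5

def forward(x):
    hidden = [sigmoid(sum(w * xi for w, xi in zip(ws, x)) + b)
              for ws, b in zip(w1, b1)]
    out = sigmoid(sum(w * h for w, h in zip(w2, hidden)) + b2)
    return hidden, out

def loss():
    return sum((forward(x)[1] - y) ** 2 for x, y in data) / len(data)

start = loss()
for _ in range(5000):
    for x, y in data:
        hidden, out = forward(x)
        d_out = (out - y) * out * (1 - out)  # backprop through output sigmoid
        for j in range(4):
            d_h = d_out * w2[j] * hidden[j] * (1 - hidden[j])
            w2[j] -= lr * d_out * hidden[j]
            for i in range(2):
                w1[j][i] -= lr * d_h * x[i]
            b1[j] -= lr * d_h
        b2 -= lr * d_out

print(round(start, 3), round(loss(), 3))  # the error shrinks as the network "learns"
```

The training loop is the “keeps on modifying its algorithm” part of Wong’s definition; the hidden layer feeding the output layer is the “network of brain cells interconnecting” part.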

Wong goes on to explore AI deployment in the life sciences and concludes that human scientists and doctors will still be needed although she does note this in closing (Note: A link has been removed),

More importantly, empathy and support from a fellow human being could never be fully replaced by a machine (could it?), but maybe this will change in the future. We will just have to wait and see.

Artificial empathy is the term used in Lisa Morgan’s April 25, 2018 article for Information Week which unfortunately does not include any links to actual projects or researchers working on artificial empathy. Instead, the article is focused on how business interests and marketers would like to see it employed. FWIW, I have found a few references: (1) Artificial empathy Wikipedia essay (look for the references at the end of the essay for more) and (2) this open access article: Towards Artificial Empathy; How Can Artificial Empathy Follow the Developmental Pathway of Natural Empathy? by Minoru Asada.

Please let me know in the comments if you have any insights on the matter.

Next Horizons: Electronic Literature Organization (ELO) 2016 conference in Victoria, BC

The Electronic Literature Organization (ELO; based at the Massachusetts Institute of Technology [MIT]) is holding its annual conference themed Next Horizons (from an Oct. 12, 2015 post on the ELO blog) at the University of Victoria on Vancouver Island, British Columbia from June 10 – June 12, 2016.

You can get a better sense of what it’s all about by looking at the conference schedule/programme,

Friday, June 10, 2016

8:00 a.m.–5:00 p.m.: Registration
MacLaurin Lobby A100

8:00 a.m.-10:00 a.m: Breakfast
Sponsored by Bloomsbury Academic

10:00 a.m.-10:30: Welcome
MacLaurin David Lam Auditorium A 144
Speakers: Dene Grigar & Ray Siemens

10:30-12 noon: Featured Papers
MacLaurin David Lam Auditorium A 144
Chair: Alexandra Saum-Pascual, UC Berkeley

  • Stuart Moulthrop, “Intimate Mechanics: Play and Meaning in the Middle of Electronic Literature”
  • Anastasia Salter, “Code before Content? Brogrammer Culture in Games and Electronic Literature”

12 noon-1:45 p.m.: Gallery Opening & Lunch Reception
MacLaurin Lobby A 100
Kick off event in celebration of e-lit works
A complete list of artists featured in the Exhibit

1:45-3:00: Keynote Session
MacLaurin David Lam Auditorium A 144
“Prototyping Resistance: Wargame Narrative and Inclusive Feminist Discourse”

  • Jon Saklofske, Acadia University
  • Anastasia Salter, University of Central Florida
  • Liz Losh, College of William and Mary
  • Diane Jakacki, Bucknell University
  • Stephanie Boluk, UC Davis

3:00-3:15: Break

3:15-4:45: Concurrent Session 1

Session 1.1: Best Practices for Archiving E-Lit
MacLaurin D010
Roundtable
Chair: Dene Grigar, Washington State University Vancouver

  • Dene Grigar, Washington State University Vancouver
  • Stuart Moulthrop, University of Wisconsin Milwaukee
  • Matthew Kirschenbaum, University of Maryland College Park
  • Judy Malloy, Independent Artist

Session 1.2: Medium & Meaning
MacLaurin D110
Chair: Rui Torres, University Fernando Pessoa

  • “From eLit to pLit,” Heiko Zimmerman, University of Trier
  • “Generations of Meaning,” Hannah Ackermans, Utrecht University
  • “Co-Designing DUST,” Kari Kraus, University of Maryland College Park

Session 1.3: A Critical Look at E-Lit
MacLaurin D105
Chair: Philippe Brand, Lewis & Clark College

  • “Methods of Interrogation,” John Murray, University of California Santa Cruz
  • “Peering through the Window,” Philippe Brand, Lewis & Clark College
  • “(E-)re-writing Well-Known Works,” Agnieszka Przybyszewska, University of Lodz

Session 1.4: Literary Games
MacLaurin D109
Chair: Alex Mitchell, National University of Singapore

  • “Twine Games,” Alanna Bartolini, UC Santa Barbara
  • “Whose Game Is It Anyway?,” Ryan House, Washington State University Vancouver
  • “Micronarratives Dynamics in the Structure of an Open-World Action-Adventure Game,” Natalie Funk, Simon Fraser University

Session 1.5: eLit and the (Next) Future of Cinema
MacLaurin D107
Roundtable
Chair: Steven Wingate, South Dakota State University

  • Steve Wingate, South Dakota State University
  • Kate Armstrong, Emily Carr University
  • Samantha Gorman, USC

Session 1.6: Authors & Texts
MacLaurin D101
Chair: Robert Glick, Rochester Institute of Technology

  • “Generative Poems by Maria Mencia,” Angelica Huizar, Old Dominion University
  • “Inhabitation: Johanna Drucker: “no file is ever self-identical,” Joel Kateinikoff, University of Alberta
  • “The Great Monster: Ulises Carrión as E-Lit Theorist,” Élika Ortega, University of Kansas
  • “Pedagogic Strategies for Electronic Literature,” Mia Zamora, Kean University

3:15-4:45: Action Session Day 1
MacLaurin D111

  • Digital Preservation, by Nicholas Schiller, Washington State University Vancouver; Zach Coble, NYU
  • ELMCIP, Scott Rettberg and Álvaro Seiça, University of Bergen; Hannah Ackermans, Utrecht University
  • Wikipedia-A-Thon, Liz Losh, College of William and Mary

5:00-6:00: Reception and Poster Session
University of Victoria Faculty Club
For ELO, DHSI, & INKE Participants, featuring these artists and scholars from the ELO:

  • “Social Media for E-Lit Authors,” Michael Rabby, Washington State University Vancouver
  • “– O True Apothecary!, by Kyle Booten,” UC Berkeley, Center for New Media
  • “Life Experience through Digital Simulation Narratives,” David Núñez Ruiz, Neotipo
  • “Building Stories,” Kate Palermini, Washington State University Vancouver
  • “Help Wanted and Skills Offered,” by Deena Larsen, Independent Artist; Julianne Chatelain, U.S. Bureau of Reclamation
  • “Beyond Original E-Lit: Deconstructing Austen Cybertexts,” Meredith Dabek, Maynooth University
  • Arabic E-Lit. (AEL) Project, Riham Hosny, Rochester Institute of Technology/Minia University
  • “Poetic Machines,” Sidse Rubens LeFevre, University of Copenhagen
  • “Meta for Meta’s Sake,” Melinda White


7:30-11:00: Readings & Performances at Felicita’s
A complete list of artists featured in the event

Saturday, June 11, 2016


8:30-10:00: Lightning Round
MacLaurin David Lam Auditorium A 144
Chair: James O’Sullivan, University of Sheffield

  • “Different Tools but Similar Wits,” Guangxu Zhao, University of Ottawa
  • “Digital Aesthetics,” Bertrand Gervais, Université du Québec à Montréal
  • “Hatsune Miku,” Roman Kalinovski, Independent Scholar
  • “Meta for Meta’s Sake,” Melinda White, University of New Hampshire
  • “Narrative Texture,” Luciane Maria Fadel, Simon Fraser University
  • “Natural Language Generation,” by Stefan Muller Arisona
  • “Poetic Machines,” Sidse Rubens LeFevre, University of Copenhagen
  • “Really Really Long Works,” Aden Evens, Dartmouth University
  • “UnWrapping the E-Reader,” David Roh, University of Utah
  • “Social Media for E-Lit Artists,” Michael Rabby

10:00: Gallery exhibit opens
MacLaurin A100
A complete list of artists featured in the Exhibit

10:30-12 noon: Concurrent Session 2

Session 2.1: Literary Interventions
MacLaurin D101
Chair: Brian Ganter, Capilano College

  • “Glitching the Poem,” Aaron Angello, University of Colorado Boulder
  • “WALLPAPER,” Alice Bell, Sheffield Hallam University; Astrid Ensslin, University of Alberta
  • “Unprintable Books,” Kate Pullinger [emphasis mine], Bath Spa University

Session 2.2: Theoretical Underpinnings
MacLaurin D105
Chair: Mia Zamora, Kean University

  • “Transmediation,” Kedrick James, University of British Columbia; Ernesto Pena, University of British Columbia
  • “The Closed World, Databased Narrative, and Network Effect,” Mark Sample, Davidson College
  • “The Cyborg of the House,” Maria Goicoechea, Universidad Complutense de Madrid

Session 2.3: E-Lit in Time and Space
MacLaurin D107
Chair: Andrew Klobucar, New Jersey Institute of Technology

  • “Electronic Literary Artifacts,” John Barber, Washington State University Vancouver; Alcina Cortez, INET-MD, Instituto de Etnomusicologia, Música e Dança
  • “The Old in the Arms of the New,” Gary Barwin, Independent Scholar
  • “Space as a Meaningful Dimension,” Luciane Maria Fadel, Simon Fraser University

Session 2.4: Understanding Bots
MacLaurin D110
Roundtable
Chair: Leonardo Flores, University of Puerto Rico, Mayagüez

  • Allison Parrish, Fordham University
  • Matt Schneider, University of Toronto
  • Tobi Hahn, Paisley Games
  • Zach Whalen, University of Mary Washington

10:30-12 noon: Action Session Day 2
MacLaurin D111

  • Digital Preservation, by Nicholas Schiller, Washington State University Vancouver; Zach Coble, NYU
  • ELMCIP, Allison Parrish, Fordham University; Scott Rettberg, University of Bergen; David Nunez Ruiz, Neotipo; Hannah Ackermans, Utrecht University
  • Wikipedia-A-Thon, Liz Losh, College of William and Mary

12:15-1:15: Artists Talks & Lunch
David Lam Auditorium MacLaurin A144

  • “The Listeners,” by John Cayley
  • “The ChessBard and 3D Poetry Project as Translational Ecosystems,” Aaron Tucker, Ryerson University
  • “News Wheel,” Jody Zellen, Independent Artist
  • “x-o-x-o-x.com,” Erik Zepka, Independent Artist

1:30-3:00: Concurrent Session 3

Session 3.1: E-Lit Pedagogy in Global Setting
MacLaurin D111
Roundtable
Co-Chairs: Philippe Bootz, Université Paris 8; Riham Hosny, Rochester Institute of Technology/Minia University

  • Sandy Baldwin, Rochester Institute of Technology
  • Maria Goicoechea, Universidad Complutense de Madrid
  • Odile Farge, UNESCO Chair ITEN, Foundation MSH/University of Paris 8

Session 3.2: The Art of Computational Media
MacLaurin D109
Chair: Rui Torres, University Fernando Pessoa

  • “Creative GREP Works,” Kristopher Purzycki, University of Wisconsin Milwaukee
  • “Using Theme to Author Hypertext Fiction,” Alex Mitchell, National University at Singapore

Session 3.3: Present Future Past
MacLaurin D110
Chair: David Roh, University of Utah

  • “Exploring Potentiality,” Daniela Côrtes Maduro, Universität Bremen
  • “Programming the Kafkaesque Mechanism,” Kristof Anetta, Slovak Academy of Sciences
  • “Reappraising Word Processing,” Matthew Kirschenbaum, University of Maryland College Park

Session 3.4: Beyond Collaborative Horizons
MacLaurin D010
Panel
Chair: Jeremy Douglass, UC Santa Barbara

  • Jeremy Douglass, UC Santa Barbara
  • Mark Marino, USC
  • Jessica Pressman, San Diego State University

Session 3.5: E-Loops: Reshuffling Reading & Writing In Electronic Literature Works
MacLaurin D105
Panel
Chair: Gwen Le Cor, Université Paris 8

  • “The Plastic Space of E-loops and Loopholes: the Figural Dynamics of Reading,” Gwen Le Cor, Université Paris 8
  • “Beyond the Cybernetic Loop: Redrawing the Boundaries of E-Lit Translation,” Arnaud Regnauld, Université Paris 8
  • “E-Loops: The Possible and Variable Figure of a Contemporary Aesthetic,” Ariane Savoie, Université du Québec à Montréal and Université Catholique de Louvain
  • “Relocating the Digital,” Stéphane Vanderhaeghe, Université Paris 8

Session 3.6: Metaphorical Perspectives
MacLaurin D107
Chair: Alexandra Saum-Pascual, UC Berkeley

  • “Street Ghosts,” Ali Rachel Pearl, USC
  • “The (Wo)men’s Social Club,” Amber Strother, Washington State University Vancouver

Session 3.7: Embracing Bots
MacLaurin D101
Roundtable
Chair: Zach Whalen, University of Mary Washington

  • Leonardo Flores, University of Puerto Rico Mayagüez Campus
  • Chris Rodley, University of Sydney
  • Élika Ortega, University of Kansas
  • Katie Rose Pipkin, Carnegie Mellon

1:30-3:30: Workshops
MacLaurin D115

  • “Bots,” Zach Whalen, University of Mary Washington
  • “Twine”
  • “AR/VR,” John Murray, UC Santa Cruz
  • “Unity 3D,” Stefan Muller Arisona, University of Applied Sciences and Arts Northwestern Switzerland; Simon Schubiger, University of Applied Sciences and Arts Northwestern Switzerland
  • “Exploratory Programming,” Nick Montfort, MIT
  • “Scalar,” Hannah Ackermans, Utrecht University
  • “The Electronic Poet’s Workbench: Build a Generative Writing Practice,” Andrew Klobucar, New Jersey Institute of Technology; David Ayre, Programmer and Independent Artist

3:30-5:00: Keynote

Christine Wilks [emphasis mine], “Interactive Narrative and the Art of Steering Through Possible Worlds”
MacLaurin David Lam Auditorium A144

Wilks is a British digital writer, artist and developer of playable stories. Her digital fiction, Underbelly, won the New Media Writing Prize 2010 and the MaMSIE Digital Media Competition 2011. Her work is published in online journals and anthologies, including the Electronic Literature Collection, Volume 2 and the ELMCIP Anthology of European Electronic Literature, and has been presented at international festivals, exhibitions and conferences. She is currently doing a practice-based PhD in Digital Writing at Bath Spa University and is also Creative Director of the e-learning specialists Make It Happen.

5:15-6:45: Screenings at Cinecenta
A complete list of artists featured in the Screenings

7:00-9:00: Banquet (a dance follows)
University of Victoria Faculty Club

Sunday, June 12, 2016


8:30-10:00: Town Hall
MacLaurin David Lam Auditorium A144

10:00: Gallery exhibit opens
MacLaurin A100
A complete list of artists featured in the Exhibit

10:30-12 p.m.: Concurrent Session 4

Session 4.1: Narratives & Narrativity
MacLaurin D110
Chair: Kendrick James, University of British Columbia

  • “Narrativity in Virtual Reality,” Illya Szilak, Independent Scholar
  • “Simulation Studies,” David Ciccoricco, University of Otago
  • “Future Fiction Storytelling Machines,” Caitlin Fisher, York University

Session 4.2: Historical & Critical Perspectives
MacLaurin D101
Chair: Robert Glick, Rochester Institute of Technology

  • “The Evolution of E-Lit,” James O’Sullivan, University of Sheffield
  • “The Logic of Selection,” Matti Kangaskoski, University of Helsinki

Session 4.3: Emergent Media
MacLaurin D107
Chair: Alexandra Saum-Pascual, UC Berkeley

  • “Seasons II: A Case Study in Ambient Video, Generative Art, and Audiovisual Experience,” Jim Bizzocchi, Simon Fraser University; Arne Eigenfeldt, Simon Fraser University; Philippe Pasquier, Simon Fraser University; Miles Thorogood, Simon Fraser University
  • “Cinematic Turns,” Liz Losh, College of William and Mary
  • “Mario Mods and Ludic Seriality,” Shane Denson, Duke University

Session 4.4: The E-Literary Object
MacLaurin D109
Chair: Deena Larsen, Independent Artist

  • “How E-Literary Is My E-Literature?,” Leonardo Flores, University of Puerto Rico Mayagüez Campus
  • “Overcoming the Locative Interface Fallacy,” Lauren Burr, University of Waterloo
  • “Interactive Narratives on the Block,” Aynur Kadir, Simon Fraser University

Session 4.5: Next Narrative
MacLaurin D010
Panel
Chair: Marjorie Luesebrink

  • Marjorie Luesebrink, Independent Artist
  • Daniel Punday, Independent Artist
  • Will Luers, Washington State University Vancouver

10:30-12 p.m.: Action Session Day 3
MacLaurin D111

  • Digital Preservation, Nicholas Schiller, Washington State University Vancouver; Zach Coble, NYU
  • ELMCIP, Allison Parrish, Fordham University; Scott Rettberg, University of Bergen; David Nunez Ruiz, Neotipo; Hannah Ackermans, Utrecht University
  • Wikipedia-A-Thon, Liz Losh, College of William and Mary

12:15-1:30: Artists Talks & Lunch
David Lam Auditorium A144

  • “Just for the Cameras,” Flourish Klink, Independent Artist
  • “Lulu Sweet,” Deanne Achong and Faith Moosang, Independent Artists
  • “Drone Pilot,” Ian Hatcher, Independent Artist
  • “AVATAR/MOCAP,” Alan Sondheim, Independent Artist

1:30-3:00 : Concurrent Session 5

Session 5.1: Subversive Texts
MacLaurin D101
Chair: Michael Rabby, Washington State University Vancouver

  • “E-Lit Jazz,” Sandy Baldwin, Rochester Institute of Technology; Rui Torres, University Fernando Pessoa
  • “Pop Subversion in Electronic Literature,” Davin Heckman, Winona State University
  • “E-Lit in Arabic Universities,” Riham Hosny, Rochester Institute of Technology/Minia University

Session 5.2: Experiments in #NetProv & Participatory Narratives
MacLaurin D109
Roundtable
Chair: Mia Zamora, Kean University

  • Mark Marino, USC
  • Rob Wittig, Meanwhile… Netprov Studio
  • Mia Zamora, Kean University

Session 5.3: Emergent Media
MacLaurin D105
Chair: Andrew Klobucar, New Jersey Institute of Technology

  • “Migrating Electronic Literature to the Kinect System,” Monika Gorska-Olesinka, University of Opole
  • “Mobile and Tactile Screens as Venues for the Performing Arts?,” Serge Bouchardon, Sorbonne Universités, Université de Technologie de Compiègne
  • “The Unquantified Self: Imagining Ethopoiesis in the Cognitive Era,” Andrew Klobucar, New Jersey Institute of Technology

Session 5.4: E-Lit Labs
MacLaurin D010
Chair: Jim Brown, Rutgers University Camden

  • Jim Brown, Rutgers University Camden
  • Robert Emmons, Rutgers University Camden
  • Brian Greenspan, Carleton University
  • Stephanie Boluk, UC Davis
  • Patrick LeMieux, UC Davis

Session 5.5: Transmedia Publishing
MacLaurin D107
Roundtable
Chair: Philippe Bootz, Université Paris 8

  • Philippe Bootz, Université Paris 8
  • Lucile Haute, Université Paris 8
  • Nolwenn Trehondart, Université Paris 8
  • Steve Wingate, South Dakota State University

Session 5.6: Feminist Horizons
MacLaurin D110
Panel
Moderator: Anastasia Salter, University of Central Florida

  • Kathi Inman Berens, Portland State University
  • Jessica Pressman, San Diego State University
  • Caitlin Fisher, York University

3:30-5:00: Closing Session
David Lam Auditorium MacLaurin A144
Chairs: John Cayley, Brown University; Dene Grigar, President, ELO

  • “Platforms and Genres of Electronic Literature,” Scott Rettberg, University of Bergen
  • “Emergent Story Structures,” David Meurer, York University
  • “We Must Go Deeper,” Samantha Gorman, USC; Milan Koerner-Safrata, Recon Instruments

I’ve bolded two names. Christine Wilks, one of the two conference keynote speakers, completed her MA in the same cohort as I did in De Montfort University’s Creative Writing and New Media master’s program. Congratulations on being a keynote speaker, Christine! The other name belongs to Kate Pullinger, who was one of two readers for that same MA program. Since those days, Pullinger has won a Governor General’s Award for her novel, “The Mistress of Nothing,” and become a professor at Bath Spa University (UK).

Registration appears to be open.