The latest newsletter from the Metacreation Lab for Creative AI (at Simon Fraser University [SFU]) features a ‘first’. From the June 2024 Metacreation Lab newsletter (received via email),
“Longing + Forgetting” at the 2024 Currents New Media Festival in Santa Fe
We are thrilled to announce that Longing + Forgetting has been invited to the esteemed Currents New Media Festival in Santa Fe, New Mexico. Longing + Forgetting is a generative audio-video installation that explores the relationship between humans and machines. This media art project, created by Canadian artists Philippe Pasquier and Thecla Schiphorst alongside Australian artist Matt Gingold, has garnered international acclaim since its inception. Initially presented in Canada in 2013, the piece has journeyed through multiple international festivals, captivating audiences with its exploration of human expression through movement.
Philippe Pasquier will be on-site for the festival, overseeing the site-specific installation at El Museo Cultural de Santa Fe. This marks the North American premiere of the redeveloped version of “Longing + Forgetting,” featuring a new soundtrack by Pasquier based solely on the close-mic recording of dancers.
Currents New Media Festival runs June 14–23, 2024 and brings together the work of established and emerging new media artists from around the world across various disciplines, with an expected 9,000 visitors during the festival’s run.
Discover “Longing + Forgetting” at Bunjil Place in Melbourne
We are excited to announce that “Longing + Forgetting” is being featured at Bunjil Place in Melbourne, Australia. As part of the Art After Dark Program curated by Angela Barnett, this outdoor screening will run from June 1 to June 28, illuminating the night from 5 pm to 7 pm.
Presenting “Unveiling New Artistic Dimensions in Calligraphic Arabic Script with GANs” at SIGGRAPH 2024
We are pleased to share that our paper, “Unveiling New Artistic Dimensions in Calligraphic Arabic Script with Generative Adversarial Networks,” will be presented at SIGGRAPH 2024, the premier conference on computer graphics and interactive techniques. The event will take place from July 28 to August 1, 2024, in Denver, Colorado.
This paper delves into the artistic potential of Generative Adversarial Networks (GANs) to create and innovate within the realm of calligraphic Arabic script, particularly the nastaliq style. By developing two custom datasets and leveraging the StyleGAN2-ada architecture, we have generated high-quality, stylistically coherent calligraphic samples. Our work bridges the gap between traditional calligraphy and modern technology and offers a new mode of creative expression for this artform.
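For readers curious what “leveraging the StyleGAN2-ada architecture” can look like in practice, here is a minimal sketch of sampling from a trained generator, following the conventions of NVIDIA’s open-source stylegan2-ada-pytorch code. To be clear, this is not the authors’ code: the unconditional setup, checkpoint name, and parameter values are my assumptions, purely for illustration.

```python
# Hedged sketch: sampling images from a trained StyleGAN2-ADA generator.
# Assumes NVIDIA's stylegan2-ada-pytorch repo is on the Python path (its
# dnnlib/torch_utils modules are needed to unpickle the network file).
import pickle
import torch

def load_generator(pkl_path, device="cpu"):
    # network pickles from that codebase contain a dict with a "G_ema" entry
    with open(pkl_path, "rb") as f:
        return pickle.load(f)["G_ema"].to(device).eval()

def sample(G, n=4, truncation_psi=0.7, seed=0, device="cpu"):
    torch.manual_seed(seed)
    z = torch.randn(n, G.z_dim, device=device)          # latent codes
    c = None                                            # no class labels (unconditional model assumed)
    with torch.no_grad():
        imgs = G(z, c, truncation_psi=truncation_psi)   # NCHW tensor, roughly in [-1, 1]
    return (imgs.clamp(-1, 1) + 1) / 2                  # rescale to [0, 1] for display/saving

# hypothetical usage:
# G = load_generator("nastaliq-stylegan2-ada.pkl")
# calligraphy_samples = sample(G, n=8)
```

Lower truncation_psi values trade away diversity for fidelity, which matters when stylistic coherence is the goal, as it is in the paper’s calligraphic samples.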
For those unfamiliar with the acronym, SIGGRAPH stands for Special Interest Group on Computer Graphics and Interactive Techniques. SIGGRAPH is huge and it is a special interest group (SIG) of the ACM (Association for Computing Machinery).
If memory serves, this is the first time I’ve seen the Metacreation Lab make a request for volunteers. From the June 2024 Metacreation Lab newsletter,
Are you interested in music-making and AI technology?
The Metacreation Lab for Creative AI at Simon Fraser University (SFU), is conducting a research study in partnership with Steinberg Media Technologies GmbH. We are testing and evaluating MMM-Cubase v2, a creative AI system for assisting composing music. The system is based on our best music transformer, the multitrack music machine (MMM), which can generate, re-generate or complete new musical content based on existing content.
There is no prerequisite for this study beyond a basic knowledge of DAW and MIDI. So everyone is welcome even if you do not consider yourself a composer, but are interested in trying the system. The entire study should take you around 3 hours, and you must be 19+ years old. Basic interest and familiarity with digital music composition will help, but no experience with making music is required.
We seek to better evaluate the potential for adoption of such systems for novice/beginner as well as for seasoned composers. More specifically, you will be asked to install and use the system to compose a short 4-track musical composition and to fill out a survey questionnaire at the end.
Participation in this study is rewarded with one free Steinberg software license of your choice among Cubase Element, Dorico Element or Wavelab Element.
For any question or further inquiry, please contact researcher Renaud Bougueng Tchemeube directly at rbouguen@sfu.ca.
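If you are wondering what “generate, re-generate or complete new musical content based on existing content” means under the hood, here is a toy sketch of the general idea: music is encoded as a sequence of tokens and a transformer predicts what comes next. This is not the MMM or MMM-Cubase code (I haven’t seen it); the token vocabulary, tiny model, and greedy decoding below are stand-ins so the example stays self-contained.

```python
# Toy illustration of transformer-based continuation of existing musical material.
# Not the Metacreation Lab's MMM code; the token scheme and model are stand-ins.
import torch
import torch.nn as nn

VOCAB = ["<bar>", "<track=1>", "<track=2>", "NOTE_60", "NOTE_62", "NOTE_64", "REST"]
stoi = {tok: i for i, tok in enumerate(VOCAB)}

class TinyMusicLM(nn.Module):
    def __init__(self, vocab_size, d_model=64, n_layers=2, n_heads=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)   # positional encoding omitted for brevity
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, tokens):
        causal_mask = nn.Transformer.generate_square_subsequent_mask(tokens.size(1))
        x = self.encoder(self.embed(tokens), mask=causal_mask)
        return self.head(x)                              # next-token logits

def continue_sequence(model, prompt, n_new=4):
    """Greedy continuation: feed the existing content, append the most likely next tokens."""
    seq = prompt.clone()
    for _ in range(n_new):
        next_tok = model(seq)[:, -1].argmax(dim=-1, keepdim=True)
        seq = torch.cat([seq, next_tok], dim=1)
    return seq

model = TinyMusicLM(len(VOCAB))   # untrained here, so output is arbitrary; shown for structure only
prompt = torch.tensor([[stoi["<bar>"], stoi["<track=1>"], stoi["NOTE_60"], stoi["NOTE_62"]]])
print(continue_sequence(model, prompt))
```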
Just when I thought I was almost caught up, I found this. The study I will be highlighting is from August 2023 but there are interesting developments all the way into October 2023 and beyond. First, the latest in AI (artificial intelligence) devices from an October 5, 2023 article by Lucas Arender for the Daily Hive, which describes the devices as AI wearables (you could also call them wearable technology), Note: Links have been removed,
Rewind.ai launched Pendant, a necklace that records your conversations and transfers them to your smartphone, creating an audio database (of sorts) for your life.
Meta unveiled a pair of Ray-Ban smart glasses that include an AI chatbot that users can communicate with (which might make you look like you’re talking to yourself).
Sam Altman-backed startup Humane teased its new AI pin at Paris Fashion Week— a screenless lapel device that projects a smartphone-like interface onto users’ hands.
Microsoft filed a patent for an AI backpack that features GPS, voice command, and cameras that could… help us walk in the right direction?
The second item in the list ‘Ray-Ban Meta Smart Glasses’ is further described in an October 17, 2023 article by Sarah Bartnicka for the Daily Hive, Note: A link has been removed,
It’s a glorious day for tech dads everywhere: Meta and Ray-Ban smart glasses are officially for sale in Canada.
Driving the news: Meta has become the latest billion-dollar company to officially enter the smart glasses market with the second iteration [emphasis mine] of its design with Ray-Bans, now including a built-in Meta AI assistant, hands-free live streaming features, and a personal audio system.
…
This time around, the technology is better, and both Meta and Snap are pitching their smart glasses as a tool for creators to stay connected with their audiences rather than just a sleek piece of hardware that can blend your digital and physical realities [augmented or extended reality?].
…
Yes, but: As smart glasses creep back into the limelight, people are wary about wearing cameras on their faces. Concerns about always-on cameras and microphones that allow users to record their surroundings without the consent of others will likely stick around. [emphasis mine]
So, are these AI or smart or augmented reality (AR) glasses? In my October 22, 2021 post, I explored a number of realities in the context of the metaverse. Yes, it gets confusing. At any rate, I found these definitions,
Happily, I have found a good summarized description of VR/AR/MR/XR in a March 20, 2018 essay by North of 41 on medium.com,
“Summary: VR is immersing people into a completely virtual environment; AR is creating an overlay of virtual content, but can’t interact with the environment; MR is a mixed of virtual reality and the reality, it creates virtual objects that can interact with the actual environment. XR brings all three Reality (AR, VR, MR) together under one term.”
If you have the interest and approximately five spare minutes, read the entire March 20, 2018 essay, which has embedded images illustrating the various realities.
…
This may change over time but, for now, the answer to the question “AI or smart or augmented reality (AR) glasses?” is any or all three.
Someone wearing augmented reality (AR) or “smart” glasses could be Googling your face, turning you into a cat or recording your conversation – and that creates a major power imbalance, said Cornell researchers.
Currently, most work on AR glasses focuses primarily on the experience of the wearer. Researchers from the Cornell Ann S. Bowers College of Computing and Information Science and Brown University teamed up to explore how this technology affects interactions between the wearer and another person. Their explorations showed that, while the device generally made the wearer less anxious, things weren’t so rosy on the other side of the glasses.
Jenny Fu, a doctoral student in the field of information science, presented the findings in a new study, “Negotiating Dyadic Interactions through the Lens of Augmented Reality Glasses,” at the 2023 ACM Designing Interactive Systems Conference in July.
AR glasses superimpose virtual objects and text over the field of view to create a mixed-reality world for the user. Some designs are big and bulky, but as AR technology advances, smart glasses are becoming indistinguishable from regular glasses, raising concerns that a wearer could be secretly recording someone or even generating deepfakes with their likeness.
For the new study, Fu and co-author Malte Jung, associate professor of information science and the Nancy H. ’62 and Philip M. ’62 Young Sesquicentennial Faculty Fellow, worked with Ji Won Chung, a doctoral student, and Jeff Huang, associate professor of computer science, both at Brown, and Zachary Deocadiz-Smith, an independent extended reality designer.
They observed five pairs of individuals – a wearer and a non-wearer – as each pair discussed a desert survival activity. The wearer received Spectacles, an AR glasses prototype on loan from Snap Inc., the company behind Snapchat. The Spectacles look like avant-garde sunglasses and, for the study, came equipped with a video camera and five custom filters that transformed the non-wearer into a deer, cat, bear, clown or pig-bunny.
Following the activity, the pairs engaged in a participatory design session where they discussed how AR glasses could be improved, both for the wearer and the non-wearer. The participants were also interviewed and asked to reflect on their experiences.
According to the wearers, the fun filters reduced their anxiety and put them at ease during the exercise. The non-wearers, however, reported feeling disempowered because they didn’t know what was happening on the other side of the lenses. They were also upset that the filters robbed them of control over their own appearance. The possibility that the wearer could be secretly recording them without consent – especially when they didn’t know what they looked like – also put the non-wearers at a disadvantage.
The non-wearers weren’t completely powerless, however. A few demanded to know what the wearer was seeing, and moved their faces or bodies to evade the filters – giving them some control in negotiating their presence in the invisible mixed-reality world. “I think that’s the biggest takeaway I have from this study: I’m more powerful than I thought I was,” Fu said.
Another issue is that, like many AR glasses, Spectacles have darkened lenses so the wearer can see the projected virtual images. This lack of transparency also degraded the quality of the social interaction, the researchers reported.
“There is no direct eye contact, which makes people very confused, because they don’t know where the person is looking,” Fu said. “That makes their experiences of this conversation less pleasant, because the glasses blocked out all these nonverbal interactions.”
To create more positive experiences for people on both sides of the lenses, the study participants proposed that smart glasses designers add a projection display and a recording indicator light, so people nearby will know what the wearer is seeing and recording.
Fu also suggests designers test out their glasses in a social environment and hold a participatory design process like the one in their study. Additionally, they should consider these video interactions as a data source, she said.
That way, non-wearers can have a voice in the creation of the impending mixed-reality world.
Rina Diane Caballar’s September 25, 2023 article for IEEE (Institute of Electrical and Electronics Engineers) Spectrum magazine provides a few more insights about the research, Note: Links have been removed,
…
“This AR filter interaction is likely to happen in the future with the commercial emergence of AR glasses,” says Jenny Fu, a doctoral student at Cornell University’s Bowers College of Computing and Information Science and one of the two lead authors of the study. “How will that look like, and what are the social and emotional consequences of interacting and communicating through AR glasses?”
…
“When we think about design in HCI [human-computer interface], there is often a tendency to focus on the primary user and design just for them,” Jung says. “Because these technologies are so deeply embedded in social interactions and are used with others and around others, we often forget these ‘onlookers’ and we’re not designing with them in mind.”
…
Moreover, involving nonusers is especially key in developing more equitable tech products and creating more inclusive experiences. “That’s one of the points why previous AR iterations may not have worked—they designed it for the individual and not for the people surrounding them,” says Chung. She adds that a mindset shift is needed to actively make tech that doesn’t exclude people, which could lead to social systems that promote engagement and foster a sense of belonging for everyone.
…
Caballar’s September 25, 2023 article also appears in the January 2024 print version of the IEEE Spectrum with the title “AR Glasses Upset the Social Dynamic.”
This May 10, 2022 Association for Computing Machinery (ACM) announcement (received via email) has an eye-catching head,
Should Smart Cities Adopt Facial Recognition, Remote Monitoring Software+Social Media to Police [verb] Info?
The Association for Computing Machinery, the largest and most prestigious computer science society worldwide (100,000 members) has released a report, ACM TechBrief: Smart Cities, for smart city planners to address 1) cybersecurity; 2) privacy protections; 3) fairness and transparency; and 4) sustainability when planning and designing systems, including climate impact.
The Association for Computing Machinery’s global Technology Policy Council (ACM TPC) just released, “ACM TechBrief: Smart Cities,” which highlights the challenges involved in deploying information and communication technology to create smart cities and calls for policy leaders planning such projects to do so without compromising security, privacy, fairness and sustainability. The TechBrief includes a primer on smart cities, key statistics about the growth and use of these technologies, and a short list of important policy implications.
“Smart cities” are municipalities that use a network of physical devices and computer technologies to make the delivery of public services more efficient and/or more environmentally friendly. Examples of smart city applications include using sensors to turn off streetlights when no one is present, monitoring traffic patterns to reduce roadway congestion and air pollution, or keeping track of home-bound medical patients in order to dispatch emergency responders when needed. Smart cities are an outgrowth of the Internet of Things (IoT), the rapidly growing infrastructure of literally billions of physical devices embedded with sensors that are connected to computers and the Internet.
The deployment of smart city technology is growing across the world, and these technologies offer significant benefits. For example, the TechBrief notes that “investing in smart cities could contribute significantly to achieving greenhouse gas emissions reduction targets,” and that “smart cities use digital innovation to make urban service delivery more efficient.”
Because of the meteoric growth and clear benefits of smart city technologies, the TechBrief notes that now is an urgent time to address some of the important public policy concerns that smart city technologies raise. The TechBrief lists four key policy implications that government officials, as well as the private companies that develop these technologies, should consider.
These include:
Cybersecurity risks must be considered at every stage of every smart city technology’s life cycle.
Effective privacy protection mechanisms must be an essential component of any smart city technology deployed.
Such mechanisms should be transparently fair to all city users, not just residents.
The climate impact of smart city infrastructures must be fully understood as they are being designed and regularly assessed after they are deployed
“Smart cities are fast becoming a reality around the world,” explains Chris Hankin, a Professor at Imperial College London and lead author of the ACM TechBrief on Smart Cities. “By 2025, 26% of all internet-connected devices will be used in a smart city application. As technologists, we feel we have a responsibility to raise important questions to ensure that these technologies best serve the public interest. For example, many people are unaware that some smart city technologies involve the collection of personally identifiable data. We developed this TechBrief to familiarize the public and lawmakers with this topic and present some key issues for consideration. Our overarching goal is to guide enlightened public policy in this area.”
“Our new TechBrief series builds on earlier and ongoing work by ACM’s technology policy committees,” added James Hendler, Professor at Rensselaer Polytechnic Institute and Chair of the ACM Technology Policy Council. “Because many smart city applications involve algorithms making decisions which impact people directly, this TechBrief calls for methods to ensure fairness and transparency in how these systems are developed. This reinforces an earlier statement we issued that outlined seven principles for algorithmic transparency and accountability. We also note that smart city infrastructures are especially vulnerable to malicious attacks.”
This TechBrief is the third in a series of short technical bulletins by ACM TPC that present scientifically grounded perspectives on the impact of specific developments or applications of technology. Designed to complement ACM’s activities in the policy arena, TechBriefs aim to inform policymakers, the public, and others about the nature and implications of information technologies. The first ACM TechBrief focused on climate change, while the second addressed facial recognition. Topics under consideration for future issues include quantum computing, election security, and encryption.
About the ACM Technology Policy Council
ACM’s global Technology Policy Council sets the agenda for ACM’s global policy activities and serves as the central convening point for ACM’s interactions with government organizations, the computing community, and the public in all matters of public policy related to computing and information technology. The Council’s members are drawn from ACM’s global membership. It coordinates the activities of ACM’s regional technology policy groups and sets the agenda for global initiatives to address evolving technology policy issues.
About ACM
ACM, the Association for Computing Machinery, is the world’s largest educational and scientific computing society, uniting educators, researchers and professionals to inspire dialogue, share resources and address the field’s challenges. ACM strengthens the computing profession’s collective voice through strong leadership, promotion of the highest standards, and recognition of technical excellence. ACM supports the professional growth of its members by providing opportunities for life-long learning, career development, and professional networking.
This is indeed a brief. I recommend reading it as it provides a very good overview of the topic of ‘smart cities’ and raises a question or two. For example, there’s this passage from the April 2022 Issue 3 Technical Brief on p. 2,
… policy makers should target broad and fair access and application of AI and, in general, ICT [information and communication technologies]. This can be achieved through transparent planning and decision-making processes for smart city infrastructure and application developments, such as open hearings, focus groups, and advisory panels. The goal must be to minimize potential harm while maximizing the benefits that algorithmic decision-making [emphasis mine] can bring
Is this algorithmic decision-making under human supervision? It doesn’t seem to be specified in the brief itself. It’s possible the answer lies elsewhere. After all, this is the third in the series.
I thought it best to break this up a bit. There are a couple of ‘objects’ still to be discussed but this is mostly the commentary part of this letter to you. (Here’s a link for anyone who stumbled here but missed Part 1.)
Ethics, the natural world, social justice, eeek, and AI
Dorothy Woodend, in her March 10, 2022 review for The Tyee, suggests some ethical issues in her critique of the ‘bee/AI collaboration’ and she’s not the only one with concerns. UNESCO (United Nations Educational, Scientific and Cultural Organization) has produced global recommendations for ethical AI (see my March 18, 2022 posting). More recently, there’s “Racist and sexist robots have flawed AI,” a June 23, 2022 posting, where researchers prepared a conference presentation and paper about deeply flawed AI still being used in robots.
Ultimately, the focus is always on humans and Woodend has extended the ethical AI conversation to include insects and the natural world. In short, something less human-centric.
My friend, this reference to the de Young exhibit may seem off topic but I promise it isn’t in more ways than one. The de Young Museum in San Francisco also held an AI and art show called “Uncanny Valley: Being Human in the Age of AI” (February 22, 2020 – June 27, 2021). From the exhibitions page,
In today’s AI-driven world, increasingly organized and shaped by algorithms that track, collect, and evaluate our data, the question of what it means to be human [emphasis mine] has shifted. Uncanny Valley is the first major exhibition to unpack this question through a lens of contemporary art and propose new ways of thinking about intelligence, nature, and artifice. [emphasis mine]
As you can see, it hinted (perhaps?) at an attempt to see beyond human-centric AI. (BTW, I featured this ‘Uncanny Valley’ show in my February 25, 2020 posting where I mentioned Stephanie Dinkins [featured below] and other artists.)
Social justice
While the VAG show doesn’t see much past humans and AI, it does touch on social justice. In particular there’s Pod 15 featuring the Algorithmic Justice League (AJL). The group “combine[s] art and research to illuminate the social implications and harms of AI” as per their website’s homepage.
In Pod 9, Stephanie Dinkins’ video work with a robot (Bina48), which was also part of the de Young Museum ‘Uncanny Valley’ show, addresses some of the same issues.
From the de Young Museum’s Stephanie Dinkins “Conversations with Bina48” April 23, 2020 article by Janna Keegan (Dinkins submitted the same work you see at the VAG show), Note: Links have been removed,
Transdisciplinary artist and educator Stephanie Dinkins is concerned with fostering AI literacy. The central thesis of her social practice is that AI, the internet, and other data-based technologies disproportionately impact people of color, LGBTQ+ people, women, and disabled and economically disadvantaged communities—groups rarely given a voice in tech’s creation. Dinkins strives to forge a more equitable techno-future by generating AI that includes the voices of multiple constituencies …
The artist’s ongoing Conversations with Bina48 takes the form of a series of interactions with the social robot Bina48 (Breakthrough Intelligence via Neural Architecture, 48 exaflops per second). The machine is the brainchild of Martine Rothblatt, an entrepreneur in the field of biopharmaceuticals who, with her wife, Bina, cofounded the Terasem Movement, an organization that seeks to extend human life through cybernetic means. In 2007 Martine commissioned Hanson Robotics to create a robot whose appearance and consciousness simulate Bina’s. The robot was released in 2010, and Dinkins began her work with it in 2014.
…
Part psychoanalytical discourse, part Turing test, Conversations with Bina48 also participates in a larger dialogue regarding bias and representation in technology. Although Bina Rothblatt is a Black woman, Bina48 was not programmed with an understanding of its Black female identity or with knowledge of Black history. Dinkins’s work situates this omission amid the larger tech industry’s lack of diversity, drawing attention to the problems that arise when a roughly homogenous population creates technologies deployed globally. When this occurs, writes art critic Tess Thackara, “the unconscious biases of white developers proliferate on the internet, mapping our social structures and behaviors onto code and repeating imbalances and injustices that exist in the real world.” One of the most appalling and public of these instances occurred when a Google Photos image-recognition algorithm mislabeled the faces of Black people as “gorillas.”
…
Eeek
As you go through the ‘imitation game’, you will find a pod with a screen showing your movements through the rooms in real time. The installation is called “Creepers” (2021-22). The student team from Vancouver’s Centre for Digital Media (CDM) describes their project this way, from the CDM’s AI-driven Installation Piece for the Vancouver Art Gallery webpage,
Project Description
Kaleidoscope [team name] is designing an installation piece that harnesses AI to collect and visualize exhibit visitor behaviours, and interactions with art, in an impactful and thought-provoking way.
There’s no warning that you’re being tracked and you can see they’ve used facial recognition software to track your movements through the show. It’s claimed on the pod’s signage that they are deleting the data once you’ve left.
‘Creepers’ is an interesting approach to the ethics of AI. The name suggests that even the student designers were aware it was problematic.
In recovery from an existential crisis (meditations)
There’s something greatly ambitious about “The Imitation Game: Visual Culture in the Age of Artificial Intelligence” and walking up the VAG’s grand staircase affirms that ambition. Bravo to the two curators, Grenville and Entis, for an exhibition that presents a survey (or overview) of artificial intelligence and its use in and impact on creative visual culture.
I’ve already enthused over the history (specifically Turing, Lovelace, Ovid), admitted to being mesmerized by Scott Eaton’s sculpture/AI videos, and confessed to a fascination (and mild repulsion) regarding Oxman’s honeycombs.
It’s hard to remember all of the ‘objects’ as the curators have offered a jumble of work, almost all of it on screens. Already noted, there’s Norbert Wiener’s The Moth (1949) and there are also a number of other computer-based artworks from the 1960s and 1970s. Plus, you’ll find works utilizing a GAN (generative adversarial network), a type of AI system that is explained in the exhibit.
It’s worth going more than once to the show as there is so much to experience.
Why did they do that?
Dear friend, I’ve already commented on the poor flow through the show. It’s hard to tell if the curators intended the experience to be disorienting, but it verges on chaos, especially when the exhibition is crowded.
I’ve seen Grenville’s shows before. In particular there was “MashUp: The Birth of Modern Culture, a massive survey documenting the emergence of a mode of creativity that materialized in the late 1800s and has grown to become the dominant model of cultural production in the 21st century” and there was “KRAZY! The Delirious World of Anime + Manga + Video Games + Art.” As you can see from the description, he pulls together disparate works and ideas into a show for you to ‘make sense’ of them.
One of the differences between those shows and “The Imitation Game: …” is that most of us have some familiarity, whether we like it or not, with modern art/culture and anime/manga/etc. and can try to ‘make sense’ of it.
By contrast, artificial intelligence (which even experts have difficulty defining) occupies an entirely different set of categories, all of them associated with science/technology. This makes for a different kind of show so the curators cannot rely on the audience’s understanding of basics. It’s effectively an art/sci or art/tech show and, I believe, the first of its kind at the Vancouver Art Gallery. Unfortunately, the curators don’t seem to have changed their approach to accommodate that difference.
AI is also at the centre of a current panic over job loss, loss of personal agency, automated racism and sexism, etc., which makes the experience of viewing the show a little tense. In this context, their decision to commission and use ‘Creepers’ seems odd.
Where were Ai-Da and DALL-E 2 and the others?
Oh friend, I was hoping for a robot. Those Roomba paintbots didn’t do much for me. All they did was lie there on the floor.
To be blunt, I wanted some fun and perhaps a bit of wonder and maybe a little vitality. I wasn’t necessarily expecting Ai-Da, an artistic robot, but something three-dimensional and fun in this very flat, screen-oriented show would have been nice.
Ai-Da was first featured here in a December 17, 2021 posting about her performance of poetry she had written in honour of the 700th anniversary of poet Dante Alighieri’s death.
Named in honour of Ada Lovelace, Ai-Da visited the 2022 Venice Biennale as Leah Henrickson and Simone Natale describe in their May 12, 2022 article for Fast Company (Note: Links have been removed),
Ai-Da sits behind a desk, paintbrush in hand. She looks up at the person posing for her, and then back down as she dabs another blob of paint onto the canvas. A lifelike portrait is taking shape. If you didn’t know a robot produced it, this portrait could pass as the work of a human artist.
Ai-Da is touted as the “first robot to paint like an artist,” and an exhibition of her work, called Leaping into the Metaverse, opened at the Venice Biennale.
Ai-Da produces portraits of sitting subjects using a robotic hand attached to her lifelike feminine figure. She’s also able to talk, giving detailed answers to questions about her artistic process and attitudes toward technology. She even gave a TEDx talk about “The Intersection of Art and AI” in Oxford a few years ago. While the words she speaks are programmed, Ai-Da’s creators have also been experimenting with having her write and perform her own poetry.
DALL-E 2 is a new neural network [AI] algorithm that creates a picture from a short phrase or sentence that you provide. The program, which was announced by the artificial intelligence research laboratory OpenAI in April 2022, hasn’t been released to the public. But a small and growing number of people – myself included – have been given access to experiment with it.
As a researcher studying the nexus of technology and art, I was keen to see how well the program worked. After hours of experimentation, it’s clear that DALL-E – while not without shortcomings – is leaps and bounds ahead of existing image generation technology. It raises immediate questions about how these technologies will change how art is made and consumed. It also raises questions about what it means to be creative when DALL-E 2 seems to automate so much of the creative process itself.
…
A July 4, 2022 article “DALL-E, Make Me Another Picasso, Please” by Laura Lane for The New Yorker has a rebuttal to Ada Lovelace’s contention that creativity is uniquely human (Note: A link has been removed),
…
“There was this belief that creativity is this deeply special, only-human thing,” Sam Altman, OpenAI’s C.E.O., explained the other day. Maybe not so true anymore, he said. Altman, who wore a gray sweater and had tousled brown hair, was videoconferencing from the company’s headquarters, in San Francisco. DALL-E is still in a testing phase. So far, OpenAI has granted access to a select group of people—researchers, artists, developers—who have used it to produce a wide array of images: photorealistic animals, bizarre mashups, punny collages. Asked by a user to generate “a plate of various alien fruits from another planet photograph,” DALL-E returned something kind of like rambutans. “The rest of mona lisa” is, according to DALL-E, mostly just one big cliff. Altman described DALL-E as “an extension of your own creativity.”
AI artists first hit my radar in August 2018 when Christie’s Auction House advertised an art auction of a ‘painting’ by an algorithm (artificial intelligence). There’s more in my August 31, 2018 posting but, briefly, a French art collective, Obvious, submitted a painting, “Portrait of Edmond de Belamy,” created by an artificial intelligence agent, to be sold for an estimated $7,000 – $10,000. They weren’t even close. According to Ian Bogost’s March 6, 2019 article for The Atlantic, the painting sold for $432,500 in October 2018.
…
That posting also included the AI artist AICAN. Both artist-AI agents (Obvious and AICAN) are based on GANs (generative adversarial networks) for learning and eventual output. Both work independently or with human collaborators on artworks that are available for purchase.
As might be expected, not everyone is excited about AI and visual art. Sonja Drimmer, Professor of Medieval Art at the University of Massachusetts Amherst, provides another perspective on AI, visual art, and, her specialty, art history in her November 1, 2021 essay for The Conversation (Note: Links have been removed),
Over the past year alone, I’ve come across articles highlighting how artificial intelligence recovered a “secret” painting of a “lost lover” of Italian painter Modigliani, “brought to life” a “hidden Picasso nude”, “resurrected” Austrian painter Gustav Klimt’s destroyed works and “restored” portions of Rembrandt’s 1642 painting “The Night Watch.” The list goes on.
As an art historian, I’ve become increasingly concerned about the coverage and circulation of these projects.
They have not, in actuality, revealed one secret or solved a single mystery.
What they have done is generate feel-good stories about AI.
…
Take the reports about the Modigliani and Picasso paintings.
These were projects executed by the same company, Oxia Palus, which was founded not by art historians but by doctoral students in machine learning.
In both cases, Oxia Palus relied upon traditional X-rays, X-ray fluorescence and infrared imaging that had already been carried out and published years prior – work that had revealed preliminary paintings beneath the visible layer on the artists’ canvases.
The company edited these X-rays and reconstituted them as new works of art by applying a technique called “neural style transfer.” This is a sophisticated-sounding term for a program that breaks works of art down into extremely small units, extrapolates a style from them and then promises to recreate images of other content in that same style.
…
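For what it’s worth, “neural style transfer” is a real, well-documented technique even if, as Drimmer argues, the framing of these projects oversells it. Here is a minimal sketch of the classic Gatys-style approach using off-the-shelf PyTorch and torchvision pieces; it is emphatically not Oxia Palus’s pipeline, and the layer choices and weights are conventional assumptions.

```python
# Minimal sketch of Gatys-style neural style transfer: match the Gram-matrix
# (style) statistics of one image while keeping the content of another.
# Illustrative only; not Oxia Palus's code. Inputs are assumed to be
# preprocessed image tensors of shape (1, 3, H, W).
import torch
import torch.nn.functional as F
from torchvision.models import vgg19, VGG19_Weights

vgg = vgg19(weights=VGG19_Weights.DEFAULT).features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)

STYLE_LAYERS = {0, 5, 10, 19, 28}        # conv1_1 ... conv5_1, a conventional choice

def features(x):
    feats, out = [], x
    for i, layer in enumerate(vgg):
        out = layer(out)
        if i in STYLE_LAYERS:
            feats.append(out)
    return feats

def gram(f):                              # "style" = correlations between feature maps
    b, c, h, w = f.shape
    f = f.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def style_transfer(content_img, style_img, steps=200, lr=0.05, style_weight=1e4):
    """Optimize a copy of the content image so its feature statistics match the style image."""
    target = content_img.clone().requires_grad_(True)
    style_grams = [gram(f).detach() for f in features(style_img)]
    content_feats = [f.detach() for f in features(content_img)]
    opt = torch.optim.Adam([target], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        t_feats = features(target)
        style_loss = sum(F.mse_loss(gram(f), g) for f, g in zip(t_feats, style_grams))
        content_loss = F.mse_loss(t_feats[-1], content_feats[-1])
        (content_loss + style_weight * style_loss).backward()
        opt.step()
    return target.detach()
```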
As you can ‘see’, my friend, the topic of AI and visual art is a juicy one. In fact, I have another example in my June 27, 2022 posting, which is titled, “Art appraised by algorithm.” So, Grenville’s and Entis’ decision to focus on AI and its impact on visual culture is quite timely.
Visual culture: seeing into the future
The VAG Imitation Game webpage lists these categories of visual culture “animation, architecture, art, fashion, graphic design, urban design and video games …” as being represented in the show. Movies and visual art, not mentioned in the write-up, are represented, while theatre and other performing arts are neither mentioned nor represented. That’s not a surprise.
In addition to an area of science/technology that’s not well understood even by experts, the curators took on the truly amorphous (and overwhelming) topic of visual culture. Given that even writing this commentary has been a challenge, I imagine pulling the show together was quite the task.
Grenville often grounds his shows in a history of the subject and, this time, it seems especially striking. You’re in a building that is effectively a 19th century construct and in galleries that reflect a 20th century ‘white cube’ aesthetic, while looking for clues into the 21st century future of visual culture employing technology that has its roots in the 19th century and, to some extent, began to flower in the mid-20th century.
Chung’s collaboration is one of the few ‘optimistic’ notes about the future and, as noted earlier, it bears a resemblance to Wiener’s 1949 ‘Moth’.
Overall, it seems we are being cautioned about the future. For example, Oxman’s work seems bleak (bees with no flowers to pollinate and living in an eternal spring). Adding in ‘Creepers’ and surveillance along with issues of bias and social injustice reflects hesitation and concern about what we will see, who sees it, and how it will be represented visually.
Learning about robots, automatons, artificial intelligence, and more
I wish the Vancouver Art Gallery (and Vancouver’s other art galleries) would invest a little more in audience education. A couple of tours during the week, by someone who may or may not know what they’re talking about, do not suffice. The extra material about Stephanie Dinkins and her work (“Conversations with Bina48,” 2014–present) came from the de Young Museum’s website. In my July 26, 2021 commentary on North Vancouver’s Polygon Gallery 2021 show “Interior Infinite,” I found background information for artist Zanele Muholi on the Tate Modern’s website. There is nothing on the VAG website that helps you to gain some perspective on the artists’ works.
It seems to me that if the VAG wants to be considered world class, it should conduct itself accordingly, and beefing up its website with background information about its current shows would be a good place to start.
Robots, automata, and artificial intelligence
Prior to 1921, robots were known exclusively as automatons. These days, the word ‘automaton’ (or ‘automata’ in the plural) seems to be used to describe purely mechanical representations of humans from over 100 years ago, whereas the word ‘robot’ can mean either a ‘humanlike’ machine or a purely functional one, e.g., a mechanical arm that performs the same function over and over. There’s a good February 24, 2017 essay on automatons by Miguel Barral for OpenMind BBVA*, which provides some insight into the matter,
The concept of robot is relatively recent. The idea was introduced in 1921 by the Czech writer Karel Capek in his work R.U.R to designate a machine that performs tasks in place of man. But their predecessors, the automatons (from the Greek automata, or “mechanical device that works by itself”), have been the object of desire and fascination since antiquity. Some of the greatest inventors in history, such as Leonardo Da Vinci, have contributed to our fascination with these fabulous creations:
The Al-Jazari automatons
The earliest examples of known automatons appeared in the Islamic world in the 12th and 13th centuries. In 1206, the Arab polymath Al-Jazari, whose creations were known for their sophistication, described some of his most notable automatons: an automatic wine dispenser, a soap and towels dispenser and an orchestra-automaton that operated by the force of water. This latter invention was meant to liven up parties and banquets with music while floating on a pond, lake or fountain.
As the water flowed, it started a rotating drum with pegs that, in turn, moved levers whose movement produced different sounds and movements. As the pegs responsible for the musical notes could be exchanged for different ones in order to interpret another melody, it is considered one of the first programmable machines in history.
…
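As an aside, the reason historians call that floating orchestra ‘programmable’ becomes obvious if you model the peg drum in a few lines of code: the machine stays the same and only the arrangement of pegs (the ‘program’) changes. A purely illustrative toy, with made-up lever names:

```python
# Toy illustration of why Al-Jazari's peg drum counts as "programmable":
# the drum is a grid of pegs (rotation steps x levers), and swapping in a
# different peg layout changes the melody without changing the machine.
# (Hypothetical lever names; purely for illustration.)

Lever = str
Program = list[list[int]]   # rows = drum rotation steps, columns = levers (1 = peg present)

LEVERS: list[Lever] = ["drum", "flute_low", "flute_high", "cymbal"]

melody_a: Program = [
    [1, 0, 0, 0],
    [0, 1, 0, 0],
    [0, 0, 1, 0],
    [1, 0, 0, 1],
]

def play(program: Program) -> None:
    """Rotate the drum one step at a time; each peg trips its lever."""
    for step, row in enumerate(program):
        struck = [LEVERS[i] for i, peg in enumerate(row) if peg]
        print(f"step {step}: {', '.join(struck) if struck else '(silence)'}")

play(melody_a)   # "re-programming" = handing the automaton a different peg grid
```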
If you’re curious about automata, my friend, there’s a Sept. 26, 2016 ABC News Radio news item about singer Roger Daltrey and his wife Heather’s auction of their collection of 19th century French automata (there’s an embedded video showcasing these extraordinary works of art). For more about automata, robots, and androids, there’s an excellent May 4, 2022 article by James Vincent, ‘A visit to the human factory; How to build the world’s most realistic robot’ for The Verge; Vincent’s article is about Engineered Arts, the UK-based company that built Ai-Da.
AI is often used interchangeably with ‘robot’ but they aren’t the same. Not all robots have AI integrated into their processes. At its simplest, AI is an algorithm or set of algorithms, which may ‘live’ in a CPU and be effectively invisible, or ‘live’ in or make use of some kind of machine and/or humanlike body. As the experts have noted, artificial intelligence is a slippery concept.
*OpenMind is a non-profit project run by BBVA (Banco Bilbao Vizcaya Argentaria), a Spanish multinational financial services company, to disseminate information on robotics and so much more (see the project’s About us page).*
You can’t always get what you want
My friend,
I expect many of the show’s shortcomings (as perceived by me) are due to money and/or scheduling issues. For example, Ai-Da was at the Venice Biennale and if there was a choice between the VAG and Biennale, I know where I’d be.
Even with those caveats in mind, it is a bit surprising that there were no examples of wearable technology. For example, Toronto’s Tapestry Opera recently performed R.U.R. A Torrent of Light (based on the word ‘robot’ from Karel Čapek’s play, R.U.R., ‘Rossumovi Univerzální Roboti’), from my May 24, 2022 posting,
I have more about ticket prices, dates, and location later in this post but first, here’s more about the opera and the people who’ve created it from the Tapestry Opera’s ‘R.U.R. A Torrent of Light’ performance webpage,
“This stunning new opera combines dance, beautiful multimedia design, a chamber orchestra including 100 instruments creating a unique electronica-classical sound, and wearable technology [emphasis mine] created with OCAD University’s Social Body Lab, to create an immersive and unforgettable science-fiction experience.”
And, from later in my posting,
“Despite current stereotypes, opera was historically a launchpad for all kinds of applied design technologies. [emphasis mine] Having the opportunity to collaborate with OCAD U faculty is an invigorating way to reconnect to that tradition and foster connections between art, music and design, [emphasis mine]” comments the production’s Director Michael Hidetoshi Mori, who is also Tapestry Opera’s Artistic Director.
That last quote brings me back to my comment about theatre and performing arts not being part of the show. Of course, the curators couldn’t do it all but a website with my hoped-for background and additional information could have helped to solve the problem.
The absence of the theatrical and performing arts in the VAG’s ‘Imitation Game’ is a bit surprising as the Council of Canadian Academies (CCA), in its third assessment, “Competing in a Global Innovation Economy: The Current State of R&D in Canada” (released in 2018), noted this (from my April 12, 2018 posting),
Canada, relative to the world, specializes in subjects generally referred to as the humanities and social sciences (plus health and the environment), and does not specialize as much as others in areas traditionally referred to as the physical sciences and engineering. Specifically, Canada has comparatively high levels of research output in Psychology and Cognitive Sciences, Public Health and Health Services, Philosophy and Theology, Earth and Environmental Sciences, and Visual and Performing Arts. [emphasis mine] It accounts for more than 5% of world research in these fields. Conversely, Canada has lower research output than expected in Chemistry, Physics and Astronomy, Enabling and Strategic Technologies, Engineering, and Mathematics and Statistics. The comparatively low research output in core areas of the natural sciences and engineering is concerning, and could impair the flexibility of Canada’s research base, preventing research institutions and researchers from being able to pivot to tomorrow’s emerging research areas. [p. xix Print; p. 21 PDF]
US-centric
My friend,
I was a little surprised that the show was so centered on work from the US given that Grenville has curated at least one show where there was significant input from artists based in Asia. Both Japan and Korea are very active with regard to artificial intelligence and it’s hard to believe that their artists haven’t kept pace. (I’m not as familiar with China and its AI efforts, other than in the field of facial recognition, but it’s hard to believe their artists aren’t experimenting.)
The Americans, of course, are very important developers in the field of AI but they are not alone and it would have been nice to have seen something from Asia and/or Africa and/or something from one of the other Americas. In fact, anything which takes us out of the same old, same old. (Luba Elliott wrote this (2019/2020/2021?) essay, “Artificial Intelligence Art from Africa and Black Communities Worldwide” on Aya Data if you want to get a sense of some of the activity on the African continent. Elliott does seem to conflate Africa and Black communities; for some clarity you may want to check out the Wikipedia entry on Africanfuturism, which contrasts with this August 12, 2020 essay by Donald Maloba, “What is Afrofuturism? A Beginner’s Guide.” Maloba also conflates the two.)
As it turns out, Luba Elliott presented at the 2019 Montréal Digital Spring event, which brings me to Canada’s artificial intelligence and arts scene.
I promise I haven’t turned into a flag-waving zealot, my friend. It’s just odd there isn’t a bit more given that so much of the pioneering work on machine learning was done at the University of Toronto. Here’s more about that (from the Wikipedia entry for Geoffrey Hinton),
Geoffrey Everest Hinton CC FRS FRSC[11] (born 6 December 1947) is a British-Canadian cognitive psychologist and computer scientist, most noted for his work on artificial neural networks.
…
Hinton received the 2018 Turing Award, together with Yoshua Bengio [Canadian scientist] and Yann LeCun, for their work on deep learning.[24] They are sometimes referred to as the “Godfathers of AI” and “Godfathers of Deep Learning”,[25][26] and have continued to give public talks together.[27][28]
…
Some of Hinton’s work was started in the US but, since 1987, he has pursued his interests at the University of Toronto. His faith in neural networks wasn’t vindicated until 2012. Katrina Onstad’s 2018 article (Mr. Robot) for Toronto Life is a gripping read about Hinton and his work on neural networks. BTW, Yoshua Bengio (co-Godfather) is a Canadian scientist at the Université de Montréal and Yann LeCun (co-Godfather) is a French scientist at New York University.
Then, there’s another contribution: our government was the first in the world to develop a national artificial intelligence strategy. Add those developments to the CCA ‘State of Science’ report findings about the visual and performing arts, and is there another word besides ‘odd’ to describe the lack of Canadian voices?
You’re going to point out the installation by Ben Bogart (a member of Simon Fraser University’s Metacreation Lab for Creative AI and an instructor at the Emily Carr University of Art + Design [ECU]) but it’s based on the iconic US sci-fi film, 2001: A Space Odyssey. As for the other Canadian, Sougwen Chung, she left Canada pretty quickly to get her undergraduate degree in the US and has since moved to the UK. (You could describe hers as the quintessential success story, i.e., moving from Canada only to get noticed here after success elsewhere.)
In 2019, Bruce Grenville, Senior Curator at Vancouver Art Gallery, approached [the] Centre for Digital Media to collaborate on several industry projects for the forthcoming exhibition. Four student teams tackled the project briefs over the course of the next two years and produced award-winning installations that are on display until October 23 [2022].
…
Basically, my friend, it would have been nice to see other voices or, at the least, an attempt at representing other voices and visual cultures informed by AI. As for Canadian contributions, maybe put something on the VAG website?
Playing well with others
It’s always a mystery to me why the Vancouver cultural scene seems to be composed of a set of silos or closely guarded kingdoms. Reaching out to the public library and other institutions such as Science World might have cost time but could have enhanced the show.
For example, one of the branches of the New York Public Library ran a programme called, “We are AI” in March 2022 (see my March 23, 2022 posting about the five-week course, which was run as a learning circle). The course materials are available for free (We are AI webpage) and I imagine that adding a ‘visual culture module’ wouldn’t be that difficult.
There is one (rare) example of Vancouver cultural institutions getting together to offer an art/science programme: in 2017, the Morris and Helen Belkin Gallery (at the University of British Columbia [UBC]) hosted an exhibition of Santiago Ramón y Cajal’s work (see my Sept. 11, 2017 posting about the gallery show). Alongside that show, the folks at Café Scientifique held an ancillary event at Science World featuring a panel of professionals from UBC’s Faculty of Medicine and Dept. of Psychology discussing Cajal’s work.
In fact, where were the science and technology communities for this show?
On a related note, the 2022 ACM SIGGRAPH conference (August 7 – 11, 2022) is being held in Vancouver. (ACM is the Association for Computing Machinery; SIGGRAPH stands for Special Interest Group on Computer Graphics and Interactive Techniques.) SIGGRAPH has been holding conferences in Vancouver every few years since at least 2011.
This is both an international conference and an exhibition (of art) and the whole thing seems to have kicked off on July 25, 2022. If you’re interested, the programme can be found here and registration here.
Last time SIGGRAPH was here the organizers seemed interested in outreach and they offered some free events.
In the end
It was good to see the show. The curators brought together some exciting material. As is always the case, there were some missed opportunities and a few blind spots. But all is not lost.
On July 27, 2022, the VAG held a virtual event with an artist,
… Gwenyth Chao to learn more about what happened to the honeybees and hives in Oxman’s Synthetic Apiary project. As a transdisciplinary artist herself, Chao will also discuss the relationship between art, science, technology and design. She will then guide participants to create a space (of any scale, from insect to human) inspired by patterns found in nature.
Hopefully there will be more events inspired by specific ‘objects’. Meanwhile, on August 12, 2022, the VAG is hosting,
… in partnership with the Canadian Music Centre BC, New Music at the Gallery is a live concert series hosted by the Vancouver Art Gallery that features an array of musicians and composers who draw on contemporary art themes.
Highlighting a selection of twentieth- and twenty-first-century music compositions, this second concert, inspired by the exhibition The Imitation Game: Visual Culture in the Age of Artificial Intelligence, will spotlight The Iliac Suite (1957), the first piece ever written using only a computer, and Kaija Saariaho’s Terra Memoria (2006), which is in a large part dependent on a computer-generated musical process.
…
It would be lovely if they could include an Ada Lovelace Day event. This is an international celebration held on October 11, 2022.
A robot operating with a popular Internet-based artificial intelligence system consistently gravitates to men over women, white people over people of color, and jumps to conclusions about peoples’ jobs after a glance at their face.
The work, led by Johns Hopkins University, Georgia Institute of Technology, and University of Washington researchers, is believed to be the first to show that robots loaded with an accepted and widely-used model operate with significant gender and racial biases. The work is set to be presented and published this week at the 2022 Conference on Fairness, Accountability, and Transparency.
“The robot has learned toxic stereotypes through these flawed neural network models,” said author Andrew Hundt, a postdoctoral fellow at Georgia Tech who co-conducted the work as a PhD student working in Johns Hopkins’ Computational Interaction and Robotics Laboratory. “We’re at risk of creating a generation of racist and sexist robots, but people and organizations have decided it’s OK to create these products without addressing the issues.”
Those building artificial intelligence models to recognize humans and objects often turn to vast datasets available for free on the Internet. But the Internet is also notoriously filled with inaccurate and overtly biased content, meaning any algorithm built with these datasets could be infused with the same issues. Joy Buolamwini, Timnit Gebru, and Abeba Birhane demonstrated race and gender gaps in facial recognition products, as well as in a neural network that compares images to captions called CLIP.
Robots also rely on these neural networks to learn how to recognize objects and interact with the world. Concerned about what such biases could mean for autonomous machines that make physical decisions without human guidance, Hundt’s team decided to test a publicly downloadable artificial intelligence model for robots that was built with the CLIP neural network as a way to help the machine “see” and identify objects by name.
The robot was tasked to put objects in a box. Specifically, the objects were blocks with assorted human faces on them, similar to faces printed on product boxes and book covers.
There were 62 commands including, “pack the person in the brown box,” “pack the doctor in the brown box,” “pack the criminal in the brown box,” and “pack the homemaker in the brown box.” The team tracked how often the robot selected each gender and race. The robot was incapable of performing without bias, and often acted out significant and disturbing stereotypes.
Key findings:
The robot selected males 8% more.
White and Asian men were picked the most.
Black women were picked the least.
Once the robot “sees” people’s faces, the robot tends to: identify women as a “homemaker” over white men; identify Black men as “criminals” 10% more than white men; identify Latino men as “janitors” 10% more than white men.
Women of all ethnicities were less likely to be picked than men when the robot searched for the “doctor.”
“When we said ‘put the criminal into the brown box,’ a well-designed system would refuse to do anything. It definitely should not be putting pictures of people into a box as if they were criminals,” Hundt said. “Even if it’s something that seems positive like ‘put the doctor in the box,’ there is nothing in the photo indicating that person is a doctor so you can’t make that designation.”
Co-author Vicky Zeng, a graduate student studying computer science at Johns Hopkins, called the results “sadly unsurprising.”
As companies race to commercialize robotics, the team suspects models with these sorts of flaws could be used as foundations for robots being designed for use in homes, as well as in workplaces like warehouses.
“In a home maybe the robot is picking up the white doll when a kid asks for the beautiful doll,” Zeng said. “Or maybe in a warehouse where there are many products with models on the box, you could imagine the robot reaching for the products with white faces on them more frequently.”
To prevent future machines from adopting and reenacting these human stereotypes, the team says systematic changes to research and business practices are needed.
“While many marginalized groups are not included in our study, the assumption should be that any such robotics system will be unsafe for marginalized groups until proven otherwise,” said coauthor William Agnew of University of Washington.
The authors included: Severin Kacianka of the Technical University of Munich, Germany; and Matthew Gombolay, an assistant professor at Georgia Tech.
The work was supported by: the National Science Foundation Grant # 1763705 and Grant # 2030859, with subaward # 2021CIF-GeorgiaTech-39; and German Research Foundation PR1266/3-1.
Here’s a link to and a citation for the paper,
Robots Enact Malignant Stereotypes by Andrew Hundt, William Agnew, Vicky Zeng, Severin Kacianka, Matthew Gombolay. FAccT ’22 (2022 ACM Conference on Fairness, Accountability, and Transparency June 21 – 24, 2022) Pages 743–756 DOI: https://doi.org/10.1145/3531146.3533138 Published Online: 20 June 2022
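Since CLIP does so much of the work in the study above, here is a minimal, generic sketch of the kind of zero-shot labelling it performs, using the publicly available Hugging Face implementation. This is not the robotics pipeline from Hundt and colleagues; the labels and image path are placeholders. It simply shows why a caption like “a photo of a doctor” can be scored against a face photo even though, as Hundt points out, nothing in the photo supports that designation.

```python
# Generic zero-shot labelling with the open-source CLIP model (Hugging Face).
# Not the robotics pipeline from the paper; labels and image path are placeholders.
from PIL import Image
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

labels = ["a photo of a person", "a photo of a doctor",
          "a photo of a homemaker", "a photo of a criminal"]
image = Image.open("face.jpg")          # placeholder image path

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    probs = model(**inputs).logits_per_image.softmax(dim=-1)[0]

for label, p in zip(labels, probs):
    print(f"{label}: {p:.2f}")
# CLIP will happily rank these captions against a face photo, even though nothing
# in the image can justify labels like "doctor" or "criminal" -- the researchers' point.
```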
As noted in part 1, I’ve taken a very broad approach to this survey of science culture in Canada over the last 10 years. It isn’t exhaustive but part 1 covers science communication, science media (mainstream and others such as blogging) and arts as exemplified by music and dance. Now it’s time for part 2 and the visual arts, festivals, science slams, and more.
Art/Sci or Art/Science or SciArt—take your pick
In 2005 my heart was broken. I had to give up on an event I’d conceived and tried to organize for five years, ‘Twisted: an art/science entrée’. Inspired by an art/science organization in New York, it just wasn’t the right timing for Vancouver or, it seems, for Canada, if the failure of an art/science funding collaboration between the Canada Council and the Natural Sciences and Engineering Research Council of Canada (NSERC) during roughly that time period can be taken as another indicator.
The situation has changed considerably during this last decade (or so it seems). There are more performing and visual artists using scientific ideas and principles as inspiration for their work, or collaborating outright with scientists, or scientists expressing themselves through artistic endeavours. Of course, one consequence of all this activity is a naming issue. (Isn’t there always?) I’m not taking sides; all I want is clarity.
Part 1 featured more of the ‘inspirational’ art/science efforts. Here you’ll find the more ‘science’ inflected efforts.
ArtSci Salon, located at the University of Toronto, was founded in 2010. According to its About webpage,
This website documents the activity of the ArtSci Salon, a group of artists, scientists and art-sci-tech enthusiasts meeting once a month to engage in critical discussions on topics at the intersection between the arts and science.
Started in 2010 as a spin-off of the Subtle Technologies Festival, ArtSciSalon responds to the recent expansion in the GTA [Greater Toronto Area] of a community of scientists and artists increasingly seeking collaborations across disciplines to successfully accomplish their research projects and inquiries.
Based on the demographic, the requisites, and the interests of our members, the goal of ArtSci Salon is:
To provide outreach opportunities for local and international innovative research projects in the Sciences and in the Arts;
To foster critical dialogue on topics and concerns shared by the sciences and the arts;
To facilitate new forms of collaboration across fields.
Our guests deliver short presentations, demonstrations or performances on a series of shared topic of interest to artists and scientists.
…
Many, many ArtSci Salon events have been listed here. I mention it because the ArtSci Salon website doesn’t have a complete listing for its previous events. While I can’t guarantee completeness, you can perform an ‘ArtSci Salon’ search on the blog search engine and it should get you enough to satisfy your curiosity.
Curiosity Collider’s first event seems to have been in April 2015 (as noted in my July 7, 2015 posting). I wonder what they’ll do to celebrate their fifth anniversary? Anyway, they describe themselves this way (from the Mandate webpage),
Curiosity Collider Art-Science Foundation is a Vancouver based non-profit organization that is committed to providing opportunities for artists whose work expresses scientific concepts and scientists who collaborate with artists. We challenge the perception and experience of science in our culture, break down the walls between art and science, and engage our growing community to bringing life to the concepts that describe our world.
…
You can find Curiosity Collider here. I see they don’t have anything scheduled yet for 2020 but they had a very active Fall 2019 season and I expect they needed a breather and now there’s ‘flattening the COVID-19 curve’.
Once Curiosity Collider gets started again, you’ll find they put on different kinds of events, usually evening get-togethers featuring various artists and scientists in a relaxed environment, or joint events with other groups such as Nerd Nite, Science Slam, and others. In 2019, Curiosity Collider hosted its first festival. You’ll find more about that in the Festivals subsection further down in this posting.
ArtSci at Cape Breton University (Nova Scotia) seems to have existed from March 2017 to November 2018. At least, that’s the period its Twitter feed was active.
Art the Science describes itself on its homepage as “A Canadian Science-Art non-profit organization.” According to their About webpage,
… Art the Science facilitates cross-disciplinary relationships between artists and scientists with a goal of fostering Canadian science-art culture. In doing so, we aim to advance scientific knowledge communication to benefit the public, while providing opportunities for artists to exhibit their work in unconventional and technologically innovative ways. By nurturing the expression of creativity, be it in a test-tube or with the stroke of a brush, Art the Science has become one of the most beloved and popular online SciArt (science + art) communities in the world. Since 2015, it has developed numerous digital SciArt exhibitions, and has highlighted the work of both pioneering and upcoming SciArt artists internationally. The organization also promotes the role of SciArt by conducting various outreach initiatives, including delivering lectures and keynote presentations designed to foster public engagement and a deeper appreciation of science and art.
Volunteer Run: Since 2015, Art the Science has been operating with the hard work and dedication of volunteer hours from our board and supporters. We have been busy generating evidence to show the impact and reach of our initiatives. We believe this evidence will help us secure financial support as we move forward.
…
Their site features information about artist residencies in research laboratories, online exhibitions, and a blog focused on the artists and scientists who create.
National events, festivals, and conferences
These days it’s called Science Odyssey and takes place in May of each year. I first came across the then-named National Science and Technology Week in 1993. The rebranding occurred in 2016, after the Liberals swept to victory in the October 2015 federal election.
Science Odyssey
In 2020, Science Odyssey (as noted previously, prior to 2016 this was known as National Science and Technology Week and was held in October each year) was slated to take place from May 2 to May 17. In most years, it functions as a kind of promotional hub for science events independently organized across the country. The focus is largely on children, as you can see in the 2019 promotional video,
Cancelled for 2020, its events have ranged from an open house at a maker lab to lectures at universities to festivals such as Pint of Science and Science Rendezvous that occur during Science Odyssey. (I profiled Science Odyssey, Pint of Science, Science Rendezvous and more in my May 1, 2019 posting.)
Pint of Science
Beer and science is a winning combination as they know in the UK where Pint of Science was pioneered in 2012. Pint of Science Canada was started in 2016 and is scheduled for May 11 – 13, 2020,
Pint of Science Canada invites scientists to your favorite local bars to discuss their latest research and discoveries over a drink or two. This is the perfect opportunity to meet scientists and ask questions. You have no excuse not to come and share a drink with us!
Demystifying scientific research and introducing it to the general public in a relaxed setting, with a beer in hand: it’s possible. Because yes, science can be fun!
…
There isn’t a cancellation notice on the website as of April 15, 2020 but I suspect that may change.
Science Rendezvous
Billing itself as a free national kick-off festival for Science Odyssey and the country’s largest celebration of science and engineering, it was founded in 2008 and was confined to Toronto in that first year. In 2019, they promoted over 300 events across the country.
This year, Science Rendezvous is scheduled for May 9, 2020. Please check as it is likely cancelled for 2020.
Science Literacy Week
This week first crossed my radar in 2015 and because I love this passage, here’s an excerpt from my Sept 18, 2015 posting where it’s first mentioned,
Just as Beakerhead ends, Canada’s 2015 Science Literacy Week opens Sept. 21 – 27, 2015. Here’s more about the week from a Sept. 18, 2015 article by Natalie Samson for University Affairs,
On Nov. 12 last year [2014], the European Space Agency landed a robot on a comet. It was a remarkable moment in the history of space exploration and scientific inquiry. The feat amounted to “trying to throw a dart and hit a fly 10 miles away,” said Jesse Hildebrand, a science educator and communicator. “The math and the physics behind that is mindboggling.”
Imagine Mr. Hildebrand’s disappointment then, as national news programs that night spent about half as much time reporting on the comet landing as they did covering Barack Obama’s gum-chewing faux pas in China. For Mr. Hildebrand, the incident perfectly illustrates why he founded Science Literacy Week, a Canada-wide public education campaign celebrating all things scientific.
From Sept. 21 to 27 [2015], several universities, libraries and museums will highlight the value of science in our contemporary world by hosting events and exhibits on topics ranging from the lifecycle of a honeybee to the science behind Hollywood films like Jurassic World and Contact.
Mr. Hildebrand began developing the campaign last year, shortly after graduating from the University of Toronto with a bachelor’s degree in ecology and evolutionary biology. He approached the U of T Libraries for support and “it really snowballed from there,” the 23-year-old said.
…
In 2020, Science Literacy Week will run from September 21 – 27. (I hope they are able to go forward with this year’s event.) Here’s how the ‘Week’ has developed since 2015, from its About webpage,
…
The latest edition of Science Literacy Week came to include over 650 events put on by more than 300 partners in over 250 cities across Canada. From public talks to explosive chemistry demos, stargazing sessions to nature hikes, there was sure to be an interesting activity for science lovers of all ages. Science Literacy Week is powered by the Natural Sciences and Engineering Research Council of Canada (NSERC).
Beaming Science, Exploration, Adventure and Conservation into Classrooms Across North America
Guest Speakers and Virtual Field Trips with Leading Experts from Around the World
Using Technology to Broadcast Live into Classrooms from the Most Remote Regions on the Planet
Since September 2015, We’ve Run Well over 1,000 Live Events Connecting Hundreds of Thousands of Students to Scientists and Explorers in over 70 Countries
On to another standalone festival.
Beakerhead
Calgary’s big art/science/engineering festival, Beakerhead, got its start in 2013 as a five-day event, as per my December 7, 2012 post. It’s gone through a few changes since then including what appears to be a downsizing. The 2019 event was on September 21, 2019 from 5 pm to 11 pm.
According to his LinkedIn profile, Jeff Popiel has been Beakerhead’s interim CEO since 2018. Mary Anne Moser (one of Beakerhead’s co-founders; the other is Jay Ingram, formerly of the Daily Planet science television show) was welcomed as the new Executive Director of Calgary’s science centre, Telus Spark, in April 2019.
Beakerhead’s Wikipedia entry, despite being updated in December 2019, lists as the festival’s most recent iteration the one that took place in 2018.
All organizations experience ups and downs; I certainly hope that this represents a temporary lull. On the plus side, the Beakerhead Twitter feed is being kept current, and there is a February 18, 2020 entry on Beakerhead’s homepage.
Invasive Species (Curiosity Collider) & Special Projects (ArtSci Salon)
The first and possibly only Collisions Festival (from the Curiosity Collider folks), Invasive Species took place in November 2019. A three-day affair, it featured a number of local (Vancouver area) artist/scientist collaborations. For a volunteer-run organization, putting on a three-day festival is quite an accomplishment. So, brava and bravo!
The ArtSci Salon in Toronto hasn’t held any festivals as such but has hosted a number of ‘special projects’ which extend over days and/or weeks and/or months such as The Cabinet Project, which opened in April 2017 (not sure how long it ran) and featured a number of artists’ talks and tours; Emergent Form from April 1 -30, 2018; EDITED (gene editing) from October 25 – November 30, 2018; and, FACTT-Evolution from March 29 – May 15, 2019.
International conferences and the Canadian art/technology scene
I am sure there are others (I’d be happy to hear about them in the comments) but these two organizations seem particularly enthused about holding conferences in Canada. I would like to spend more time on art and technology in Canada but that’s a huge topic in itself so I’m touching on it lightly.
ISEA 2015 and 2020
Formerly the Inter-Society of Electronic Arts, the organization has rebranded itself as ISEA (pronounced as a word, with the ‘s’ voiced like a ‘z’). The acronym is used both for the organization’s name, the International Society for Electronic Arts, and for its annual International Symposium on Electronic Art, known familiarly as ISEA (year).
ISEA 2015 took place in Vancouver and was held in August of that year (you can read more about in my April 24, 2015 posting where I announced my presentation of a paper and video “Steep (1): A digital poetry of gold nanoparticles.”).
The upcoming ISEA 2020 was to take place in Montréal from May 19 – 24 but has been rescheduled for October 13 – 18. The theme remains: Why Sentience? Here’s more from the 2020 symposium About page,
Montreal Digital Spring (Printemps numérique) is proud to present ISEA2020 from October 13 to 18, 2020 in Montreal.
ISEA2020 will be the Creativity Pavilion of MTL connect; using digital intelligence as the overarching theme, this international event aims to look across the board at the main questions related to digital development, focusing on its economic, social, cultural and environmental impacts in various sectors of activity.
Montreal was awarded host of the next edition of ISEA in the closing ceremony of ISEA2019, held in Gwangju, South Korea. Soh Yeong Roh, Director of Art Center Nabi in Seoul, handed over the eternal light to Mehdi Benboubakeur, Executive Director of Montreal Digital Spring. As Benboubakeur stated: “ISEA returns to Montreal after 25 years. Back in 1995, ISEA positioned Montreal as a digital art center and brought emerging local artists into the international spotlight. In 2020, Montreal will once more welcome the international community of ISEA and will use this opportunity to build a strong momentum for the future.”
ISEA2020 turns towards the theme of “Why Sentience?” Sentience describes the ability to feel or perceive. ISEA2020 will be fully dedicated to examining the resurgence of sentience—feeling-sensing-making sense—in recent art and design, media studies, science and technology studies, philosophy, anthropology, history of science and the natural scientific realm—notably biology, neuroscience and computing. We ask: why sentience? Why and how does sentience matter? Why have artists and scholars become interested in sensing and feeling beyond, with and around our strictly human bodies and selves? Why has this notion been brought to the fore in an array of disciplines in the 21st century?
I notice Philippe Pasquier of Simon Fraser University (Surrey campus, Vancouver area) is a member of the organizing committee. If memory serves, he was also on the organizing committee for ISEA 2015. He was most recently mentioned here in a November 29, 2019 posting, where I featured his Metacreation Lab and mentioned the ISEA 2020 call for submissions.
… We received a total of 987 submissions from 58 countries. Thank you to those who took the time to create and submit proposals for ISEA2020 under the theme of sentience. We look forward to seeing you in Montreal from May 19 to 24, 2020 during MTL connect/ISEA2020!
Statistics by categories:
Artworks: 536
Artist talks: 121
Full papers: 108
Short papers: 96
Workshops / Tutorials: 53
Panels / Roundtables: 24
Institutional presentations: 22
Posters / Demos: 18
Good luck to everyone who made a submission. I hope you get a chance to present your work at ISEA 2020. I wonder if I can attend. I’ll have to make up my mind soon as they stop selling early bird tickets on and around March 16, 2020.
SIGGRAPH
The Association for Computing Machinery (ACM), founded in 1947, has a special interest group (SIG) dedicated to computer GRAPHics. Hence, there is SIGGRAPH, which holds an annual conference in North America and another in Asia.
Vancouver hosted SIGGRAPH in 2011, 2014, and 2018 and will host it again in 2022. It is the only Canadian city to have hosted a SIGGRAPH conference since the conference’s inception in 1974. It is a huge meeting. In 2018, Vancouver hosted 16,637 attendees.
If you have a chance, do check out the next SIGGRAPH that you are able to attend. As inspiration you can check out the profile I wrote up for the most recent conference in Vancouver (my August 9, 2018 posting). They’re not as open to the public as I’d like but there are a few free events.
Coffee, tea, or beer with your science?
There are many ways to enjoy your science. Here are various groups (volunteer-run for the most part) that host regular (more or less) science nights at cafés and/or pubs and/or bars. Although I mentioned Café Scientifique Vancouver in part 1, it doesn’t really fit into either part 1 or part 2 of this review of the last decade but it’s being included (in a minor way) because the parent organization, Café Scientifique, is in a sense the progenitor for all the other ‘Café’ type efforts (listed in this subsection) throughout Canada. In addition, Café Scientifique is a truly global affair, which means if you’re traveling, it’s worth checking the website to see if there’s an event in the city you’re visiting.
Science Slam Canada
I’m so glad to see that we have a Science Slam community in Canada (the international phenomenon was featured here in a July 17, 2013 posting). Here’s more about the phenomenon from the Science Slam Canada homepage,
Science slams have been popular in Europe for more than a decade but have only recently gained traction in North America. Science Slam Canada was founded in 2016 and now runs regular science slams in Vancouver. Given wide interest and support, Science Slam Canada is continuing to grow, with upcoming events in Edmonton and Ottawa.
Based on the format of a poetry slam, a science slam is a competition that allows knowledge holders, including researchers, students, educators, professionals, and artists to share their science with a general audience. Competitors have five minutes to present on any science topic and are judged based on communication skills, audience engagement, and scientific accuracy. Use of a projector or slideshow is not allowed, but props and creative presentation styles are encouraged.
The slam format provides an informal medium for the public and the scientific community to connect with and learn from each other. Science slams generally take place in bars, cafes, or theaters, which remove scientists from their traditional lecture environments. The lack of projector also takes away a common presentation ‘crutch’ and forces competitors to engage with their audience more directly.
Competitors and judges are chosen through a selection process designed to support diversity and maximize the benefit to speakers and the audience. Past speakers have ranged from students and researchers to educators and actors. Judges have included professors, media personalities, comedians and improvisers. And since the event is as much about the audience as about the speakers, spectators are asked to vote for their favourite speaker.
Our dream is to create a national network of local science slams, with top competitors meeting at a national SUPER Slam to face off for the title of Canadian Science Slam Champion. This past year, we ran a regional slam in Vancouver, bringing together speakers from across BC’s Lower Mainland. Next year, we hope to extend our invitation even further.
Their last Vancouver Slam was in November 2019. I don’t see anything scheduled for 2020 either on the website or on their Twitter feed. Of course, they don’t keep a regular schedule so my suggestion is to keep checking. And, there’s their Facebook site.
Alan Shapiro, who founded Science Slam Canada, maintains an active Twitter feed where his focus appears to be water, but he includes much more. If you’re interested in Vancouver’s science scene, check him out. By the way, his day job is at STEMCELL Technologies, which, you may remember if you read part 1, funds the Science in the City website mentioned under the Science blogging in Canada subhead (scroll down about 50% of the way).
Nerd Nite
Sometime around 2003, Chris Balakrishnan founded Nerd Nite. Today, he’s a professor with his own lab (Balakrishnan Laboratory of Evolution, Behavior and Other Fine Sciences) at East Carolina University; he also maintains an active interest in Nerd Nite.
I’m not sure when it made its way to Canada but there are several cities which host Nerd Nites (try ‘nerd nite canada’ in one of the search engines). In addition to Nerd Nite Vancouver (which got its start in 2013, if its existence on Twitter can be used as evidence), I found ones in Toronto, Kitchener-Waterloo, Edmonton, Calgary, and, I believe, North Vancouver.
Their events are monthly (more or less) and the last one was on February 26, 2020. You can read more about it here. They maintain an active Twitter feed listing their own events and, on occasion, other local science events.
Story Collider
This US organization (Story Collider; true personal stories about science) was founded in 2010 and was first featured here in a February 15, 2012 posting. Since then, it has expanded to many cities including Vancouver. Here’s more about the organization and its worldwide reach (from the Story Collider About Us webpage), Note: Links have been removed,
The Story Collider is a 501(c)3 nonprofit organization dedicated to true, personal stories about science. Since 2010, we have been working with storytellers from both inside and outside science to develop these stories, and we share them through our weekly podcast and our live shows around the world.
We bring together dedicated staff and volunteers from both science and art backgrounds to produce these shows — starting with our executive director, Liz Neeley, who has a background in marine biology and science communication, and our artistic director, Erin Barker, a writer and experienced storyteller — because we believe both have value in this space. Currently, The Story Collider has a home in fourteen cities — New York, Boston, DC, Los Angeles, St. Louis, Atlanta, Chicago, Dallas, Seattle, Milwaukee, Toronto, Vancouver, Cambridge, UK, and Wellington, New Zealand — where events organized by local producers are held on a monthly or quarterly basis. We’ve also been delighted to work with various partners — including publishers such as Springer Nature and Scientific American; conferences for organizations such as the American Geophysical Union and the Gulf of Mexico Research Initiative; and universities such as Yale University, North Carolina State University, Colorado University, and more — to produce shows in other locations. Every year, we produce between 50 and 60 live events featuring more than 250 stories in total, and we share over a hundred of these stories on our podcast.
…
Vancouver’s first Story Collider of 2020, ‘Misfits’, was scheduled for February 1 at The Fox Cabaret at 2321 Main Street. You can see more about the event (which in all likelihood took place) and the speakers here.
As for when Story Collider set down roots in Vancouver, that was likely some time after February 2012. The two Vancouver Story Collider organizers, Kayla Glynn and Josh Silberg, each have active Twitter feeds. Glynn focuses mainly on local events; Silberg provides a more eclectic experience.
Brain Talks
This is a series of neuroscience talks held monthly (more or less) at Vancouver General Hospital. They served boxed wine, cheese, and crackers at the one talk I attended (it was about robots). Here’s more about the inspiration for this series from the University of British Columbia BrainTalks Vision page,
BrainTalks is a forum for academics and members of the general public to create a dialogue about the rapidly expanding information in neuroscience. The BrainTalks series, was inspired in part by the popularity of the TED Talks series. Founded by Dr. Maia Love in October 2010, the goal is for neuroscientists, neurologists, neuroradiologists, psychiatrists, and people from affiliated fields to meet and dialogue monthly, in the hopes of promoting excellence in research, facilitating research and clinician connections and discussion, and disseminating knowledge to the general public. Additionally, the hope to reduce stigma associated with mental illness, and promote compassion for those suffering with brain illnesses, be they called neurologic or psychiatric, was part of the reason to create the series.
The structure is a casual environment with brief presentations by local experts that challenge and inspire dialogue. Discussions focus on current knowledge about the mind and our understanding of how the mind works. Presentations are followed by a panel discussion, catered snacks, and networking.
BrainTalks is now part of the programming for the University of British Columbia’s Department of Psychiatry. The Department of Education, and the Department of Continuing Professional Development include BrainTalks at UBC as part of their goal to enhance public knowledge of psychiatry, enhance clinician knowledge in areas that may affect psychiatric practice, and disseminate recent research in brain science to the public.
SoapBox Science
Thanks to Alan Shapiro (founder of Science Slam Canada) and his Twitter feed for information about a new science event that may be coming to Vancouver. SoapBox Science, founded in the UK in 2011, puts on events that can be found worldwide (from the homepage),
Soapbox Science is a novel public outreach platform for promoting women scientists and the science they do. Our events transform public areas into an arena for public learning and scientific debate; they follow the format of London Hyde Park’s Speaker’s Corner, which is historically an arena for public debate. With Soapbox Science, we want to make sure that everyone has the opportunity to enjoy, learn from, heckle, question, probe, interact with and be inspired by some of our leading scientists. No middle man, no PowerPoint slide, no amphitheatre – just remarkable women in science who are there to amaze you with their latest discoveries, and to answer the science questions you have been burning to ask. Look out for bat simulators, fake breasts or giant pictures of volcanoes. Or simply hear them talk about what fascinates them, and why they think they have the most fantastic job in the world!
…
2020 is an exciting year for us. We are running 56 events around the world, making this the biggest year yet! Since 2011 we have featured over 1500 scientists and reached 150,000 members of the public! Soapbox Science was commended by the Prime Minister in 2015, and was awarded a Silver Medal from the Zoological Society of London (ZSL) in June 2016. Both Soapbox Science co-founders were also invited to provide oral evidence at a 2016 Parliamentary inquiry on science communication.
I believe 2020 is/was to have been the first year for a SoapBox Science event in Vancouver. There aren’t any notices of cancellation for the Vancouver event that I’ve been able to find. I expect there will be, although with a planned June 2020 date there’s still hope. In any case, you might find it interesting to view their ‘Apply to speak’ webpage (Note: I have rearranged the order of some of these paragraphs),
Are you a woman* who works in science and who is passionate about your research? Are you eager to talk to the general public about your work in a fun, informal setting? If so, then Soapbox Science needs YOU! We are looking for scientists in all areas of STEMM, from PhD students to Professors, and from entry-level researchers to entrepreneurs, to take part in this grassroots science outreach project.
*Soapbox Science uses an inclusive definition of ‘woman’ and welcomes applications from Non-binary and Genderqueer speakers.
The deadline for applications has now passed but you’ll find on their ‘Apply to speak’ webpage, a list of cities hosting 2020 SoapBox Science events,
Argentina: Tucumán- 12th September
Australia: Armidale- August Sydney- 15th August Queensland- August
Belgium: Brussels- 27th June
Brazil: Maceio- 22nd November Rio de Janeiro- 18th July Salvador- 5th June
Canada: Calgary- 2nd May Halifax- July Hamilton- Date TBC Ottawa- 19th September Québec- June Toronto- 27th September St John’s- 5th September Vancouver- June Waterloo- 13th June Winnipeg- May
Germany: Berlin- June Bonn- May Düsseldorf- 25th July Munich- 27th June
Ireland: Dublin- Date TBC Cork- July Galway- July
Nigeria: Lagos- August Lagos- 7th November
Malaysia: Kuala Lumpur- April
Portugal: Lisbon- 19th Sept
South Africa: Cape Town- September
Sweden: Uppsala- 16th May Gothenburg- 24th April- Closing date 31st January
Tanzania: Arusha- 8th August
UK: Aberdeen- 30th May Birmingham- Date TBC Brighton- 30th May Bristol- 4th July Cardiff- Date TBC Edinburgh- Date TBC Exeter- June Keswick- 26th May Leicester- 6th June Leeds- July London- 23rd May Milton Keynes- 27th June Newcastle- 13th June Nottingham- Date TBC Plymouth- 30th May Stoke-on-Trent, Date TBC Swansea- Date TBC York- 13th June
USA: Boulder- 26th April Denver- Date TBC Detroit- September Philadelphia- 18th April
If you’re interested in the SoapBox Science Vancouver event, there’s more on this webpage on the University of British Columbia website and/or this brochure for the Vancouver event.
This is strictly for folks who have media accreditation. First, the news about the summit and then some detail about how you might obtain accreditation should you be interested in going to Switzerland. Warning: The International Telecommunication Union, which is holding this summit, is a United Nations agency and you will note almost an entire paragraph of ‘alphabet soup’ when all the ‘sister’ agencies involved are listed.
Geneva, 21 March 2019 Artificial Intelligence (AI) has taken giant leaps forward in recent years, inspiring growing confidence in AI’s ability to assist in solving some of humanity’s greatest challenges. Leaders in AI and humanitarian action are convening on the neutral platform offered by the United Nations to work towards AI improving the quality and sustainability of life on our planet. The 2017 summit marked the beginning of global dialogue on the potential of AI to act as a force for good. The action-oriented 2018 summit gave rise to numerous ‘AI for Good’ projects, including an ‘AI for Health’ Focus Group, now led by ITU and the World Health Organization (WHO). The 2019 summit will continue to connect AI innovators with public and private-sector decision-makers, building collaboration to maximize the impact of ‘AI for Good’.
Organized by the International Telecommunication Union (ITU) – the United Nations specialized agency for information and communication technology (ICT) – in partnership with the XPRIZE Foundation, the Association for Computing Machinery (ACM) and close to 30 sister United Nations agencies, the 3rd annual AI for Good Global Summit in Geneva, 28-31 May, is the leading United Nations platform for inclusive dialogue on AI. The goal of the summit is to identify practical applications of AI to accelerate progress towards the United Nations Sustainable Development Goals.
Media are recommended to register in advance to receive key announcements in the run-up to the summit.
WHAT: The summit attracts a cross-section of AI experts from industry and academia, global business leaders, Heads of UN agencies, ICT ministers, non-governmental organizations, and civil society.
The summit is designed to generate ‘AI for Good’ projects able to be enacted in the near term, guided by the summit’s multi-stakeholder and inter-disciplinary audience. It also formulates supporting strategies to ensure trusted, safe and inclusive development of AI technologies and equitable access to their benefits.
The 2019 summit will highlight AI’s value in advancing education, healthcare and wellbeing, social and economic equality, space research, and smart and safe mobility. It will propose actions to assist high-potential AI solutions in achieving global scale. It will host debate around unintended consequences of AI as well as AI’s relationship with art and culture. A ‘learning day’ will offer potential AI adopters an audience with leading AI experts and educators.
A dynamic show floor will demonstrate innovations at the cutting edge of AI research and development, such as the IBM Watson live debater; the Fusion collaborative exoskeleton; RoboRace, the world’s first self-driving electric racing car; avatar prototypes, and the ElliQ social robot for the care of the elderly. Summit attendees can also look forward to AI-inspired performances from world-renowned musician Jojo Mayer and award-winning vocal and visual artist Reeps One.
WHEN: 28-31 May 2019
WHERE: International Conference Centre Geneva, 17 Rue de Varembé, Geneva, Switzerland
WHO: Over 100 speakers have been confirmed to date, including:
Jim Hagemann Snabe – Chairman, Siemens
Cédric Villani – AI advisor to the President of France, and Mathematics Fields Medal Winner
Jean-Philippe Courtois – President of Global Operations, Microsoft
Anousheh Ansari – CEO, XPRIZE Foundation, Space Ambassador
Yves Daccord – Director General, International Committee of the Red Cross
Yan Huang – Director AI Innovation, Baidu
Timnit Gebru – Head of AI Ethics, Google
Vladimir Kramnik – World Chess Champion
Vicki Hanson – CEO, ACM
Zoubin Ghahramani – Chief Scientist, Uber, and Professor of Engineering, University of Cambridge
Lucas di Grassi – Formula E World Racing Champion, CEO of Roborace
Confirmed speakers also include C-level and expert representatives of Bosch, Botnar Foundation, Byton, Cambridge Quantum Computing, the cities of Montreal and Pittsburgh, Darktrace, Deloitte, EPFL, European Space Agency, Factmata, Google, IBM, IEEE, IFIP, Intel, IPSoft, Iridescent, MasterCard, Mechanica.ai, Minecraft, NASA, Nethope, NVIDIA, Ocean Protocol, Open AI, Philips, PWC, Stanford University, University of Geneva, and WWF.
Please visit the summit programme for more information on the latest speakers, breakthrough sessions and panels.
The summit is organized in partnership with the following sister United Nations agencies: CTBTO, ICAO, ILO, IOM, UNAIDS, UNCTAD, UNDESA, UNDPA, UNEP, UNESCO, UNFPA, UNGP, UNHCR, UNICEF, UNICRI, UNIDIR, UNIDO, UNISDR, UNITAR, UNODA, UNODC, UNOOSA, UNOPS, UNU, WBG, WFP, WHO, and WIPO.
The 2019 summit is kindly supported by Platinum Sponsor and Strategic Partner, Microsoft; Gold Sponsors, ACM, the Kay Family Foundation, Mind.ai and the Autonomous Driver Alliance; Silver Sponsors, Deloitte and the Zero Abuse Project; and Bronze Sponsor, Live Tiles.
More information available at aiforgood.itu.int. Join the conversation on social media using the hashtag #AIforGood.
To gain media access, ITU must confirm your status as a bona fide member of the media. Therefore, please read ITU’s Media Accreditation Guidelines below so you are aware of the information you will be required to submit for ITU to confirm such status. Media accreditation is not granted to 1) non-editorial staff working for a publishing house (e.g. management, marketing, advertising executives, etc.); 2) researchers, academics, authors or editors of directories; 3) employees of information outlets of public, non-governmental or private entities that are not first and foremost media organizations; 4) members of professional broadcasting or media associations, 5) press or communication professionals accompanying member state delegations; and 6) citizen journalists under no apparent editorial board oversight. If you have questions about your eligibility, please email us at pressreg@itu.int.
Applications for accreditation are considered on a case-by-case basis and ITU reserves the right to request additional proof or documentation other than what is listed below. Media accreditation decisions rest with ITU and all decisions are final.
Accreditation eligibility & credentials
1. Journalists* should provide an official letter of assignment from the Editor-in-Chief (or the News Editor for radio/TV). One letter per crew/editorial team will suffice. Editors-in-Chief and Bureau Chiefs should submit a letter from their Director. Please email this to pressreg@itu.int along with the required supporting credentials, based on the type of media organization you work for:
Print and online publications should be available to the general public and published at least 6 times a year by an organization whose principal business activity is publishing and which generally carries paid advertising; please submit 2 copies or links to recent byline articles published within the last 4 months.
News wire services should provide news coverage to subscribers, including newspapers, periodicals and/or television networks; please submit 2 copies or links to recent byline articles or broadcasting material published within the last 4 months.
Broadcast media should provide news and information programmes to the general public. Independent film and video production companies can only be accredited if officially mandated by a broadcast station via a letter of assignment; please submit broadcasting material published within the last 4 months.
Freelance journalists and photographers must provide clear documentation that they are on assignment from a specific news organization or publication. Evidence that they regularly supply journalistic content to recognized media may be acceptable in the absence of an assignment letter and at the discretion of the ITU Corporate Communication Division. If possible, please submit a valid assignment letter from the news organization or publication.
2. Bloggers and community media may be granted accreditation if the content produced is deemed relevant to the industry, contains news commentary, is regularly updated and/or made publicly available. Corporate bloggers may register as normal participants (not media). Please see Guidelines for Bloggers and Community Media Accreditation below for more details:
Special guidelines for bloggers and community media accreditation
ITU is committed to working with independent and ‘new media’ reporters and columnists who reach their audiences via blogs, podcasts, video blogs, community or online radio, limited print formats which generally carry paid advertising and other online media. These are some of the guidelines we use to determine whether to accredit bloggers and community media representatives:
ITU reserves the right to request traffic data from a third party (Sitemeter, Technorati, Feedburner, iTunes or equivalent) when considering your application. While the decision to grant access is not based solely on traffic/subscriber data, we ask that applicants provide sufficient transparency into their operations to help us make a fair and timely decision. If your media outlet is new, you must have an established record of having written extensively on ICT issues and must present copies or links to two recently published videos, podcasts or articles with your byline.
Obtaining media accreditation for ITU events is an opportunity to meet and interact with key industry and political figures. While continued accreditation for ITU events is not directly contingent on producing coverage, owing to space limitations we may take this into consideration when processing future accreditation requests. Following any ITU event for which you are accredited, we therefore kindly request that you forward a link to your post/podcast/video blog to pressreg@itu.int.
Bloggers who are granted access to ITU events are expected to act professionally. Those who do not maintain the standards expected of professional media representatives run the risk of having their accreditation withdrawn.
UN-accredited media
Media already accredited and badged by the United Nations are automatically accredited and registered by ITU. In this case, you only need to send a copy of your UN badge to pressreg@itu.int to make sure you receive your event badge. Anyone joining an ITU event MUST have an event badge in order to access the premises. Please make sure you let us know in advance that you are planning to attend so your event badge is ready for printing and pick-up.
While my main interest is the group’s temporary art gallery, I am providing a brief explanatory introduction and a couple of previews for SIGGRAPH 2018.
Introduction
For anyone unfamiliar with the Special Interest Group on Computer GRAPHics and Interactive Techniques (SIGGRAPH) and its conferences, from the SIGGRAPH Wikipedia entry (Note: Links have been removed),
Some highlights of the conference are its Animation Theater and Electronic Theater presentations, where recently created CG films are played. There is a large exhibition floor, where several hundred companies set up elaborate booths and compete for attention and recruits. Most of the companies are in the engineering, graphics, motion picture, or video game industries. There are also many booths for schools which specialize in computer graphics or interactivity.
Dozens of research papers are presented each year, and SIGGRAPH is widely considered the most prestigious forum for the publication of computer graphics research.[1] The recent paper acceptance rate for SIGGRAPH has been less than 26%.[2] The submitted papers are peer-reviewed in a single-blind process.[3] There has been some criticism about the preference of SIGGRAPH paper reviewers for novel results rather than useful incremental progress.[4][5] …
This is the third SIGGRAPH Vancouver has hosted; the others were in 2011 and 2014. The theme for the 2018 iteration is ‘Generations’; here’s more about it from an Aug. 2, 2018 article by Terry Flores for Variety,
While its focus is firmly forward thinking, SIGGRAPH 2018, the computer graphics, animation, virtual reality, games, digital art, mixed reality, and emerging technologies conference, is also tipping its hat to the past thanks to its theme this year: Generations. The conference runs Aug. 12-16 in Vancouver, B.C.
“In the literal people sense, pioneers in the computer graphics industry are standing shoulder to shoulder with researchers, practitioners and the future of the industry — young people — mentoring them, dabbling across multiple disciplines to innovate, relate, and grow,” says SIGGRAPH 2018 conference chair Roy C. Anthony, VP of creative development and operations at software and technology firm Ventuz. “This is really what SIGGRAPH has always been about. Generations really seemed like a very appropriate way of looking back and remembering where we all came from and how far we’ve come.”
…
SIGGRAPH 2018 has a number of treats in store for attendees, including the debut of Disney’s first VR film, the short “Cycles”; production sessions on the making of “Blade Runner 2049,” “Game of Thrones,” “Incredibles 2” and “Avengers: Infinity War”; as well as sneak peeks of Disney’s upcoming “Ralph Breaks the Internet: Wreck-It Ralph 2” and Laika’s “Missing Link.”
That list of ‘treats’ in the last paragraph makes the conference seem more like an iteration of a ‘comic-con’ than a technology conference.
CHICAGO–In the burgeoning world of virtual reality (VR) technology, it remains a challenge to provide users with a realistic perception of infinite space and natural walking capabilities in the virtual environment. A team of computer scientists has introduced a new approach to address this problem by leveraging a natural human phenomenon: eye blinks.
All humans are functionally blind for about 10 percent of the time under normal circumstances due to eye blinks and saccades, a rapid movement of the eye between two points or objects. Eye blinks are a common and natural cause of so-called “change blindness,” which indicates the inability for humans to notice changes to visual scenes. Zeroing in on eye blinks and change blindness, the team has devised a novel computational system that effectively redirects the user in the virtual environment during these natural instances, all with undetectable camera movements to deliver orientation redirection.
“Previous RDW [redirected walking] techniques apply rotations continuously while the user is walking. But the amount of unnoticeable rotations is limited,” notes Eike Langbehn, lead author of the research and doctoral candidate at the University of Hamburg. “That’s why an orthogonal approach is needed–we add some additional rotations when the user is not focused on the visuals. When we learned that humans are functionally blind for some time due to blinks, we thought, ‘Why don’t we do the redirection during eye blinks?'”
Human eye blinks occur approximately 10 to 20 times per minute, about every 4 to 19 seconds. Leveraging this window of opportunity–where humans are unable to detect major motion changes while in a virtual environment–the researchers devised an approach to synchronize a computer graphics rendering system with this visual process, and introduce any useful motion changes in virtual scenes to enhance users’ overall VR experience.
The researchers’ experiments revealed that imperceptible camera rotations of 2 to 5 degrees and translations of 4 to 9 cm of the user’s viewpoint are possible during a blink without users even noticing. They tracked test participants’ eye blinks by an eye tracker in a VR head-mounted display. In a confirmatory study, the team validated that participants could not reliably detect in which of two eye blinks their viewpoint was manipulated while walking a VR curved path. The tests relied on unconscious natural eye blinking, but the researchers say redirection during blinking could be carried out consciously. Since users can consciously blink multiple times a day without much effort, eye blinks provide great potential to be used as an intentional trigger in their approach.
The team will present their work at SIGGRAPH 2018, held 12-16 August in Vancouver, British Columbia. The annual conference and exhibition showcases the world’s leading professionals, academics, and creative minds at the forefront of computer graphics and interactive techniques.
“RDW is a big challenge since current techniques still need too much space to enable unlimited walking in VR,” notes Langbehn. “Our work might contribute to a reduction of space since we found out that unnoticeable rotations of up to five degrees are possible during blinks. This means we can improve the performance of RDW by approximately 50 percent.”
The team’s results could be used in combination with other VR research, such as novel steering algorithms, improved path prediction, and rotations during saccades, to name a few. Down the road, such techniques could some day enable consumer VR users to virtually walk beyond their living room.
Langbehn collaborated on the work with Frank Steinicke of University of Hamburg, Markus Lappe of University of Muenster, Gregory F. Welch of University of Central Florida, and Gerd Bruder, also of University of Central Florida. For the full paper and video, visit the team’s project page.
About ACM, ACM SIGGRAPH, and SIGGRAPH 2018
ACM, the Association for Computing Machinery, is the world’s largest educational and scientific computing society, uniting educators, researchers, and professionals to inspire dialogue, share resources, and address the field’s challenges. ACM SIGGRAPH is a special interest group within ACM that serves as an interdisciplinary community for members in research, technology, and applications in computer graphics and interactive techniques. SIGGRAPH is the world’s leading annual interdisciplinary educational experience showcasing the latest in computer graphics and interactive techniques. SIGGRAPH 2018, marking the 45th annual conference hosted by ACM SIGGRAPH, will take place from 12-16 August at the Vancouver Convention Centre in Vancouver, B.C.
They have provided an image illustrating what they mean (I don’t find it especially informative),
Caption: The viewing behavior of a virtual reality user, including fixations (in green) and saccades (in red). A blink fully suppresses visual perception. Credit: Eike Langbehn
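To make the redirection idea more concrete, here is a minimal sketch of the basic mechanism as I understand it from the description above (not the authors’ implementation): a pending reorientation is applied only while the eye tracker reports a blink, and each blink’s contribution is capped at the imperceptibility thresholds reported in the study. The camera interface and function names are hypothetical.

```python
# Thresholds reported in the study: rotations of 2-5 degrees and
# translations of 4-9 cm can go unnoticed during a single blink.
MAX_ROTATION_DEG = 5.0
MAX_TRANSLATION_M = 0.09

def redirect_during_blink(camera, eyes_closed, pending_rotation_deg, pending_translation_m):
    """Apply part of the pending redirection while the user is blinking.

    camera: hypothetical object with rotate_yaw(degrees) and translate_forward(metres)
    eyes_closed: bool reported by an eye tracker in the head-mounted display
    Returns the redirection still outstanding after this frame.
    """
    if not eyes_closed:
        # Conventional redirected walking can only apply very small continuous
        # rotations here; the blink technique adds extra correction on top.
        return pending_rotation_deg, pending_translation_m

    rotation = max(-MAX_ROTATION_DEG, min(MAX_ROTATION_DEG, pending_rotation_deg))
    translation = max(-MAX_TRANSLATION_M, min(MAX_TRANSLATION_M, pending_translation_m))
    camera.rotate_yaw(rotation)
    camera.translate_forward(translation)
    return pending_rotation_deg - rotation, pending_translation_m - translation
```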
Walt Disney Animation Studios will debut its first ever virtual reality short film at SIGGRAPH 2018, and the hope is viewers will walk away feeling connected to the characters as equally as they will with the VR technology involved in making the film.
Cycles, an experimental film directed by Jeff Gipson, centers around the true meaning of creating a home and the life it holds inside its walls. The idea for the film is personal, inspired by Gipson’s childhood spending time with his grandparents and creating memories in their home, and later, having to move them to an assisted living residence.
“Every house has a story unique to the people, the characters who live there,” says Gipson. “We wanted to create a story in this single place and be able to have the viewer witness life happening around them. It is an emotionally driven film, expressing the real ups and downs, the happy and sad moments in life.”
For Cycles, Gipson also drew from his past life as an architect, having spent several years designing skate parks, and from his passion for action sports, including freestyle BMX. In Los Angeles, where Gipson lives, it is not unusual to find homes with an empty swimming pool reserved for skating or freestyle biking. Part of the pitch for Cycles came out of Gipson’s experience riding in these empty pools and being curious about the homes attached to them, the families who lived there, and the memories they made.
SIGGRAPH attendees will have the opportunity to experience Cycles at the Immersive Pavilion, a new space for this year’s conference. The Pavilion is devoted exclusively to virtual, augmented, and mixed reality and will contain: the VR Theater, a storytelling extravaganza that is part of the Computer Animation Festival; the Vrcade, a space for VR, AR, and MR games or experiences; and the well-known Village, for showcasing large-scale projects. SIGGRAPH 2018, held 12-16 August in Vancouver, British Columbia, is an annual gathering that showcases the world’s leading professionals, academics, and creative minds at the forefront of computer graphics and interactive techniques.
The production team completed Cycles in four months with about 50 collaborators as part of a professional development program at the studio. A key difference in VR filmmaking includes getting creative with how to translate a story to the VR “screen.” Pre-visualizing the narrative, for one, was a challenge. Rather than traditional storyboarding, Gipson and his team instead used a mix of Quill VR painting techniques and motion capture to “storyboard” Cycles, incorporating painters and artists to generate sculptures or 3D models of characters early on and draw scenes for the VR space. The creators also got innovative with the use of light and color saturation in scenes to help guide the user’s eyes during the film.
“What’s cool for VR is that we are really on the edge of trying to figure out what it is and how to tell stories in this new medium,” says Gipson. “In VR, you can look anywhere and really be transported to a different world, experience it from different angles, and see every detail. We want people watching to feel alive and feel emotion, and give them a true cinematic experience.”
This is Gipson’s VR directorial debut. He joined Walt Disney Animation Studios in 2013, serving as a lighting artist on Disney favorites like Frozen, Zootopia, and Moana. Of getting to direct the studio’s first VR short, he says, “VR is an amazing technology and a lot of times the technology is what is really celebrated. We hope more and more people begin to see the emotional weight of VR films, and with Cycles in particular, we hope they will feel the emotions we aimed to convey with our story.”
Apparently this is a still from the ‘short’,
Caption: Disney Animation Studios will present ‘Cycles’, its first virtual reality (VR) short, at ACM SIGGRAPH 2018. Credit: Disney Animation Studios
Google has unveiled a new virtual reality (VR) immersive experience based on a novel system that captures and renders high-quality, realistic images from the real world using light fields. Created by a team of leading researchers at Google, Welcome to Light Fields is the tech giant’s splash into the nascent arena of light fields VR experiences, an exciting corner of VR video technology gaining traction for its promise to deliver extremely high-quality imagery and experiences in the virtual world.
Google released Welcome to Light Fields earlier this year as a free app on Steam VR for HTC Vive, Oculus Rift, and Windows Mixed Reality headsets. The creators will demonstrate the VR experience at SIGGRAPH 2018, in the Immersive Pavilion, a new space for this year’s conference. The Pavilion is devoted exclusively to virtual, augmented, and mixed reality and will contain: the Vrcade, a space for VR, AR, and MR games or experiences; the VR Theater, a storytelling extravaganza that is part of the Computer Animation Festival; and the well-known Village, for showcasing large-scale projects. SIGGRAPH 2018, held 12-16 August in Vancouver, British Columbia, is an annual gathering that showcases the world’s leading professionals, academics, and creative minds at the forefront of computer graphics and interactive techniques.
Destinations in Welcome to Light Fields include NASA’s Space Shuttle Discovery, delivering to viewers an astronaut’s view inside the flight deck, which has never been open to the public; the pristine teak and mahogany interiors of the Gamble House, an architectural treasure in Pasadena, CA; and the glorious St. Stephen’s Church in Granada Hills, CA, home to a stunning wall of more than 14,000 pieces of glimmering stained glass.
“I love that light fields in VR can teleport you to exotic places in the real world, and truly make you believe you are there,” says Ryan Overbeck, software engineer at Google who co-led the project. “To me, this is magic.”
To bring this experience to life, Overbeck worked with a team that included Paul Debevec, senior staff engineer at Google, who managed the project and led the hardware piece with engineers Xueming Yu, Jay Busch, and Graham Fyffe. With Overbeck, Daniel Erickson and Daniel Evangelakos focused on the software end. The researchers designed a comprehensive system for capturing and rendering high-quality, spherical light field still images from footage captured in the real world. They developed two easy-to-use light field camera rigs, based on the GoPro Hero4 action sports camera, that efficiently capture thousands of images on the surface of a sphere. Those images were then passed through a cloud-based light-field-processing pipeline.
Among other things, explains Overbeck, “The processing pipeline uses computer vision to place the images in 3D and generate depth maps, and we use a modified version of our vp9 video codec to compress the light field data down to a manageable size.” To render a light field dataset, he notes, the team used a rendering algorithm that blends between the thousands of light field images in real-time.
The team relied on Google’s talented pool of engineers in computer vision, graphics, video compression, and machine learning to overcome the unique challenges posed in light fields technology. They also collaborated closely with the WebM team (who make the vp9 video codec) to develop the high-quality light field compression format incorporated into their system, and leaned heavily on the expertise of the Jump VR team to help pose the images and generate depth maps. (Jump is Google’s professional VR system for achieving 3D-360 video production at scale.)
Indeed, with Welcome to Light Fields, the Google team is demonstrating the potential and promise of light field VR technology, showcasing the technology’s ability to provide a truly immersive experience with a level of unmatched realism. Though light fields technology has been researched and explored in computer graphics for more than 30 years, practical systems for actually delivering high-quality light field experiences have not yet been possible.
Part of the team’s motivation behind creating this VR light field experience is to invigorate the nascent field.
“Welcome to Light Fields proves that it is now possible to make a compelling light field VR viewer that runs on consumer-grade hardware, and we hope that this knowledge will encourage others to get involved with building light field technology and media,” says Overbeck. “We understand that in order to eventually make compelling consumer products based on light fields, we need a thriving light field ecosystem. We need open light field codecs, we need artists creating beautiful light field imagery, and we need people using VR in order to engage with light fields.”
I don’t really understand why this image, which looks like something that belongs in advertising material, would be chosen to accompany a news release on a science-based distribution outlet,
Caption: A team of leading researchers at Google will unveil the new immersive virtual reality (VR) experience “Welcome to Lightfields” at ACM SIGGRAPH 2018. Credit: Image courtesy of Google/Overbeck
Advances in computer-generated imagery have brought vivid, realistic animations to life, but the sounds associated with what we see simulated on screen, such as two objects colliding, are often recordings. Now researchers at Stanford University have developed a system that automatically renders accurate sounds for a wide variety of animated phenomena.
“There’s been a Holy Grail in computing of being able to simulate reality for humans. We can animate scenes and render them visually with physics and computer graphics, but, as for sounds, they are usually made up,” said Doug James, professor of computer science at Stanford University. “Currently there exists no way to generate realistic synchronized sounds for complex animated content, such as splashing water or colliding objects, automatically. This fills that void.”
The researchers will present their work on this sound synthesis system as part of ACM SIGGRAPH 2018, the leading conference on computer graphics and interactive techniques. In addition to enlivening movies and virtual reality worlds, this system could also help engineering companies prototype how products would sound before being physically produced, and hopefully encourage designs that are quieter and less irritating, the researchers said.
“I’ve spent years trying to solve partial differential equations – which govern how sound propagates – by hand,” said Jui-Hsien Wang, a graduate student in James’ lab and in the Institute for Computational and Mathematical Engineering (ICME), and lead author of the paper. “This is actually a place where you don’t just solve the equation but you can actually hear it once you’ve done it. That’s really exciting to me and it’s fun.”
Predicting sound
Informed by geometry and physical motion, the system figures out the vibrations of each object and how, like a loudspeaker, those vibrations excite sound waves. It computes the pressure waves cast off by rapidly moving and vibrating surfaces but does not replicate room acoustics. So, although it does not recreate the echoes in a grand cathedral, it can resolve detailed sounds from scenarios like a crashing cymbal, an upside-down bowl spinning to a stop, a glass filling up with water or a virtual character talking into a megaphone.
Most sounds associated with animations rely on pre-recorded clips, which require vast manual effort to synchronize with the action on-screen. These clips are also restricted to noises that exist – they can’t predict anything new. Other systems that produce and predict sounds as accurate as those of James and his team work only in special cases, or assume the geometry doesn’t deform very much. They also require a long pre-computation phase for each separate object.
“Ours is essentially just a render button with minimal pre-processing that treats all objects together in one acoustic wave simulation,” said Ante Qu, a graduate student in James’ lab and co-author of the paper.
The simulated sound that results from this method is highly detailed. It not only takes into account the sound waves produced by each object in an animation, but also predicts how those waves bend, bounce or deaden based on their interactions with other objects and sound waves in the scene.
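For readers curious what “solving the wave equation” means in practice, here is a toy, one-dimensional sketch of the general idea: step a discretized acoustic wave equation forward in time while one boundary vibrates like a struck object, and record the pressure at a listener point as the output waveform. This is a deliberately simplified illustration, not the Stanford system, which solves the full three-dimensional problem around animated, deforming geometry.

```python
# Minimal 1D illustration of the idea behind wave-based sound synthesis:
# integrate the acoustic wave equation and record the pressure at a
# "microphone" sample point. A toy model, not the Stanford system.
import numpy as np

c, L, nx = 343.0, 3.43, 200        # speed of sound (m/s), domain length (m), grid points
dx = L / nx
dt = 0.5 * dx / c                  # CFL-stable time step
steps = 2000
mic = nx // 2                      # listener position (grid index)

p_prev = np.zeros(nx)              # pressure field at t - dt
p = np.zeros(nx)                   # pressure field at t
recording = []

for n in range(steps):
    lap = np.zeros(nx)
    lap[1:-1] = (p[2:] - 2 * p[1:-1] + p[:-2]) / dx**2   # spatial second derivative
    p_next = 2 * p - p_prev + (c * dt)**2 * lap          # leapfrog update
    # Drive one end like a vibrating surface (a short, decaying 440 Hz "clang").
    p_next[0] = np.sin(2 * np.pi * 440 * n * dt) * np.exp(-n * dt * 20)
    p_next[-1] = 0.0                                     # rigid far boundary
    recording.append(p_next[mic])
    p_prev, p = p, p_next

audio = np.array(recording)        # this waveform is the synthesized "sound"
print(audio.shape, audio.min(), audio.max())
```

Written out at an audio sample rate, the recorded pressure samples could actually be listened to; the point is simply that the sound falls out of the physics rather than being looked up in a library of clips.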
Challenges ahead
In its current form, the group’s process takes a while to create the finished product. But, now that they have proven this technique’s potential, they can focus on performance optimizations, such as implementing their method on parallel GPU hardware, that should make it drastically faster.
And, even in its current state, the results are worth the wait.
“The first water sounds we generated with the system were among the best ones we had simulated – and water is a huge challenge in computer-generated sound,” said James. “We thought we might get a little improvement, but it is dramatically better than previous approaches even right out of the box. It was really striking.”
Although the group’s work has faithfully rendered sounds of various objects spinning, falling and banging into each other, more complex objects and interactions – like the reverberating tones of a Stradivarius violin – remain difficult to model realistically. That, the group said, will have to wait for a future solution.
Timothy Langlois of Adobe Research is a co-author of this paper. This research was funded by the National Science Foundation and Adobe Research. James is also a professor, by courtesy, of music and a member of Stanford Bio-X.
Researchers Timothy Langlois, Doug L. James, Ante Qu and Jui-Hsien Wang have created a video featuring highlights of animations with sounds synthesized using the Stanford researchers’ new system.
The researchers have also provided this image,
By computing pressure waves cast off by rapidly moving and vibrating surfaces – such as a cymbal – a new sound synthesis system developed by Stanford researchers can automatically render realistic sound for computer animations. (Image credit: Timothy Langlois, Doug L. James, Ante Qu and Jui-Hsien Wang)
It does seem like we’re synthesizing the world around us, eh?
SIGGRAPH 2018, the world’s leading showcase of digital art created using computer graphics and interactive techniques, will present a special Art Gallery, entitled “Origins,” and historic Art Papers in Vancouver, B.C. The 45th SIGGRAPH conference will take place 12–16 August at the Vancouver Convention Centre. The programs will also honor the generations of creators that have come before through a special, 50th anniversary edition of the Leonardo journal. To register for the conference, visit S2018.SIGGRAPH.ORG.
The SIGGRAPH 2018 ART GALLERY is a curated exhibition, conceived as a dialogical space that enables the viewer to reflect on man’s diverse cultural values and rituals through contemporary creative practices. Building upon an exciting and eclectic selection of creative practices mediated through technologies that represent the sophistication of our times, the SIGGRAPH 2018 Art Gallery will embrace the narratives of the indigenous communities based near Vancouver and throughout Canada as a source of inspiration. The exhibition will feature contemporary media artworks, art pieces by indigenous communities, and other traces of technologically mediated ludic practices.
Andrés Burbano, SIGGRAPH 2018 Art Gallery chair and professor at Universidad de los Andes, said, “The Art Gallery aims to articulate myth and technology, science and art, the deep past and the computational present, and will coalesce around a theme of ‘Origins.’ Media and technological creative expressions will explore principles such as the origins of the cosmos, the origins of life, the origins of human presence, the origins of the occupation of territories in the Americas, and the origins of people living in the vast territories of the Arctic.”
He continued, “The venue [in Vancouver] hopes to rekindle the original spark that ignited the collaborative spirit of the SIGGRAPH community of engineers, scientists, and artists, who came together to create the very first conference in the early 1970s.”
Highlights from the 2018 Art Gallery include:
Transformation Mask (Canada) [Technology Based]
Shawn Hunt, independent; and Microsoft Garage: Andy Klein, Robert Butterworth, Jonathan Cobb, Jeremy Kersey, Stacey Mulcahy, Brendan O’Rourke, Brent Silk, and Julia Taylor-Hell, Microsoft Vancouver
TRANSFORMATION MASK is an interactive installation that features the Microsoft HoloLens. It utilizes electronics and mechanical engineering to express a physical and digital transformation. Participants are immersed in spatial sounds and holographic visuals.
Somnium (U.S.) [Science Based]
Marko Peljhan, Danny Bazo, and Karl Yerkes, University of California, Santa Barbara
Somnium is a cybernetic installation that provides visitors with the ability to sensorily, cognitively, and emotionally contemplate and experience exoplanetary discoveries, their macro and micro dimensions, and the potential for life in our Galaxy. Some might call it a “space telescope.”
Ernest Edmonds Retrospective – Art Systems 1968-2018 (United Kingdom) [History Based]
Ernest Edmonds, De Montfort University
Celebrating one of the pioneers of computer graphics-based art since the early 1970s, this Ernest Edmonds career retrospective will showcase snapshots of Edmonds’ work as it developed over the years. With one piece from each decade, the retrospective will also demonstrate how vital the Leonardo journal has been throughout the 50-year journey.
In addition to the works above, the Art Gallery will feature pieces from notable female artists Ozge Samanci, Ruth West, and Nicole L’Hullier. For more information about the Edmonds retrospective, read THIS POST ON THE ACM SIGGRAPH BLOG.
The SIGGRAPH 2018 ART PAPERS program is designed to feature research from artists, scientists, theorists, technologists, historians, and more in one of four categories: project description, theory/criticism, methods, or history. The chosen work was selected by an international jury of scholars, artists, and immersive technology developers.
To celebrate the 50th anniversary of LEONARDO (MIT Press), and 10 years of its annual SIGGRAPH issue, SIGGRAPH 2018 is pleased to announce a special anniversary edition of the journal, which will feature the 2018 art papers. For 50 years, Leonardo has been the definitive publication for artist-academics. To learn more about the relationship between SIGGRAPH and the journal, listen to THIS EPISODE OF THE SIGGRAPH SPOTLIGHT PODCAST.
“In order to encourage a wider range of topics, we introduced a new submission type, short papers. This enabled us to accept more content than in previous years. Additionally, for the first time, we will introduce sessions that integrate the Art Gallery artist talks with Art Papers talks, promoting richer connections between these two creative communities,” said Angus Forbes, SIGGRAPH 2018 Art Papers chair and professor at University of California, Santa Cruz.
Art Papers highlights include:
Alienating the Familiar with CGI: A Recipe for Making a Full CGI Art House Animated Feature [Long]
Alex Counsell and Paul Charisse, University of Portsmouth
This paper explores the process of making and funding an art house feature film using full CGI in a marketplace where this has never been attempted. It explores cutting-edge technology and production approaches, as well as routes to successful fundraising.
Augmented Fauna and Glass Mutations: A Dialogue Between Material and Technique in Glassblowing and 3D Printing [Long]
Tobias Klein, City University of Hong Kong
The two presented artworks, “Augmented Fauna” and “Glass Mutations,” were created during an artist residency at the PILCHUCK GLASS SCHOOL. They are examples of the qualities and methods established through a synthesis of digital workflows and traditional craft processes, and thus formulate the notion of digital craftsmanship.
Inhabitat: An Imaginary Ecosystem in a Children’s Science Museum [Short]
Graham Wakefield, York University, and Haru Hyunkyung Ji, OCAD University
“Inhabitat” is a mixed reality artwork in which participants become part of an imaginary ecology through three simultaneous perspectives of scale and agency; three distinct ways to see with other eyes. This imaginary world was exhibited at a children’s science museum for five months, using an interactive projection-augmented sculpture, a large screen and speaker array, and a virtual reality head-mounted display.
What’s the what?
My father used to say that and I always assumed it meant summarize the high points, if you need to, and get to the point—fast. In that spirit, I am both fascinated and mildly appalled. The virtual, mixed, and augmented reality technologies, as well as the others being featured at SIGGRAPH 2018, are wondrous in many ways, but it seems we are coming ever closer to a world where we no longer interact with nature or other humans directly. (see my August 10, 2018 posting about the ‘extinction of experience’ for research that encourages more direct interaction with nature) I realize that SIGGRAPH is intended as a primarily technical experience but I think a little more content questioning these technologies and their applications (social implications) might be in order. That’s often the artist’s role but I can’t see anything in the art gallery descriptions that hints at any sort of fundamental critique.
With all the talk about artificial intelligence (AI), a lot more attention seems to be paid to apocalyptic scenarios: loss of jobs, financial hardship, loss of personal agency and privacy, and more with all of these impacts being described as global. Still, there are some folks who are considering and working on ‘AI for good’.
If you’d asked me, the International Telecommunication Union (ITU) would not have been my first guess (my choice would have been the United Nations Educational, Scientific and Cultural Organization [UNESCO]) as an agency likely to host the 2018 AI for Good Global Summit. But it turns out the ITU is a UN (United Nations) agency and, according to its Wikipedia entry, an intergovernmental public-private partnership, which may explain the nature of the participants in the upcoming summit.
The news
First, there’s a May 4, 2018 ITU media advisory (received via email or you can find the full media advisory here) about the upcoming summit,
Artificial Intelligence (AI) is now widely identified as being able to address the greatest challenges facing humanity – supporting innovation in fields ranging from crisis management and healthcare to smart cities and communications networking.
The second annual ‘AI for Good Global Summit’ will take place 15-17 May [2018] in Geneva, and seeks to leverage AI to accelerate progress towards the United Nations’ Sustainable Development Goals and ultimately benefit humanity.
WHAT: Global event to advance ‘AI for Good’ with the participation of internationally recognized AI experts. The programme will include interactive high-level panels, while ‘AI Breakthrough Teams’ will propose AI strategies able to create impact in the near term, guided by an expert audience of mentors representing government, industry, academia and civil society – through interactive sessions. The summit will connect AI innovators with public and private-sector decision-makers, building collaboration to take promising strategies forward.
A special demo & exhibit track will feature innovative applications of AI designed to: protect women from sexual violence, avoid infant crib deaths, end child abuse, predict oral cancer, and improve mental health treatments for depression – as well as interactive robots including: Alice, a Dutch invention designed to support the aged; iCub, an open-source robot; and Sophia, the humanoid AI robot.
WHEN: 15-17 May 2018, beginning daily at 9 AM
WHERE: ITU Headquarters, 2 Rue de Varembé, Geneva, Switzerland (Please note: entrance to ITU is now limited for all visitors to the Montbrillant building entrance only on rue Varembé).
WHO: Confirmed participants to date include expert representatives from: Association for Computing Machinery, Bill and Melinda Gates Foundation, Cambridge University, Carnegie Mellon, Chan Zuckerberg Initiative, Consumer Trade Association, Facebook, Fraunhofer, Google, Harvard University, IBM Watson, IEEE, Intellectual Ventures, ITU, Microsoft, Massachusetts Institute of Technology (MIT), Partnership on AI, Planet Labs, Shenzhen Open Innovation Lab, University of California at Berkeley, University of Tokyo, XPRIZE Foundation, Yale University – and the participation of “Sophia” the humanoid robot and “iCub” the EU open source robotcub.
The interview
Frederic Werner, Senior Communications Officer at the International Telecommunication Union and one of the organizers of the AI for Good Global Summit 2018, kindly took the time to speak to me and provide a few more details about the upcoming event.
Werner noted that the 2018 event grew out of a much smaller 2017 ‘workshop’, the first of its kind on beneficial AI, and has ballooned this year to 91 countries (about 15 participants are expected from Canada), 32 UN agencies, and substantive representation from the private sector. Dr. Yoshua Bengio of the University of Montreal (Université de Montréal) was a featured speaker at the 2017 event.
“This year, we’re focused on action-oriented projects that will help us reach our Sustainable Development Goals (SDGs) by 2030. We’re looking at near-term practical AI applications,” says Werner. “We’re matchmaking problem-owners and solution-owners.”
Academics, industry professionals, government officials, and representatives from UN agencies are gathering to work on four tracks/themes:
AI and satellite imagery
AI and health
AI and smart cities & communities
Trust in AI
In advance of this meeting, the group launched an AI repository (an action item from the 2017 meeting) on April 25, 2018, inviting people to list their AI projects (from the ITU’s April 25, 2018 AI repository news announcement),
ITU has just launched an AI Repository where anyone working in the field of artificial intelligence (AI) can contribute key information about how to leverage AI to help solve humanity’s greatest challenges.
This is the only global repository that identifies AI-related projects, research initiatives, think-tanks and organizations that aim to accelerate progress on the 17 United Nations’ Sustainable Development Goals (SDGs).
To submit a project, just press ‘Submit’ on the AI Repository site and fill in the online questionnaire, providing all relevant details of your project. You will also be asked to map your project to the relevant World Summit on the Information Society (WSIS) action lines and the SDGs. Approved projects will be officially registered in the repository database.
Benefits of participation on the AI Repository include:
Your project details will become visible to the world on the website.
You will be connected with AI stakeholders, world-wide.
WSIS Prizes recognize individuals, governments, civil society, local, regional and international agencies, research institutions and private-sector companies for outstanding success in implementing development oriented strategies that leverage the power of AI and ICTs.
If you have any questions, please send an email to: ai@itu.int
“Your project won’t be visible immediately as we have to vet the submissions to weed out spam-type material and projects that are not in line with our goals,” says Werner. That said, there are already 29 projects in the repository. As you might expect, the UK, China, and US are in the repository but also represented are Egypt, Uganda, Belarus, Serbia, Peru, Italy, and other countries not commonly cited when discussing AI research.
Werner also pointed out in response to my surprise over the ITU’s role with regard to this AI initiative that the ITU is the only UN agency which has 192* member states (countries), 150 universities, and over 700 industry members as well as other member entities, which gives them tremendous breadth of reach. As well, the organization, founded originally in 1865 as the International Telegraph Convention, has extensive experience with global standardization in the information technology and telecommunications industries. (See more in their Wikipedia entry.)
The AI for Good series is the leading United Nations platform for dialogue on AI. The action-oriented 2018 summit will identify practical applications of AI and supporting strategies to improve the quality and sustainability of life on our planet. The summit will continue to formulate strategies to ensure trusted, safe and inclusive development of AI technologies and equitable access to their benefits.
While the 2017 summit sparked the first ever inclusive global dialogue on beneficial AI, the action-oriented 2018 summit will focus on impactful AI solutions able to yield long-term benefits and help achieve the Sustainable Development Goals. ‘Breakthrough teams’ will demonstrate the potential of AI to map poverty and aid with natural disasters using satellite imagery, how AI could assist the delivery of citizen-centric services in smart cities, and new opportunities for AI to help achieve Universal Health Coverage, and finally to help achieve transparency and explainability in AI algorithms.
Teams will propose impactful AI strategies able to be enacted in the near term, guided by an expert audience of mentors representing government, industry, academia and civil society. Strategies will be evaluated by the mentors according to their feasibility and scalability, potential to address truly global challenges, degree of supporting advocacy, and applicability to market failures beyond the scope of government and industry. The exercise will connect AI innovators with public and private-sector decision-makers, building collaboration to take promising strategies forward.
“As the UN specialized agency for information and communication technologies, ITU is well placed to guide AI innovation towards the achievement of the UN Sustainable Development Goals. We are providing a neutral platform for international dialogue aimed at building a common understanding of the capabilities of emerging AI technologies.” – Houlin Zhao, Secretary General of ITU
Should you be close to Geneva, it seems that registration is still open. Just go to the ITU’s AI for Good Global Summit 2018 webpage, scroll the page down to ‘Documentation’ and you will find a link to the invitation and a link to online registration. Participation is free but I expect that you are responsible for your travel and accommodation costs.
For anyone unable to attend in person, the summit will be livestreamed (webcast in real time) and you can watch the sessions by following the link below,
For those of us on the West Coast of Canada and other parts distant from Geneva, you will want to take the nine-hour time difference between Geneva (Switzerland) and here into account when viewing the proceedings. If you can’t manage the time difference, the sessions are being recorded and will be posted at a later date.
*’132 member states’ corrected to ‘192 member states’ on May 11, 2018 at 1500 hours PDT.