
A dress that lights up according to reactions on Twitter

I don’t usually have an opportunity to write about red carpet events, but the recent Met Gala, also known as the Costume Institute Gala and the Met Ball, which took place on the evening of May 2, 2016 in New York, featured a ‘cognitive’ dress. Here’s more from a May 2, 2016 article by Emma Spedding for The Telegraph (UK),

“Tech white tie” was the dress code for last night’s Met Gala, inspired by the theme of this year’s Met fashion exhibition, ‘Manus x Machina: Fashion in the Age of Technology’. While many of the A-list attendees interpreted this to mean ‘silver sequins’, several rose to the challenge with beautiful, future-gazing gowns which give a glimpse of how our clothes might behave in the future.

Supermodel Karolina Kurkova wore a ‘cognitive’ Marchesa gown that was created in collaboration with technology company IBM. The two companies came together following a survey conducted by IBM which found that Marchesa was one of the favourite designers of its employees. The dress is created using a conductive fabric chosen from 40,000 options and embedded with 150 LED lights which change colour in reaction to the sentiments of Kurkova’s Twitter followers.

A May 2, 2016 article by Rose Pastore for Fast Company provides a little more technical detail and some insight into why Marchesa partnered with IBM,

At the Met Gala in Manhattan tonight [May 2, 2016], one model will be wearing a “cognitive dress”: A gown, designed by fashion house Marchesa, that will shift in color based on input from IBM’s Watson supercomputer. The dress features gauzy white roses, each embedded with an LED that will display different colors depending on the general sentiment of tweets about the Met Gala. The algorithm powering the dress relies on Watson Color Theory, which links emotions to colors, and on the Watson Tone Analyzer, a service that can detect emotion in text.

In addition to the color-changing cognitive dress, Marchesa designers are using Watson to get new color palette ideas. The designers choose from a list of emotions and concepts—things like romance, excitement, and power—and Watson recommends a palette of colors it associates with those sentiments.
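To make the pipeline a little more concrete, here’s a rough sketch of the kind of logic involved: score incoming tweets for emotion, pick the dominant emotion, and map it to an LED colour. Everything here (the `analyze_tone` stub, the emotion list, the colour table) is my own invention for illustration; IBM’s actual Watson Tone Analyzer and colour-theory services are far more sophisticated.

```python
# Toy sketch: map the dominant emotion in recent tweets to an LED colour.
# The emotion labels and colour table are illustrative guesses, not IBM's
# actual Watson Color Theory mappings.

from collections import Counter

EMOTION_COLORS = {          # hypothetical emotion -> RGB mapping
    "joy":        (255, 200,   0),
    "passion":    (255,   0,  60),
    "excitement": (255,  80,   0),
    "curiosity":  (  0, 120, 255),
}

def analyze_tone(tweet: str) -> str:
    """Stand-in for a tone-analysis service (e.g. Watson Tone Analyzer).
    Returns a dominant emotion label for one tweet."""
    keywords = {"love": "passion", "wow": "excitement",
                "beautiful": "joy", "how": "curiosity"}
    for word, emotion in keywords.items():
        if word in tweet.lower():
            return emotion
    return "joy"  # default when nothing matches

def dress_color(tweets: list[str]) -> tuple[int, int, int]:
    """Pick the dress colour from the crowd's overall mood."""
    counts = Counter(analyze_tone(t) for t in tweets)
    dominant, _ = counts.most_common(1)[0]
    return EMOTION_COLORS[dominant]

print(dress_color(["Wow, that gown!", "I love the Met Gala"]))
# -> (255, 80, 0): 'excitement' and 'passion' tie; Counter keeps first seen
```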

An April 29, 2016 posting by Ann Rubin for IBM’s Think blog discusses the history of technology/art partnerships and provides more technical detail (yes!) about this one,

Throughout history, we’ve seen traces of technology enabling humans to create – from Da Vinci’s use of the camera obscura to Caravaggio’s work with mirrors and lenses. Today, cognitive systems like Watson are giving artists, designers and creative minds the tools to make sense of the world in ground-breaking ways, opening up new avenues for humans to approach creative thinking.

The dress’ cognitive creation relies on a mix of Watson APIs, cognitive tools from IBM Research, solutions from Watson developer partner Inno360 and the creative vision from the Marchesa design team. In advance of it making its exciting debut on the red carpet, we’d like to take you on the journey of how man and machine collaborated to create this special dress.

Rooted in the belief that color and images can indicate moods and send messages, Marchesa first selected five key human emotions – joy, passion, excitement, encouragement and curiosity – that they wanted the dress to convey. IBM Research then fed this data into the cognitive color design tool, a groundbreaking project out of IBM Research-Yorktown that understands the psychological effects of colors, the interrelationships between emotions, and image aesthetics.

This process also involved feeding Watson hundreds of images associated with Marchesa dresses in order to understand and learn the brand’s color palette. Ultimately, Watson was able to suggest color palettes that were in line with Marchesa’s brand and the identified emotions, which will come to life on the dress during the Met Gala.

Once the colors were finalized, Marchesa turned to IBM partner Inno360 to source a fabric for their creation. Using Inno360’s R&D platform – powered by a combination of seven Watson services – the team searched more than 40,000 sources for fabric information, narrowing down to 150 sources of the most useful options to consider for the dress.

From this selection, Inno360 worked in partnership with IBM Research-Almaden to identify printed and woven textiles that would respond well to the LED technology needed to execute the final part of the collaboration. Inno360 was then able to deliver 35 unique fabric recommendations based on a variety of criteria important to Marchesa, like weight, luminosity, and flexibility. From there, Marchesa weighed the benefits of different material compositions, weights and qualities to select the final fabric that suited the criteria for their dress and remained true to their brand.
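The narrowing described above, from tens of thousands of sources down to a short list judged against Marchesa’s criteria, is at heart a weighted-ranking problem. Here’s a much-simplified sketch of that idea; the fabric names, attribute scores and weights are mine, not Inno360’s.

```python
# Simplified sketch of criteria-weighted fabric ranking.
# Attributes and weights are invented for illustration; Inno360's
# Watson-powered platform is far more involved.

fabrics = [
    {"name": "organza A",    "weight": 0.9, "luminosity": 0.7, "flexibility": 0.8},
    {"name": "silk blend B", "weight": 0.6, "luminosity": 0.9, "flexibility": 0.9},
    {"name": "jacquard C",   "weight": 0.3, "luminosity": 0.5, "flexibility": 0.4},
]

# Higher = more important to the designer (hypothetical values).
criteria_weights = {"weight": 0.2, "luminosity": 0.5, "flexibility": 0.3}

def score(fabric: dict) -> float:
    """Weighted sum of how well a fabric meets each criterion."""
    return sum(fabric[c] * w for c, w in criteria_weights.items())

shortlist = sorted(fabrics, key=score, reverse=True)[:2]
for f in shortlist:
    print(f["name"], round(score(f), 2))
# silk blend B 0.84 / organza A 0.77
```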

Here’s what the dress looks like,

Courtesy of Marchesa Facebook page (https://www.facebook.com/MarchesaFashion/)

Watson is an artificial intelligence program, which I have written about a few times. I think this Feb. 28, 2011 posting (scroll down about 50% of the way), which mentions Watson, product placement, Jeopardy (TV quiz show), and medical diagnoses, seems the most à propos given IBM’s latest product placement at the Met Gala.

Not the only ‘tech’ dress

There was at least one other ‘tech’ dress at the 2016 Met Gala, this one designed by Zac Posen and worn by Claire Danes. It did not receive a stellar review in a May 3, 2016 posting by Elaine Lui on Laineygossip.com,

People are losing their goddamn minds over this dress, by Zac Posen. Because it lights up.

It’s bullsh-t.

This is a BULLSH-T DRESS.

It’s Cinderella with a lamp shoved underneath her skirt.

Here’s a video of Danes and her dress at the Met Gala,

A Sept. 10, 2015 news item in People magazine indicates that a different version of Posen’s ‘tech’ dress was a collaboration with Google (Note: Links have been removed),

Designer Zac Posen lit up his 2015 New York Fashion Week kickoff show on Tuesday by debuting a gorgeous and tech-savvy coded LED dress that blinked in different, dazzling pre-programmed patterns down the runway.

In coordination with Google’s non-profit organization, Made with Code, which inspires girls to pursue careers in tech coding, Posen teamed up with 30 girls (all between the ages of 13 and 18), who attended the show, to introduce the flashy dress — which was designed by Posen and coded by the young women.

“This is the future of the industry: mixing craft, fashion and technology,” the 34-year-old designer told PEOPLE. “There’s a discrepancy in the coding field, hardly any women are at the forefront, and that’s a real shame. If we can entice young women through the allure of fashion, to get them learning this language, why not?”

…

Through a microcontroller, the gown displays coded patterns in 500 LED lights that are set to match the blues and yellows of Posen’s new collection. The circuit was designed and physically built into Posen’s dress fabric by 22-year-old up-and-coming fashion designer and computer science enthusiast, Maddy Maxey, who tells PEOPLE she was nervous watching Rocha [model Coco Rocha] make her way down the catwalk.

“It’s exactly as if she was carrying a microwave down the runway,” Maxey said. “It’s an entire circuit on a textile, so if one connection had come loose, the dress wouldn’t have worked. But, it did! And it was so deeply rewarding.”
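For readers curious what “coded patterns in 500 LED lights” might look like at the firmware level, here’s a minimal CircuitPython-style sketch for an addressable LED strip. The pin choice (`board.D6`), brightness and the chase pattern are assumptions for illustration; the actual Made with Code dress ran the girls’ own programs on its own hardware.

```python
# Minimal CircuitPython-style sketch: a pre-programmed pattern on an
# addressable LED strip. Pin, brightness and pattern are illustrative,
# not the actual Made with Code design.

import time
import board       # CircuitPython board pin definitions
import neopixel    # driver for WS2812-style addressable LEDs

NUM_LEDS = 500
pixels = neopixel.NeoPixel(board.D6, NUM_LEDS, brightness=0.3,
                           auto_write=False)

BLUE = (0, 60, 255)      # roughly the collection's blues and yellows
YELLOW = (255, 200, 0)

step = 0
while True:
    # Alternate blue and yellow, shifting one LED per frame ("chase").
    for i in range(NUM_LEDS):
        pixels[i] = BLUE if (i + step) % 2 == 0 else YELLOW
    pixels.show()
    step += 1
    time.sleep(0.1)
```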

Other ‘tech’ dresses

Back in 2009 I attended that year’s International Symposium on Electronic Arts and heard Clive van Heerden of Royal Philips Electronics talk about a number of innovative concepts, including a ‘mood’ dress that would reveal the wearer’s emotions to whoever should glance their way. It was not a popular concept, especially in Japan, where it was first tested.

The symposium also featured Moritz Waldemeyer, who worked with fashion designer Hussein Chalayan on LED dresses and dresses that changed shape as the models went down the runway.

In 2010 there was a flurry of media interest in mood-changing ‘smart’ clothes designed by researchers at Concordia University (Barbara Layne, Canada) and Goldsmiths College (Janis Jefferies, UK). Here’s more from a June 4, 2010 BBC news online item,

The clothes are connected to a database that analyses the data to work out a person’s emotional state.

Media, including songs, words and images, are then piped to the display and speakers in the clothes to calm a wearer or offer support.

Created as part of an artistic project called Wearable Absence, the clothes are made from textiles woven with different sorts of wireless sensors. These can track a wide variety of tell-tale biological markers including temperature, heart rate, breathing and galvanic skin response.
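The BBC piece doesn’t describe the analysis itself, but the general shape of the system, streaming biosensor readings to a database, inferring a state, and sending media back to the garment, can be sketched with a toy rule-based classifier. The thresholds and labels below are entirely hypothetical, not the Wearable Absence project’s method.

```python
# Toy rule-based sketch of inferring a wearer's state from biosensor
# readings (the markers named in the BBC piece). Thresholds and media
# choices are hypothetical.

from dataclasses import dataclass

@dataclass
class Reading:
    heart_rate: float      # beats per minute
    breathing: float       # breaths per minute
    skin_response: float   # galvanic skin response, microsiemens
    temperature: float     # degrees Celsius

def infer_state(r: Reading) -> str:
    """Very rough guess at emotional state from one reading."""
    if r.heart_rate > 100 and r.skin_response > 10:
        return "stressed"
    if r.breathing < 10 and r.heart_rate < 70:
        return "calm"
    return "neutral"

def choose_media(state: str) -> str:
    """Pick media to pipe back to the garment's display/speakers."""
    return {"stressed": "play soothing playlist",
            "calm": "show ambient imagery",
            "neutral": "no change"}[state]

r = Reading(heart_rate=110, breathing=18, skin_response=12, temperature=36.9)
print(infer_state(r), "->", choose_media(infer_state(r)))
# stressed -> play soothing playlist
```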

Final comments

I don’t have anything grand to say. It is interesting to see the progression of ‘tech’ dresses from avant-garde designers and academics to haute couture.

Call for papers: conference on sound art curation

It’s not exactly data sonification (my Feb. 7, 2014 posting about sound as a way to represent research data) but there’s a call for papers (deadline March 31, 2014) for a conference focused on curating sound art. Lanfranco Aceti, an academic, an artist and a curator whom I met some years ago at a conference, sent me a March 20, 2014 announcement,

OCR (Operational and Curatorial Research in Art, Design, Science and Technology) is launching a series of international conferences with international partners.

Sound Art Curating is the first conference to take place in London, May 15-17, 2014 at Goldsmiths and at the Courtauld Institute of Art [both located in London, England].

The call for papers will close March 31, 2014 and it can be accessed at this link:
http://ocradst.org/blog/2014/01/25/histories-theories-and-practices-of-sound-art/

The conference website is available at this link: http://ocradst.org/soundartcurating/

I did get more information about the OCR from their About page,

Operational and Curatorial Research in Contemporary Art, Design, Science and Technology (OCR) is a research center that focuses on research in the fine arts. Its projects are characterized by elements of interdisciplinarity and transdisciplinarity. OCR engages with public and private institutions worldwide in order to foster innovation and best practices through collaborations and synergies.

OCR has two international outlets: the Media Exhibition Platform (MEP), a platform for peer reviewed exhibitions, and Contemporary Art and Culture (CAC), a peer-reviewed publishing platform for academic texts, artists’ books and catalogs.

Lanfranco Aceti is the founder and director of OCR, MEP and CAC, and has worked in the field for over twenty years.

Here’s more about what the organizers are looking for from the Call for Papers webpage,

Traditionally, the curator has been affiliated to the modern museum as the persona who manages an archive, and arranges and communicates knowledge to an audience, according to fields of expertise (art, archaeology, cultural or natural history etc.). However, in the later part of the 20th century the role of the curator changes – first on the art scene and later in other more traditional institutions – into a more free-floating, organizational and ‘constructive’ activity that allows the curator to create and design new wider relations, interpretations of knowledge, modalities of communication and systems of dissemination to the wider public.

This shift parallels a changing role of the artist, who from producer becomes manager of their own archives, structures for displays, arrangements and recombinatory experiences that design interactive or analog journeys through sound artworks and soundscapes. Museums and galleries, following the impact of sound artworks in public spaces and media-based festivals, become more receptive to aesthetic practices that deny the ‘direct visuality’ of the image and bypass, albeit partially, the need for material and tangible objects. Sound art and its related aesthetic practices re-design ways of seeing, imaging and recalling the visual in a context that is not sensory deprived but sensory alternative.

This is a call for studies into the histories, theories and practices of sound art production and sound art curating – where the creation is to be considered not solely that of a single material but of the entire sound art experience and performative elements.

We solicit and encourage submissions from practitioners and theoreticians on sound art and curating that explore and are linked to issues related to the following areas of interest:

  • Curating Interfaces for Sound + Archives
  • Methodologies of Sound Art Curating
  • Histories of Sound Art Curating
  • Theories of Sound Art Curating
  • Practices and Aesthetics of Sound Art
  • Sound in Performance
  • Sound in Relation to Visuals

Chairs: Lanfranco Aceti, Janis Jefferies, Morten Søndergaard and Julian Stallabrass

Conference Organizers: James Bulley, Jonathan Munro, Irene Noy and Ozden Sahin

The event is supported by LARM [Danish interdisciplinary radiophonic project; Note: website is mixed Danish and English language], Kasa Gallery, Goldsmiths, the Courtauld Institute of Art and Sabanci University.

With the participation and support of the Sonics research special interest group at Goldsmiths, chaired by Atau Tanaka and Julian Henriques.

The event is part of the Graduate Festival at Goldsmiths and the Graduate research projects at the Courtauld Institute of Art.

250-word abstract submissions. Please send your submissions to: info@ocradst.org

Deadline: March 31, 2014.

Good luck!