Tag Archives: Cliff Kuang

A customized cruise experience with wearable technology (and decreased personal agency?)

The days when you went cruising to ‘get away from it all’ seem to have passed (if they ever really existed) with the introduction of wearable technology that registers your every preference and makes life easier, according to Cliff Kuang’s Oct. 19, 2017 article for Fast Company,

This month [October 2017], the 141,000-ton Regal Princess will push out to sea after a nine-figure revamp of mind-boggling scale. Passengers won’t be greeted by new restaurants, swimming pools, or onboard activities, but will instead step into a future augured by the likes of Netflix and Uber, where nearly everything is on demand and personally tailored. An ambitious new customization platform has been woven into the ship’s 19 passenger decks: some 7,000 onboard sensors and 4,000 “guest portals” (door-access panels and touch-screen TVs), all of them connected by 75 miles of internal cabling. As the Carnival-owned ship cruises to Nassau, Bahamas, and Grand Turk, its 3,500 passengers will have the option of carrying a quarter-size device, called the Ocean Medallion, which can be slipped into a pocket or worn on the wrist and is synced with a companion app.

The platform will provide a new level of service for passengers; the onboard sensors record their tastes and respond to their movements, and the app guides them around the ship and toward activities aligned with their preferences. Carnival plans to roll out the platform to another seven ships by January 2019. Eventually, the Ocean Medallion could be opening doors, ordering drinks, and scheduling activities for passengers on all 102 of Carnival’s vessels across 10 cruise lines, from the mass-market Princess ships to the legendary ocean liners of Cunard.

Kuang goes on to explain the reasoning behind this innovation,

The Ocean Medallion is Carnival’s attempt to address a problem that’s become increasingly vexing to the $35.5 billion cruise industry. Driven by economics, ships have exploded in size: In 1996, Carnival Destiny was the world’s largest cruise ship, carrying 2,600 passengers. Today, Royal Caribbean’s MS Harmony of the Seas carries up to 6,780 passengers and 2,300 crew. Larger ships expend less fuel per passenger; the money saved can then go to adding more amenities—which, in turn, are geared to attracting as many types of people as possible. Today on a typical ship you can do practically anything—from attending violin concertos to bungee jumping. And that’s just onboard. Most of a cruise is spent in port, where each day there are dozens of experiences available. This avalanche of choice can bury a passenger. It has also made personalized service harder to deliver. …

Kuang also wrote this brief description of how the technology works from the passenger’s perspective in an Oct. 19, 2017 item for Fast Company,

1. Pre-trip

On the web or on the app, you can book experiences, log your tastes and interests, and line up your days. That data powers the recommendations you’ll see. The Ocean Medallion arrives by mail and becomes the key to ship access.

2. Stateroom

When you draw near, your cabin-room door unlocks without swiping. The room’s unique 43-inch TV, which doubles as a touch screen, offers a range of Carnival’s bespoke travel shows. Whatever you watch is fed into your excursion suggestions.

3. Food

When you order something, sensors detect where you are, allowing your server to find you. Your allergies and preferences are also tracked, and shape the choices you’re offered. In all, the back-end data has 45,000 allergens tagged and manages 250,000 drink combinations.

4. Activities

The right algorithms can go beyond suggesting wines based on previous orders. Carnival is creating a massive semantic database, so if you like pricey reds, you’re more apt to be guided to a violin concerto than a limbo competition. Your onboard choices—the casino, the gym, the pool—inform your excursion recommendations.

In his Oct. 19, 2017 article, Kuang notes that the cruise line is putting a lot of effort into retraining its staff and emphasizing the ‘soft’ skills that aren’t going to be found in this iteration of the technology. No mention is made of whether there will be reductions in the number of staff members on this cruise ship, nor of the possibility that ‘soft’ skills may in the future be incorporated into this technological marvel.
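The preference-to-activity matching described in step 4 above (pricey reds nudging you toward a violin concerto rather than a limbo competition) can be sketched as simple tag overlap. To be clear, everything in this sketch — the tags, the activities, the scoring — is a hypothetical illustration, not Carnival’s actual system:

```python
# Hypothetical tags linking observed choices to activity traits.
PREFERENCE_TAGS = {
    "pricey reds": {"refined", "quiet", "adult"},
    "casino": {"lively", "nightlife", "adult"},
    "pool": {"outdoor", "family", "active"},
}

ACTIVITY_TAGS = {
    "violin concerto": {"refined", "quiet", "adult"},
    "limbo competition": {"lively", "outdoor", "family"},
    "wine tasting": {"refined", "adult"},
}

def recommend(observed_choices, top_n=2):
    """Score each activity by how many tags it shares with the
    passenger's observed choices, highest overlap first."""
    profile = set()
    for choice in observed_choices:
        profile |= PREFERENCE_TAGS.get(choice, set())
    scores = {
        activity: len(tags & profile)
        for activity, tags in ACTIVITY_TAGS.items()
    }
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend(["pricey reds"]))  # the concerto outranks the limbo contest
```

A real system would use a far richer semantic database and weighting, but the underlying idea — map behaviour to traits, then rank activities by trait overlap — is the same.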

Personalization/customization is increasingly everywhere

How do you feel about customized news feeds? As it turns out, this is not a rhetorical question as Adrienne LaFrance notes in her Oct. 19, 2017 article for The Atlantic (Note: Links have been removed),

Today, a Google search for news runs through the same algorithmic filtration system as any other Google search: A person’s individual search history, geographic location, and other demographic information affects what Google shows you. Exactly how your search results differ from any other person’s is a mystery, however. Not even the computer scientists who developed the algorithm could precisely reverse engineer it, given the fact that the same result can be achieved through numerous paths, and that ranking factors—deciding which results show up first—are constantly changing, as are the algorithms themselves.

We now get our news in real time, on demand, tailored to our interests, across multiple platforms, without knowing just how much is actually personalized. It was technology companies like Google and Facebook, not traditional newsrooms, that made it so. But news organizations are increasingly betting that offering personalized content can help them draw audiences to their sites—and keep them coming back.

Personalization extends beyond how and where news organizations meet their readers. Already, smartphone users can subscribe to push notifications for the specific coverage areas that interest them. On Facebook, users can decide—to some extent—which organizations’ stories they would like to appear in their news feeds. At the same time, devices and platforms that use machine learning to get to know their users will increasingly play a role in shaping ultra-personalized news products. Meanwhile, voice-activated artificially intelligent devices, such as Google Home and Amazon Echo, are poised to redefine the relationship between news consumers and the news [emphasis mine].

While news personalization can help people manage information overload by making individuals’ news diets unique, it also threatens to incite filter bubbles and, in turn, bias [emphasis mine]. This “creates a bit of an echo chamber,” says Judith Donath, author of The Social Machine: Designs for Living Online and a researcher affiliated with Harvard University’s Berkman Klein Center for Internet and Society. “You get news that is designed to be palatable to you. It feeds into people’s appetite of expecting the news to be entertaining … [and] the desire to have news that’s reinforcing your beliefs, as opposed to teaching you about what’s happening in the world and helping you predict the future better.”

Still, algorithms have a place in responsible journalism. “An algorithm actually is the modern editorial tool,” says Tamar Charney, the managing editor of NPR One, the organization’s customizable mobile-listening app. A handcrafted hub for audio content from both local and national programs as well as podcasts from sources other than NPR, NPR One employs an algorithm to help populate users’ streams with content that is likely to interest them. But Charney assures there’s still a human hand involved: “The whole editorial vision of NPR One was to take the best of what humans do and take the best of what algorithms do and marry them together.” [emphasis mine]

The skimming and diving Charney describes sounds almost exactly like how Apple and Google approach their distributed-content platforms. With Apple News, users can decide which outlets and topics they are most interested in seeing, with Siri offering suggestions as the algorithm gets better at understanding your preferences. Siri now has help from Safari. The personal assistant can now detect browser history and suggest news items based on what someone’s been looking at—for example, if someone is searching Safari for Reykjavík-related travel information, they will then see Iceland-related news on Apple News. But the For You view of Apple News isn’t 100 percent customizable, as it still spotlights top stories of the day, and trending stories that are popular with other users, alongside those curated just for you.

Similarly, with Google’s latest update to Google News, readers can scan fixed headlines, customize sidebars on the page to their core interests and location—and, of course, search. The latest redesign of Google News makes it look newsier than ever, and adds to many of the personalization features Google first introduced in 2010. There’s also a place where you can preprogram your own interests into the algorithm.

Google says this isn’t an attempt to supplant news organizations, nor is it inspired by them. The design is rather an embodiment of Google’s original ethos, the product manager for Google News Anand Paka says: “Just due to the deluge of information, users do want ways to control information overload. In other words, why should I read the news that I don’t care about?” [emphasis mine]

Meanwhile, in May [2017?], Google briefly tested a personalized search filter that would dip into its trove of data about users with personal Google and Gmail accounts and include results exclusively from their emails, photos, calendar items, and other personal data related to their query. [emphasis mine] The “personal” tab was supposedly “just an experiment,” a Google spokesperson said, and the option was temporarily removed, but seems to have rolled back out for many users as of August [2017?].

Now, Google, in seeking to settle a class-action lawsuit alleging that scanning emails to offer targeted ads amounts to illegal wiretapping, is promising that for the next three years it won’t use the content of its users’ emails to serve up targeted ads in Gmail. The move, which will go into effect at an unspecified date, doesn’t mean users won’t see ads, however. Google will continue to collect data from users’ search histories, YouTube, and Chrome browsing habits, and other activity.

The fear that personalization will encourage filter bubbles by narrowing the selection of stories is a valid one, especially considering that the average internet user or news consumer might not even be aware of such efforts. Elia Powers, an assistant professor of journalism and news media at Towson University in Maryland, studied the awareness of news personalization among students after he noticed those in his own classes didn’t seem to realize the extent to which Facebook and Google customized users’ results. “My sense is that they didn’t really understand … the role that people that were curating the algorithms [had], how influential that was. And they also didn’t understand that they could play a pretty active role on Facebook in telling Facebook what kinds of news they want them to show and how to prioritize [content] on Google,” he says.

The results of Powers’s study, which was published in Digital Journalism in February [2017], showed that the majority of students had no idea that algorithms were filtering the news content they saw on Facebook and Google. When asked if Facebook shows every news item, posted by organizations or people, in a users’ newsfeed, only 24 percent of those surveyed were aware that Facebook prioritizes certain posts and hides others. Similarly, only a quarter of respondents said Google search results would be different for two different people entering the same search terms at the same time. [emphasis mine; Note: Respondents in this study were students.]

This, of course, has implications beyond the classroom, says Powers: “People as news consumers need to be aware of what decisions are being made [for them], before they even open their news sites, by algorithms and the people behind them, and also be able to understand how they can counter the effects or maybe even turn off personalization or make tweaks to their feeds or their news sites so they take a more active role in actually seeing what they want to see in their feeds.”

On Google and Facebook, the algorithm that determines what you see is invisible. With voice-activated assistants, the algorithm suddenly has a persona. “We are being trained to have a relationship with the AI,” says Amy Webb, founder of the Future Today Institute and an adjunct professor at New York University Stern School of Business. “This is so much more catastrophically horrible for news organizations than the internet. At least with the internet, I have options. The voice ecosystem is not built that way. It’s being built so I just get the information I need in a pleasing way.”

LaFrance’s article is thoughtful and well worth reading in its entirety. Now, onto some commentary.
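The ‘echo chamber’ Donath describes in the excerpt above is, at bottom, a feedback loop: each click on a topic raises that topic’s weight, so the feed narrows over time. A toy model makes the dynamic visible — the topics and the reinforcement factor here are invented for illustration, not drawn from any real platform:

```python
def pick_story(weights):
    """Show the single highest-weighted topic (deterministic on purpose,
    to make the narrowing obvious)."""
    return max(weights, key=weights.get)

def simulate_feed(clicked_topic, rounds=5):
    # All topics start with equal weight.
    weights = {"politics": 1.0, "science": 1.0, "sports": 1.0}
    shown = []
    for _ in range(rounds):
        weights[clicked_topic] *= 1.5  # each click reinforces the topic
        shown.append(pick_story(weights))
    return shown

print(simulate_feed("politics"))  # the feed converges on one topic
```

After a single reinforcement, the clicked topic dominates every subsequent round — which is the filter-bubble worry in miniature: the model never again shows the reader anything else.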

Loss of personal agency

I have been concerned for some time about the increasingly dull results I get from a Google search, and while I realize the company has been gathering information about me via my searches, supposedly in service of giving me better results, I had no idea how deeply the company can mine for personal data. It makes me wonder what would happen if Google and Facebook attempted a merger.

More cogently, I rather resent the search engines and artificial intelligence agents (e.g. Facebook bots) which have usurped my role as the arbiter of what interests me, in short, my increasing loss of personal agency.

I’m also deeply suspicious of what these companies are going to do with my data. Will it be used to manipulate me in some way? Presumably, the data will be sold and used for some purpose. In the US, data firms have married electoral data with consumer data, as Brent Bambury notes in an Oct. 13, 2017 article for his CBC (Canadian Broadcasting Corporation) Radio show,

How much of your personal information circulates in the free-market ether of metadata? It could be more than you imagine, and it might be enough to let others change the way you vote.

A data firm that specializes in creating psychological profiles of voters claims to have up to 5,000 data points on 220 million Americans. Cambridge Analytica has deep ties to the American right and was hired by the campaigns of Ben Carson, Ted Cruz and Donald Trump.

During the U.S. election, CNN called them “Donald Trump’s mind readers” and his secret weapon.

David Carroll is a Professor at the Parsons School of Design in New York City. He is one of the millions of Americans profiled by Cambridge Analytica and he’s taking legal action to find out where the company gets its masses of data and how they use it to create their vaunted psychographic profiles of voters.

On Day 6 [Bambury’s CBC radio programme], he explained why that’s important.

“They claim to have figured out how to project our voting behavior based on our consumer behavior. So it’s important for citizens to be able to understand this because it would affect our ability to understand how we’re being targeted by campaigns and how the messages that we’re seeing on Facebook and television are being directed at us to manipulate us.” [emphasis mine]

The parent company of Cambridge Analytica, SCL Group, is a U.K.-based data operation with global ties to military and political activities. David Carroll says the potential for sharing personal data internationally is a cause for concern.

“It’s the first time that this kind of data is being collected and transferred across geographic boundaries,” he says.

But that also gives Carroll an opening for legal action. An individual has more rights to access their personal information in the U.K., so that’s where he’s launching his lawsuit.

Reports link Michael Flynn, briefly Trump’s National Security Adviser, to SCL Group and indicate that former White House strategist Steve Bannon is a board member of Cambridge Analytica. Billionaire Robert Mercer, who has underwritten Bannon’s Breitbart operations and is a major Trump donor, also has a significant stake in Cambridge Analytica.

In the world of data, Mercer’s credentials are impeccable.

“He is an important contributor to the field of artificial intelligence,” says David Carroll.

“His work at IBM is seminal and really important in terms of the foundational ideas that go into big data analytics, so the relationship between AI and big data analytics. …

Bambury’s piece offers a lot more, including embedded videos, than I’ve included in that excerpt, but I also wanted to include some material from Carole Cadwalladr’s Oct. 1, 2017 Guardian article about Carroll and his legal fight in the UK,

“There are so many disturbing aspects to this. One of the things that really troubles me is how the company can buy anonymous data completely legally from all these different sources, but as soon as it attaches it to voter files, you are re-identified. It means that every privacy policy we have ignored in our use of technology is a broken promise. It would be one thing if this information stayed in the US, if it was an American company and it only did voter data stuff.”

But, he [Carroll] argues, “it’s not just a US company and it’s not just a civilian company”. Instead, he says, it has ties with the military through SCL – “and it doesn’t just do voter targeting”. Carroll has provided information to the Senate intelligence committee and believes that the disclosures mandated by a British court could provide evidence helpful to investigators.

Frank Pasquale, a law professor at the University of Maryland, author of The Black Box Society and a leading expert on big data and the law, called the case a “watershed moment”.

“It really is a David and Goliath fight and I think it will be the model for other citizens’ actions against other big corporations. I think we will look back and see it as a really significant case in terms of the future of algorithmic accountability and data protection. …

Nobody is discussing personal agency directly, but if you’re only being exposed to certain kinds of messages, then your personal agency has been taken from you. Admittedly, we don’t have complete personal agency in our lives, but AI, along with the data gathering done online and, increasingly, through wearable and smart technology, means that another layer of largely invisible control has been added to our lives. After all, the students in Elia Powers’ study didn’t realize their news feeds were being pre-curated.

Digital world and the Cleveland Museum of Art

If this project is as advertised, then the Cleveland Museum of Art has developed a truly exciting interactive experience. Cliff Kuang in his Mar. 6, 2013 article for Fast Company is definitely enthusiastic,

If you’re a youngster, why stare at a Greek urn when you could blow one up in a video game? One institution thinking deeply about the challenge is the Cleveland Museum of Art, which this month unveiled a series of revamped galleries, designed by Local Projects, which feature cutting-edge interactivity. But the technology isn’t the point. “We didn’t want to create a tech ghetto,” says David Franklin, the museum’s director. Adds Local Projects founder Jake Barton, “We wanted to make the tech predicated on the art itself.”

Put another way, the new galleries at CMA tackle the problem plaguing most ambitious UI projects today: How do you let the content shine, and get the tech out of the way? How do you craft an interaction between bytes and spaces that feels fun?

The Cleveland Museum of Art’s Jan. 14, 2013 news release describes the new project,

… Gallery One, a unique, interactive gallery that blends art, technology and interpretation to inspire visitors to explore the museum’s renowned collections. This revolutionary space features the largest multi-touch screen in the United States, which displays images of over 3,500 objects from the museum’s world-renowned permanent collection. This 40-foot Collection Wall allows visitors to shape their own tours of the museum and to discover the full breadth of the collections on view throughout the museum’s galleries.

Throughout the space, original works of art and digital interactives engage visitors in new ways, putting curiosity, imagination and creativity at the heart of their museum experience. Innovative user-interface design and cutting-edge hardware developed exclusively for Gallery One break new ground in art museum interpretation, design and technology.

“Technology is a vital tool for supporting visitor engagement with the collection,” adds C. Griffith Mann, Deputy Director and Chief Curator. “Putting the art experience first required an unprecedented partnership between the museum’s curatorial, design, education and technology staff.”

Comprised of three major areas, Gallery One offers something for visitors of all ages and levels of comfort with art. Studio Play is a bright and colorful space that offers the museum’s youngest visitors and their families a chance to play and learn about art. Highlights of this portion of Gallery One include: Line and Shape, a multi-touch, microtile wall on which visitors can draw lines that are matched to works of art in the collection; a shadow-puppet theater where silhouettes of objects can be used as “actors” in plays; mobile- and sculpture-building stations where visitors can create their own interpretations of modern sculptures by Calder [Alexander Calder] and Lipchitz [Jacques Lipchitz]; and a sorting and matching game featuring works from the permanent collection. This space is designed to encourage visitors of all ages to become active participants in their museum experience.

In the main gallery space, visitors have an opportunity to learn about the collection and to develop ways of looking at art that are both fun and educational. The gallery is comprised of fourteen themed groups of works from the museum’s collection, six of which have “lens” stations. The “lens” stations comprise 46” multi-touch screens that offer additional contextual information and dynamic, interactive activities that allow visitors to create experiences and share them with others through links to social media. Another unique feature of the space is the Beacon, an introductory, dynamic screen that displays real-time results of visitors’ activities in the space, such as favorite objects, tours and activities.

The largest multi-touch screen in the United States, the Collection Wall utilizes innovative technology to allow visitors to browse these works of art on the Wall, facilitating discovery and dialogue with other visitors. The Collection Wall can also serve as an orientation experience, allowing visitors to download existing tours or curate their own tours to take out into the galleries on iPads. The Collection Wall, as well as the other interactive in the gallery, illustrates the museum’s long-term investment in technology to enhance visitor access to factual and interpretative information about the permanent collection.

“The Collection Wall powerfully demonstrates how cutting-edge technology can inspire our visitors to engage with our collection in playful and original ways never before seen on this scale,” said Jane Alexander, Director of Information Management and Technology Services. “This space, unique among art museums internationally, will help make the Cleveland Museum of Art a destination museum.”

In concert with the opening of Gallery One, the museum has also created ArtLens, a multi-dimensional app for iPads. Utilizing image recognition software, visitors can scan two-dimensional objects in Gallery One and throughout the museum’s galleries to access up to 9 hours of additional multimedia content, including audio tour segments, videos and additional contextual information. Indoor triangulation-location technology also allows visitors to orient themselves in the galleries and find works of art with additional interpretive content throughout their visit.

“ArtLens allows the visitor to take the experience of Gallery One out in to the other areas of the museum,” said Caroline Goeser. “It brings in many voices and traditions from different cultures, as well as giving visitors a chance to see demonstrations of art making techniques by local artists. The content is layered so visitors can choose what interests them and discover new ways of looking at and interpreting the object. Their experience is guided by their own sense of curiosity and discovery.”
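The news release doesn’t explain how ArtLens’s “indoor triangulation-location technology” actually works, but distance-based indoor positioning generally reduces to trilateration: given distances to three beacons at known positions, solve for the visitor’s coordinates. A minimal 2-D sketch, with beacon positions and distances invented for illustration:

```python
# 2-D trilateration: subtracting the three circle equations pairwise
# turns a quadratic problem into a linear system in (x, y).
# Assumes the beacons are not collinear and that beacons 1 and 2
# differ in x (so the divisor A is nonzero).

def trilaterate(b1, b2, b3, d1, d2, d3):
    (x1, y1), (x2, y2), (x3, y3) = b1, b2, b3
    A = 2 * (x2 - x1)
    B = 2 * (y2 - y1)
    C = d1**2 - d2**2 - x1**2 + x2**2 - y1**2 + y2**2
    D = 2 * (x3 - x2)
    E = 2 * (y3 - y2)
    F = d2**2 - d3**2 - x2**2 + x3**2 - y2**2 + y3**2
    # Solve A*x + B*y = C and D*x + E*y = F.
    y = (C * D - A * F) / (B * D - A * E)
    x = (C - B * y) / A
    return x, y

# Visitor at (3, 4) in a 10 m gallery, beacons at three corners:
print(trilaterate((0, 0), (10, 0), (0, 10), 5.0, 65**0.5, 45**0.5))
```

Real deployments add a fourth beacon or least-squares fitting to cope with noisy distance estimates, but the geometry is the same.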

It’s interesting to note the companies that partnered with the museum and to note the source for the money supporting this effort (from the news release),

The museum partnered with several other companies to complete the project, including Local Projects (media design and development), Gallagher and Associates (design and development), Zenith (AV Integration), Piction (CMS/DAM development), Earprint Productions (app content development), and Navizon (way-finding).

Gallery One is generously supported by the Maltz Family Foundation, which donated $10 million to support the project. Additional support for the project comes from grants and other donations.

Kuang’s article makes the exhibits come alive,

The first gallery that many new visitors will see, Gallery One, is a signature space, meant to draw in a younger crowd. To that end, the exhibits are about fostering an intuitive understanding of the art. Which sounds like baloney, but the end results are quietly terrific. At the root, the exhibits encourage people to move, fostering a connection to the art that’s literally written on the body:

  • In one display, a computer analyzes the expression on a visitor’s face. Then, they can see work spanning thousands of years that matches their own visage.
  • Gallery One also offers a chance to directly experience the physical decisions behind how masterpieces are made. For example, in front of a Jackson Pollock painting is a virtual easel, loaded with tools that approximate Pollock’s own, so that visitors can pour their own drip painting and compare it to the real thing.

Sounds like very exciting stuff. For anyone who can’t visit the exhibit, there are videos, including this one where visitors strike a pose and an image (from the collection) mimicking the pose appears (ETA Mar.6.13 4:35 pm PST: I got this the wrong way round; the museum presents you with a piece of art and you strike the same pose),

Sculpture Lens – Strike A Pose – Cleveland Museum of Art from Local Projects on Vimeo.

Kuang covers that exhibit and much more in his article, which I strongly recommend reading, and he makes this point,

Even as the designers go wild with the technology, they never stop to consider what anyone who doesn’t care about that technology would stand to gain. It was Barton’s [Local Projects founder Jake Barton] own skepticism about technology that made the technology great. His team didn’t necessarily believe that high-tech flare would add value to the museum experience. So they strove to look past the technology.

As a technical writer, I had many, many arguments with developers about precisely that point; most of us don’t care about the technology. So, kudos to Jake Barton and all of the teams responsible for finding a way to integrate that understanding into a series of exhibits that allow the museum to showcase its collection, engage the public, and develop new audiences.

Meanwhile, the Council of Canadian Academies is poised to embark on an assessment which examines museums and other memory institutions along with digital technology from an entirely different perspective, Memory Institutions and the Digital Revolution,

Library and Archives Canada has asked the Council of Canadian Academies to assess how memory institutions, which includes archives, libraries, museums, and other cultural institutions, can embrace the opportunities and challenges of the changing ways in which Canadians are communicating and working in the digital age.

These trends present both significant challenges and opportunities for traditional memory institutions as they work towards ensuring that valuable information is safeguarded and maintained for the long term and for the benefit of future generations. It requires that they keep track of new types of records that may be of future cultural significance, and of any changes in how decisions are being documented. As part of this assessment, the Council’s expert panel will examine the evidence as it relates to emerging trends, international best practices in archiving, and strengths and weaknesses in how Canada’s memory institutions are responding to these opportunities and challenges. Once complete, this assessment will provide an in-depth and balanced report that will support Library and Archives Canada and other memory institutions as it considers how best to manage and preserve the mass quantity of communications records generated as a result of new and emerging technologies.

I last mentioned the ‘memory institutions’ assessment in my Feb. 22, 2013 posting in the context of their ‘science culture in Canada’ assessment panel. I find it odd that the Canada Science and Technology Museums Corporation was one of the requestors for the ‘science culture’ assessment but it is not involved (nor is any other museum) in the ‘memory institutions and digital revolution’ assessment.

After reading about the Cleveland Museum of Art project, something else strikes me as odd: there is no mention of analysing the role that museums, libraries, and others will play in a world which is increasingly ephemeral. After all, it’s not enough to keep and store records. There is no point if we can’t access them or even have knowledge of their existence. As for storing and displaying objects, this traditional museum function is increasingly being made impossible as objects seemingly disappear. The vinyl record, cassette tape, and CD (compact disc) have almost disappeared, replaced by digital files. Meanwhile, my local library has fewer and fewer books, DVDs, and other lending items. What roles are libraries, museums, and other memory institutions going to have in our lives?

Interacting with stories and/or with data

A researcher, Ivo Swartjes, at the University of Twente in the Netherlands is developing a means of allowing viewers to enter into a story (via avatar) and affect the plotline, in what seems like a combination of what you’d see in Second Life and gaming. The project also brings to mind The Diamond Age by Neal Stephenson, with its intelligent nanotechnology-enabled book, along with Stephenson’s latest publishing project, Mongoliad (which I blogged about here).

The article about Swartjes’ project on physorg.com by Rianne Wanders goes on to note,

The ‘Virtual Storyteller’, developed by Ivo Swartjes of the University of Twente, is a computer-controlled system that generates stories automatically. Soon it will be possible for you as a player to take on the role of a character and ‘step inside’ the story, which then unfolds on the basis of what you as a player do. In the gaming world there are already ‘branching storylines’ in which the gamer can influence the development of a story, but Swartjes’ new system goes a step further. [emphasis mine] The world of the story is populated with various virtual figures, each with their own emotions, plans and goals. ‘Rules’ drawn up in advance determine the characters’ behaviour, and the story comes about as the different characters interact.

There’s a video with the article if you want to see this project for yourself.
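The rule-driven approach described in that excerpt — characters carrying their own goals and emotions, with pre-authored rules determining behaviour — can be sketched in miniature. The characters and rules below are invented for illustration; the actual Virtual Storyteller is far richer (it plans multi-step plots, not single actions):

```python
class Character:
    """A virtual figure with its own goal and emotional state."""
    def __init__(self, name, goal, mood):
        self.name, self.goal, self.mood = name, goal, mood

# Each rule maps a (goal, mood) situation to a narrated action.
RULES = {
    ("treasure", "brave"): "{name} ventures into the cave.",
    ("treasure", "afraid"): "{name} hesitates at the cave mouth.",
    ("safety", "afraid"): "{name} runs back to the village.",
}

def step(character):
    """Narrate one action by matching the character's situation to a rule."""
    template = RULES.get((character.goal, character.mood),
                         "{name} waits and watches.")
    return template.format(name=character.name)

cast = [Character("Aldo", "treasure", "brave"),
        Character("Mira", "safety", "afraid")]
for character in cast:
    print(step(character))
```

The story ‘comes about’ simply by running the cast through the rules; change a character’s mood and a different narrative emerges, which is the emergent-narrative idea in its smallest form.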

On another related front, Cliff Kuang describes a new human-computer interface in an article (The Genius Behind Minority Report’s Interfaces Resurfaces, With Mind-blowing New Tech) on the Fast Company site. This story provides a contrast to the one about the ‘Virtual Storyteller’ because this time you don’t have to become an avatar to interact with the content. From the article,

It’s a cliche to say that Minority Report-style interfaces are just around the corner. But not when John Underkoffler [founder of Oblong Industries] is involved. As tech advisor on the film, he was the guy whose work actually inspired the interfaces that Tom Cruise used. The real-life system he’s been developing, called g-speak, is unbelievable.

Oblong hasn’t previously revealed most of the features you see in the latter half of the video [available in the article’s web page or on YouTube], including the ability to zoom in and fly through a virtual, 3-D image environment (6:30); the ability to navigate an SQL database in 3-D (8:40); the gestural wand that lets you manipulate and disassemble 3-D models (10:00); and the stunning movie-editing system, called Tamper (11:00).

Do go see the video. At one point, Underkoffler (who was speaking at the February 2010 TED) drags data from the big screen in front of him onto a table set up on the stage where he’s speaking.

Perhaps most shocking (at least for me) was the information that this interface is already in use commercially (probably in a limited way).

These developments and many others suggest that the printed word’s primacy is seriously on the wane, something I first heard 20 years ago. Oftentimes when ideas about how technology will affect us are discussed, there’s a kind of hysterical reaction which is remarkably similar across at least two centuries. Dave Bruggeman at his Pasco Phronesis blog has a posting about the similarities between Twitter and 19th century diaries,

Lee Humphreys, a Cornell University communications professor, has reviewed several 18th and 19th century diaries as background to her ongoing work in classifying Twitter output (H/T Futurity). These were relatively small journals, necessitating short messages. And those messages bear a resemblance to the kinds of Twitter messages that focus on what people are doing (as opposed to the messages where people are reacting to things).

Dave goes on to recommend The Shock of the Old: Technology and Global History since 1900 by David Edgerton as an antidote to our general ignorance (from the book’s web page),

Edgerton offers a startling new and fresh way of thinking about the history of technology, radically revising our ideas about the interaction of technology and society in the past and in the present.

I’d also recommend Carolyn Marvin’s book, When Old Technologies Were New, where she discusses the introduction of telecommunications technology and includes the electric light with these then new technologies (telegraph and telephone). She includes cautionary commentary from the newspapers, magazines, and books of the day which is remarkably similar to what’s available in our contemporary media environment.

Adding a little more fuel is Stephen Hume, who asks in a June 12, 2010 article about Shakespeare for the Vancouver Sun,

But is the Bard relevant in an age of atom bombs; a world of instant communication gratified by movies based on comic books, sex-saturated graphic novels, gory video games, the television soaps and the hip tsunami of fan fiction that swashes around the Internet?

[and answers]

So, the Bard may be stereotyped as the bane of high school students, symbol of snooty, barely comprehensible language, disparaged as sexist, racist, anti-Semitic, representative of an age in which men wore tights and silly codpieces to inflate their egos, but Shakespeare trumps his critics by remaining unassailably popular.

His plays have been performed on every continent in every major language. He’s been produced as classic opera in China; as traditional kabuki in Japan. He’s been enthusiastically embraced and sparked an artistic renaissance in South Asia. In St. Petersburg, Russia, there can be a dozen Shakespeare plays running simultaneously. Shakespeare festivals occur in Austria, Belgium, Finland, Portugal, Sweden and Turkey, to list but a few.

Yes to Pasco Phronesis, David Edgerton, Carolyn Marvin, and Stephen Hume: I agree that we have much in common with our ancestors, but there are also some profound and subtle differences not easily articulated. I suspect that if time travel were possible and we could visit Shakespeare’s time, we would find that the basic human experience doesn’t change that much, but that we would be hard-pressed to fit into that society, as our ideas wouldn’t just be outlandish, they would be unthinkable. I mean literally unthinkable.

As Walter Ong noted in his book, Orality and Literacy, the concept of a certain type of list is a product of literacy. Have you ever done that test where you pick out the item that doesn’t belong on the list? Try: hammer, saw, nails, tree. The correct answer, as anybody knows, is tree, since it’s not a tool. However, someone from an oral culture would view the exclusion of the tree as crazy, since you need both tools and wood to build something, and clearly the tree provides wood. (I’ll see if I can find the citation in Ong’s book, as he provides research to prove his point.) A list is a particular way of organizing information and thinking about it.

Reimagining prosthetic arms; touchable holograms and brief thoughts on multimodal science communication; and nanoscience conference in Seattle

Reimagining the prosthetic arm, an article by Cliff Kuang in Fast Company (here), highlights a student design project at New York’s School of Visual Arts. Students were asked to improve prosthetic arms and were given four categories: decorative, playful, utilitarian, and awareness. This one by Tonya Douraghey and Carli Pierce caught my fancy; after all, who hasn’t thought of growing wings? (From the Fast Company website),

Feathered cuff and wing arm

I suggest reading Kuang’s article before heading off to the project website to see more student projects.

At the end of yesterday’s posting about MICA and multidimensional data visualization in spaces with up to 12 dimensions (here) in virtual worlds such as Second Life, I made a comment about multimodal discourse, which is something I think will become increasingly important. I’m not sure I can imagine 12 dimensions, but I don’t expect that our usual means of visualizing or understanding data are going to be sufficient for the task. Consequently, I’ve been noticing more projects which engage some of our other senses, notably touch. For example, the SIGGRAPH 2009 conference in New Orleans featured a hologram that you can touch, described in another article by Cliff Kuang in Fast Company, Holograms that you can touch and feel. For anyone unfamiliar with SIGGRAPH, the show has introduced a number of important innovations, notably clickable icons. It’s hard to believe, but there was a time when everything was done by keyboard.
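For what it’s worth, one standard way to make high-dimensional data visible at all is to project it down to the two or three directions that carry the most variance. A minimal sketch, using principal component analysis via NumPy’s SVD on made-up 12-dimensional data (purely illustrative; nothing here reflects MICA’s actual methods):

```python
import numpy as np

# Generate some made-up 12-dimensional data: 200 points, 12 coordinates each.
rng = np.random.default_rng(0)
data = rng.normal(size=(200, 12))

# PCA via SVD: center the data, then project onto the top 3 right singular
# vectors, i.e. the 3 directions of greatest variance.
centered = data - data.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
projected = centered @ vt[:3].T

print(projected.shape)  # (200, 3): now plottable in ordinary 3-D
```

Of course, flattening 12 dimensions to 3 throws information away, which is exactly why projects that recruit other senses (touch, sound, immersive virtual spaces) are worth watching.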

My August newsletter from NISE Net (Nanoscale Informal Science Education Network) brings news of a conference in Seattle, WA at the Pacific Science Center, Sept. 8 – 11, 2009. It will feature (from the NISE Net blog),

Members of the NISE Net Program group and faculty and students at the Center for Nanotechnology in Society at Arizona State University are teaming up to demonstrate and discuss potential collaborations between the social science community and the informal science education community at a conference of the Society for the Study of Nanoscience and Emerging Technologies in Seattle in early September.

There’s more at the NISE Net blog here including a link to the conference site. (I gather the Society for the Study of Nanoscience and Emerging Technologies is in its very early stages of organizing, so this is a fairly informal call for registrants.)

The NISE Net nano haiku this month is,

Nanoparticles
Surface plasmon resonance
Silver looks yellow

by Dr. Katie D. Cadwell of the University of Wisconsin-Madison MRSEC.

Have a nice weekend!

Visualizing innovation and the ACS’s second nanotube contest

I’ve found more material on visualizing data; this time the data is about innovation. An article by Cliff Kuang in Fast Company comments on WAINOVA (the World Alliance for Innovation) and its interactive atlas of innovation. From the article,

Bestario, a Spanish infographics firm, designs Web sites that attempt to find new relationships in a teeming mass of data. Sometimes, the results are interesting, as examples, if nothing else, of data porn; other times, it’s merely confounding. Its new project is a great deal easier to explain: The Wainova World Atlas of Innovation attempts to map the world’s major science and business incubators, as well as the professional associations linking them.

Kuang goes on to point out some of the difficulties associated with visualizing data when you get beyond using bar graphs and pie charts. The atlas can be found here on the WAINOVA site. If you’re interested in looking at more data visualization projects, you can check out the infosthetics site mentioned in Kuang’s article.

Rob Annan at the Don’t leave Canada behind blog has picked up on an article in the Financial Post which, based on an American Express survey, states that Canadian business is being very innovative despite the economic downturn. You can read Annan’s comments and get a link to the Financial Post article here. As for my take on it all, I concede that it takes nerve to keep investing in your business when everything is so uncertain, but I agree with Annan (if I may take the liberty of rephrasing his comment slightly) that there’s no real innovation in the examples given in the Financial Post article.

The American Chemical Society (ACS) has announced its second nano video contest. From the announcement on Azonano,

In our last video contest “What is Nano?”, you showed us that nano is a way of making things smaller, lighter and more efficient, making it possible to build better machines, solar cells, materials and radios. But another question remains: how exactly is “nano” going to impact both us and the world? We want you to think big about nano and show us how nano will address the challenges we face today.

The contest is being run by ACS Nanotation NanoTube. There’s a cash prize of US$500, and submissions must be made between July 6, 2009 and August 9, 2009. (Sorry, I kept forgetting to put this up.) You must be a registered user to make a submission, but registration is free here. The Nano Song (complete with puppets!) that was making the rounds a few months ago was a submission for the first contest.

Elsevier has announced a new project, the Article of the Future. The beta site is here. From the announcement on Nanowerk News,

Elsevier, a leading publisher of scientific, technical and medical information products and services, today announces the ‘Article of the Future’ project, an ongoing collaboration with the scientific community to redefine how a scientific article is presented online. The project takes full advantage of online capabilities, allowing readers individualized entry points and routes through content, while exploiting the latest advances in visualization techniques.

Yes, it’s back to visualization and, eventually, multimodal discourse analysis, and one of the big questions (for me): how is all this visualizing of data going to affect our knowledge? More tomorrow.