
Ishiguro’s robots and Swiss scientist question artificial intelligence at SXSW (South by Southwest) 2017

It seems unexpected to stumble across presentations on robots and on artificial intelligence at an entertainment conference such as South by Southwest (SXSW). Here’s why I thought so, from the SXSW Wikipedia entry (Note: Links have been removed),

South by Southwest (abbreviated as SXSW) is an annual conglomerate of film, interactive media, and music festivals and conferences that take place in mid-March in Austin, Texas, United States. It began in 1987, and has continued to grow in both scope and size every year. In 2011, the conference lasted for 10 days with SXSW Interactive lasting for 5 days, Music for 6 days, and Film running concurrently for 9 days.

Lifelike robots

The 2017 SXSW Interactive featured separate presentations by Japanese roboticist, Hiroshi Ishiguro (mentioned here a few times), and EPFL (École Polytechnique Fédérale de Lausanne; Switzerland) artificial intelligence expert, Marcel Salathé.

Ishiguro’s work is the subject of Harry McCracken’s March 14, 2017 article for Fast Company (Note: Links have been removed),

I’m sitting in the Japan Factory pavilion at SXSW in Austin, Texas, talking to two other attendees about whether human beings are more valuable than robots. I say that I believe human life to be uniquely precious, whereupon one of the others rebuts me by stating that humans allow cars to exist even though they kill humans.

It’s a reasonable point. But my fellow conventioneer has a bias: It’s a robot itself, with an ivory-colored, mask-like face and visible innards. So is the third participant in the conversation, a much more human automaton modeled on a Japanese woman and wearing a black-and-white blouse and a blue scarf.

We’re chatting as part of a demo of technologies developed by the robotics lab of Hiroshi Ishiguro, based at Osaka University, and Japanese telecommunications company NTT. Ishiguro has gained fame in the field by creating increasingly humanlike robots—that is, androids—with the ultimate goal of eliminating the uncanny valley that exists between people and robotic people.

I also caught up with Ishiguro himself at the conference—his second SXSW—to talk about his work. He’s a champion of the notion that people will respond best to robots who simulate humanity, thereby creating “a feeling of presence,” as he describes it. That gives him and his researchers a challenge that encompasses everything from technology to psychology. “Our approach is quite interdisciplinary,” he says, which is what prompted him to bring his work to SXSW.

An SXSW attendee talks about robots with two robots.

If you have the time, do read McCracken’s piece in its entirety.

You can find out more about the ‘uncanny valley’ in my March 10, 2011 posting about Ishiguro’s work if you scroll down about 70% of the way to find the ‘uncanny valley’ diagram and Masahiro Mori’s description of the concept he developed.

You can read more about Ishiguro and his colleague, Ryuichiro Higashinaka, on their SXSW biography page.

Artificial intelligence (AI)

In a March 15, 2017 EPFL press release by Hilary Sanctuary, scientist Marcel Salathé poses the question: Is Reliable Artificial Intelligence Possible?,

In the quest for reliable artificial intelligence, EPFL scientist Marcel Salathé argues that AI technology should be openly available. He will be discussing the topic at this year’s edition of South by Southwest on March 14th in Austin, Texas.

Will artificial intelligence (AI) change the nature of work? For EPFL theoretical biologist Marcel Salathé, the answer is invariably yes. To him, a more fundamental question that needs to be addressed is who owns that artificial intelligence?

“We have to hold AI accountable, and the only way to do this is to verify it for biases and make sure there is no deliberate misinformation,” says Salathé. “This is not possible if the AI is privatized.”

AI is both the algorithm and the data

So what exactly is AI? It is generally regarded as “intelligence exhibited by machines”. Today, it is highly task specific, specially designed to beat humans at strategic games like Chess and Go, or diagnose skin disease on par with doctors’ skills.

On a practical level, AI is implemented through what scientists call “machine learning”, which means using a computer to run specifically designed software that can be “trained”, i.e. process data with the help of algorithms and correctly identify certain features from that data set. Like human cognition, AI learns by trial and error. Unlike humans, however, AI can process and recall large quantities of data, giving it a tremendous advantage over us.

Crucial to AI learning, therefore, is the underlying data. For Salathé, AI is defined by both the algorithm and the data, and as such, both should be publicly available.
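Since the press release defines AI as algorithm plus data, a minimal sketch may help make the “training” idea concrete. The library (scikit-learn) and data set (its built-in handwritten digits) are my illustrative choices, not anything EPFL names:

```python
# A toy version of "machine learning": an algorithm plus a labeled
# data set, trained to identify features from examples.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

digits = load_digits()  # the data: labeled images of handwritten digits

# Hold some data back so we can check what the model actually learned.
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=5000)  # the algorithm
model.fit(X_train, y_train)                # "training" on the data
print(f"accuracy on unseen images: {model.score(X_test, y_test):.2f}")
```

Salathé’s point maps directly onto the two objects in this sketch: model (the algorithm) and digits (the data). Verifying a system for biases means being able to inspect both.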

Deep learning algorithms can be perturbed

Last year, Salathé created an algorithm to recognize plant diseases. With more than 50,000 photos of healthy and diseased plants in the database, the algorithm uses artificial intelligence to diagnose plant diseases with the help of your smartphone. As for human disease, a recent study by a Stanford group showed that AI can be trained to recognize skin cancer slightly better than a group of doctors. The consequences are far-reaching: AI may one day diagnose our diseases instead of doctors. If so, will we really be able to trust its diagnosis?

These diagnostic tools use data sets of images to train and learn. But visual data sets can be perturbed in ways that prevent deep learning algorithms from correctly classifying images. Deep neural networks are highly vulnerable to visual perturbations that are practically impossible to detect with the naked eye, yet cause the AI to misclassify images.

In future implementations of AI-assisted medical diagnostic tools, these perturbations pose a serious threat. More generally, the perturbations are real and may already be affecting the filtered information that reaches us every day. These vulnerabilities underscore the importance of certifying AI technology and monitoring its reliability.
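The press release doesn’t name a specific attack, but the best-known example of an imperceptible perturbation is the fast gradient sign method (FGSM) of Goodfellow et al. Here is a sketch in PyTorch (the framework and function names are my own, for illustration):

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=1/255):
    """Fast gradient sign method: shift every pixel a tiny step in the
    direction that most increases the model's loss. With a small epsilon
    the change is invisible to the eye, yet can flip the predicted class."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)  # how wrong the model is now
    loss.backward()                              # gradient of loss w.r.t. pixels
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()  # keep pixels in valid range
```

An epsilon of 1/255 changes each pixel by less than one gray level, which is exactly the kind of perturbation that is “practically impossible to detect with the naked eye.”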

h/t phys.org March 15, 2017 news item

As I noted earlier, these are not the kind of presentations you’d expect at an ‘entertainment’ festival.

Bending and twisting at CEATEC

CEATEC (Cutting Edge IT [Information Technology] and Electronics Comprehensive Exhibition) Japan, Oct. 4-8, 2011, is a large technology fair being held in Chiba, near Tokyo. Some 600 companies are showcasing their latest and greatest, according to the Oct. 4, 2011 news item on physorg.com,

Around 600 firms unveiled their innovations at the Combined Exhibition of Advanced Technologies (Ceatec) exhibition in Chiba, near Tokyo, expected to draw 200,000 visitors during its five-day run, organisers said.

The impact of Japan’s March 11 earthquake, tsunami and nuclear disaster gave added resonance to technologies on display, particularly those aimed at improving urban infrastructure and energy efficiency.

State-of-the-art radiation counters and power-saving technologies are in high demand after Japan’s disasters sparked fears over contamination and led to power shortages, requiring cuts to energy consumption this summer.

Japanese telecom giant NTT [Nippon Telegraph and Telephone] DoCoMo showed off a smartphone with changeable sensor-embedded shells that can detect bad breath, vital body signs and even be used to measure background radiation levels.

One item that particularly interested me is a transparent organic film from Murata Manufacturing. From the news item,

Electronics parts maker Murata Manufacturing unveiled devices using a newly developed transparent organic film that can deliver instructions via twisting motions or pressure.

One of its gadgets, a light-powered plate called the Leaf Grip Remote Controller, has no buttons but is instead operated by the user bending and twisting it.

Another application of the film is as a touch panel which responds to left-right and up-down finger swipes but also senses how strongly it is being pressed, unlike conventional touchscreen glass used on smartphones.

“Currently we give commands two-dimensionally on touch panels in smartphones and tablet computers but this invention would give us another dimension — how hard they are pressed,” Murata spokesman Kazuhisa Mashita said.

“This could enable users to scroll screens slowly by touching the screen lightly and move images faster by pressing it harder,” he told AFP [Agence France-Presse] ahead of the exhibition.
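Murata hasn’t published how it maps force to scrolling, but the behaviour Mashita describes — a light touch scrolls slowly, a hard press scrolls fast — comes down to a simple transfer function. A hypothetical sketch, with all names and constants invented for illustration:

```python
def scroll_speed(pressure, min_speed=20.0, max_speed=2000.0):
    """Map a normalized press force (0.0 = light touch, 1.0 = hard press)
    to a scroll velocity in pixels per second. Squaring the pressure keeps
    light touches fine-grained while still allowing fast flicks."""
    pressure = max(0.0, min(1.0, pressure))  # clamp a noisy sensor reading
    return min_speed + (max_speed - min_speed) * pressure ** 2

# A light touch scrolls slowly; a firm press moves images much faster.
for p in (0.1, 0.5, 1.0):
    print(f"pressure {p:.1f} -> {scroll_speed(p):6.1f} px/s")
```

The squared curve is just one plausible choice; the point is that the third dimension Mashita mentions becomes a single extra input to whatever gesture handler already processes swipes.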

Earlier this year when CHI (computer-human interface) 2011 was taking place in Vancouver, Canada, I wrote about Roel Vertegaal and his team’s work on their PaperPhone and bending and twisting gestures (May 12, 2011 posting).

Bending and twisting a flexible screen doesn’t seem all that complicated but when you think about making those gestures meaningful, i.e., ‘slowing a screen image by pressing more softly’, you realize just how much effort and thought are required for features that, if successful, will not be noticed.