Tag Archives: Osaka University

Dessert or computer screen?

Scientists at Japan’s Osaka University have developed a technique for creating higher resolution computer and smartphone screens from the main ingredient of a dessert, nata de coco. From the nata de coco Wikipedia entry (Note: Links have been removed),

Nata de coco (also marketed as “coconut gel”) is a chewy, translucent, jelly-like food produced by the fermentation of coconut water,[1] which gels through the production of microbial cellulose by ‘Komagataeibacter xylinus’. Originating in the Philippines, nata de coco is most commonly sweetened as a candy or dessert, and can accompany a variety of foods, including pickles, drinks, ice cream, puddings, and fruit cocktails.[2]

An April 18, 2019 news item on Nanowerk announces the research (Note: A link has been removed),

A team at the Institute of Scientific and Industrial Research at Osaka University has determined the optical parameters of cellulose molecules with unprecedented precision. They found that cellulose’s intrinsic birefringence, which describes how a material reacts differently to light of various orientations, is powerful enough to be used in optical displays, such as flexible screens or electronic paper (ACS Macro Letters, “Estimation of the Intrinsic Birefringence of Cellulose Using Bacterial Cellulose Nanofiber Films”).

An April 18, 2019 Osaka University press release on AlphaGalileo, which originated the news item, provides some historical context for the use of cellulose along with additional detail about the research,

Cellulose is an ancient material that may be poised for a major comeback. It has been utilized for millennia as the primary component of paper books, cotton clothing, and nata de coco, a tropical dessert made from coconut water. While books made of dead trees and plain old shirts might seem passé in a world increasingly filled with tablets and smartphones, researchers at Osaka University have shown that cellulose might have just what it takes to make our modern electronic screens cheaper and provide sharper, more vibrant images.

Cellulose, a naturally occurring polymer, consists of many long molecular chains. Because of its rigidity and strength, cellulose helps maintain the structural integrity of the cell walls in plants. It makes up about 99% of the nanofibers that comprise nata de coco, and helps create its unique and tasty texture.

The team at Osaka University achieved better results using unidirectionally-aligned cellulose nanofiber films created by stretching hydrogels from nata de coco at various rates. Nata de coco nanofibers allow the cellulose chains to be straight on the molecular level, and this is helpful for the precise determination of the intrinsic birefringence–that is, the maximum birefringence of fully extended polymer chains. The researchers were also able to measure the birefringence more accurately through improvements in method. “Using high quality samples and methods, we were able to reliably determine the inherent birefringence of cellulose, for which very different values had been previously estimated,” says senior author Masaya Nogi.

The main application the researchers envision is as light compensation films for liquid crystal displays (LCDs), since they operate by controlling the brightness of pixels with filters that allow only one orientation of light to pass through. Potentially, any smartphone, computer, or television that has an LCD screen could see improved contrast, along with reduced color unevenness and light leakage with the addition of cellulose nanofiber films.

“Cellulose nanofibers are promising light compensation materials for optoelectronics, such as flexible displays and electronic paper, since they simultaneously have good transparency, flexibility, dimensional stability, and thermal conductivity,” says lead author Kojiro Uetani. “So look for this ancient material in your future high-tech devices.”
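
For anyone curious about how that extrapolation to fully extended polymer chains works in practice, here’s a minimal numerical sketch. It assumes the textbook linear relation between measured birefringence and a chain-orientation factor; the relation is standard polymer physics, but the numbers are invented, and this is my illustration, not the paper’s data or code,

```python
# A minimal sketch, assuming the textbook relation delta_n = f * delta_n0,
# where f is a chain-orientation factor in [0, 1] and delta_n0 is the
# intrinsic birefringence of fully extended chains. All numbers are
# hypothetical, for illustration only.
import numpy as np

f = np.array([0.45, 0.58, 0.70, 0.81])            # orientation factors
delta_n = np.array([0.027, 0.035, 0.042, 0.049])  # measured birefringence

# Least-squares slope through the origin extrapolates to f = 1.
delta_n0 = np.sum(f * delta_n) / np.sum(f**2)
print(f"estimated intrinsic birefringence: {delta_n0:.3f}")
```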

Here’s a link to and a citation for the paper,

Estimation of the Intrinsic Birefringence of Cellulose Using Bacterial Cellulose Nanofiber Films by Kojiro Uetani, Hirotaka Koga, and Masaya Nogi. ACS Macro Lett., 2019, 8 (3), pp 250–254 DOI: 10.1021/acsmacrolett.9b00024 Publication Date (Web): February 22, 2019 Copyright © 2019 American Chemical Society

This paper is behind a paywall.

A potpourri of robot/AI stories: killers, kindergarten teachers, a Balenciaga-inspired AI fashion designer, a conversational android, and more

Following on my August 29, 2018 post (Sexbots, sexbot ethics, families, and marriage), here’s a more general piece.

Robots, AI (artificial intelligence), and androids (humanoid robots): the terms can be confusing since there’s a tendency to use them interchangeably. Confession: I do it too, but not this time. That said, I have multiple news bits.

Killer ‘bots and ethics

The U.S. military is already testing a Modular Advanced Armed Robotic System. Credit: Lance Cpl. Julien Rodarte, U.S. Marine Corps

That is a robot.

For the purposes of this posting, a robot is a piece of hardware which may or may not include an AI system and which does not mimic a human or other biological organism so closely that you might, under some circumstances, mistake the robot for a biological organism.

As for what precipitated this feature (in part), it seems there was a United Nations meeting in Geneva, Switzerland, held from August 27 – 31, 2018, about war and the use of autonomous robots, i.e., robots equipped with AI systems and designed for independent action. BTW, it’s not the first meeting the UN has held on this topic.

Bonnie Docherty, lecturer on law and associate director of armed conflict and civilian protection, international human rights clinic, Harvard Law School, has written an August 21, 2018 essay on The Conversation (also on phys.org) describing the history and the current rules around the conduct of war, as well as outlining the issues with the military use of autonomous robots (Note: Links have been removed),

When drafting a treaty on the laws of war at the end of the 19th century, diplomats could not foresee the future of weapons development. But they did adopt a legal and moral standard for judging new technology not covered by existing treaty language.

This standard, known as the Martens Clause, has survived generations of international humanitarian law and gained renewed relevance in a world where autonomous weapons are on the brink of making their own determinations about whom to shoot and when. The Martens Clause calls on countries not to use weapons that depart “from the principles of humanity and from the dictates of public conscience.”

I was the lead author of a new report by Human Rights Watch and the Harvard Law School International Human Rights Clinic that explains why fully autonomous weapons would run counter to the principles of humanity and the dictates of public conscience. We found that to comply with the Martens Clause, countries should adopt a treaty banning the development, production and use of these weapons.

Representatives of more than 70 nations will gather from August 27 to 31 [2018] at the United Nations in Geneva to debate how to address the problems with what they call lethal autonomous weapon systems. These countries, which are parties to the Convention on Conventional Weapons, have discussed the issue for five years. My co-authors and I believe it is time they took action and agreed to start negotiating a ban next year.

Docherty elaborates on her points (Note: A link has been removed),

The Martens Clause provides a baseline of protection for civilians and soldiers in the absence of specific treaty law. The clause also sets out a standard for evaluating new situations and technologies that were not previously envisioned.

Fully autonomous weapons, sometimes called “killer robots,” would select and engage targets without meaningful human control. They would be a dangerous step beyond current armed drones because there would be no human in the loop to determine when to fire and at what target. Although fully autonomous weapons do not yet exist, China, Israel, Russia, South Korea, the United Kingdom and the United States are all working to develop them. They argue that the technology would process information faster and keep soldiers off the battlefield.

The possibility that fully autonomous weapons could soon become a reality makes it imperative for those and other countries to apply the Martens Clause and assess whether the technology would offend basic humanity and the public conscience. Our analysis finds that fully autonomous weapons would fail the test on both counts.

I encourage you to read the essay in its entirety and for anyone who thinks the discussion about ethics and killer ‘bots is new or limited to military use, there’s my July 25, 2016 posting about police use of a robot in Dallas, Texas. (I imagine the discussion predates 2016 but that’s the earliest instance I have here.)

Teacher bots

Robots come in many forms and this one is on the humanoid end of the spectrum,

Children watch a Keeko robot at the Yiswind Institute of Multicultural Education in Beijing, where the intelligent machines are telling stories and challenging kids with logic problems [downloaded from https://phys.org/news/2018-08-robot-teachers-invade-chinese-kindergartens.html]

Don’t those ‘eyes’ look almost heart-shaped? No wonder the kids love these robots, if an August 29, 2018 news item on phys.org can be believed,

The Chinese kindergarten children giggled as they worked to solve puzzles assigned by their new teaching assistant: a roundish, short educator with a screen for a face.

Just under 60 centimetres (two feet) high, the autonomous robot named Keeko has been a hit in several kindergartens, telling stories and challenging children with logic problems.

Round and white with a tubby body, the armless robot zips around on tiny wheels, its inbuilt cameras doubling up both as navigational sensors and a front-facing camera allowing users to record video journals.

In China, robots are being developed to deliver groceries, provide companionship to the elderly, dispense legal advice and now, as Keeko’s creators hope, join the ranks of educators.

At the Yiswind Institute of Multicultural Education on the outskirts of Beijing, the children have been tasked to help a prince find his way through a desert—by putting together square mats that represent a path taken by the robot—part storytelling and part problem-solving.

Each time they get an answer right, the device reacts with delight, its face flashing heart-shaped eyes.

“Education today is no longer a one-way street, where the teacher teaches and students just learn,” said Candy Xiong, a teacher trained in early childhood education who now works with Keeko Robot Xiamen Technology as a trainer.

“When children see Keeko with its round head and body, it looks adorable and children love it. So when they see Keeko, they almost instantly take to it,” she added.

Keeko robots have entered more than 600 kindergartens across the country, with their makers hoping to expand into Greater China and Southeast Asia.

Beijing has invested money and manpower in developing artificial intelligence as part of its “Made in China 2025” plan, with a Chinese firm last year unveiling the country’s first human-like robot that can hold simple conversations and make facial expressions.

According to the International Federation of Robotics, China has the world’s top industrial robot stock, with some 340,000 units in factories across the country engaged in manufacturing and the automotive industry.

Moving on from hardware/software to a software-only story.

AI fashion designer better than Balenciaga?

Despite the title of Katharine Schwab’s August 22, 2018 article for Fast Company, I don’t think this AI designer is better than Balenciaga, but from the pictures I’ve seen, the designs are as good, and the project does present some intriguing possibilities courtesy of its neural network (Note: Links have been removed),

The AI, created by researcher Robbie Barrat, has created an entire collection based on Balenciaga’s previous styles. There’s a fabulous pink and red gradient jumpsuit that wraps all the way around the model’s feet–like a onesie for fashionistas–paired with a dark slouchy coat. There’s a textural color-blocked dress, paired with aqua-green tights. And for menswear, there’s a multi-colored, shimmery button-up with skinny jeans and mismatched shoes. None of these looks would be out of place on the runway.

To create the styles, Barrat collected images of Balenciaga’s designs via the designer’s lookbooks, ad campaigns, runway shows, and online catalog over the last two months, and then used them to train the pix2pix neural net. While some of the images closely resemble humans wearing fashionable clothes, many others are a bit off–some models are missing distinct limbs, and don’t get me started on how creepy [emphasis mine] their faces are. Even if the outfits aren’t quite ready to be fabricated, Barrat thinks that designers could potentially use a tool like this to find inspiration. Because it’s not constrained by human taste, style, and history, the AI comes up with designs that may never occur to a person. “I love how the network doesn’t really understand or care about symmetry,” Barrat writes on Twitter.

You can see the ‘creepy’ faces and some of the designs here,

Image: Robbie Barrat

In contrast to the previous two stories, this is all about algorithms; no machinery with independent movement (robot hardware) needed.

Conversational android: Erica

Hiroshi Ishiguro and his lifelike (definitely humanoid) robots have featured here many, many times before. The most recent is a March 27, 2017 posting about his and his android’s participation at the 2017 SXSW festival.

His latest work is featured in an August 21, 2018 news item on ScienceDaily,

We’ve all tried talking with devices, and in some cases they talk back. But it’s a far cry from having a conversation with a real person.

Now a research team from Kyoto University, Osaka University, and the Advanced Telecommunications Research Institute, or ATR, has significantly upgraded the interaction system for conversational android ERICA, giving her even greater dialog skills.

ERICA is an android created by Hiroshi Ishiguro of Osaka University and ATR, specifically designed for natural conversation through incorporation of human-like facial expressions and gestures. The research team demonstrated the updates during a symposium at the National Museum of Emerging Science in Tokyo.

Here’s the latest conversational android, Erica

Caption: The experimental set up when the subject (left) talks with ERICA (right) Credit: Kyoto University / Kawahara lab

An August 20, 2018 Kyoto University press release on EurekAlert, which originated the news item, offers more details,

“When we talk to one another, it’s never a simple back and forward progression of information,” states Tatsuya Kawahara of Kyoto University’s Graduate School of Informatics, and an expert in speech and audio processing.

“Listening is active. We express agreement by nodding or saying ‘uh-huh’ to maintain the momentum of conversation. This is called ‘backchanneling’, and is something we wanted to implement with ERICA.”

The team also focused on developing a system for ‘attentive listening’. This is when a listener asks elaborating questions, or repeats the last word of the speaker’s sentence, allowing for more engaging dialogue.

Deploying a series of distance sensors, facial recognition cameras, and microphone arrays, the team began collecting data on parameters necessary for a fluid dialog between ERICA and a human subject.

“We looked at three qualities when studying backchanneling,” continues Kawahara. “These were: timing — when a response happens; lexical form — what is being said; and prosody, or how the response happens.”

Responses were generated through machine learning using a counseling dialogue corpus, resulting in dramatically improved dialog engagement. Testing in five-minute sessions with a human subject, ERICA demonstrated significantly more dynamic speaking skill, including the use of backchanneling, partial repeats, and statement assessments.

“Making a human-like conversational robot is a major challenge,” states Kawahara. “This project reveals how much complexity there is in listening, which we might consider mundane. We are getting closer to a day where a robot can pass a Total Turing Test.”
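
For the curious, the ‘backchanneling’ decision can be pictured as a small classification problem. The sketch below is mine, not the ERICA team’s code, and its features and numbers are invented; it only shows the shape of the idea: given crude timing and prosody features for the speaker’s last utterance, predict whether the listener should offer an ‘uh-huh’,

```python
# Illustrative only -- a toy backchannel predictor, not the ERICA system.
# Features per utterance: [pause_sec, pitch_slope, utterance_len_words];
# all values are invented.
from sklearn.linear_model import LogisticRegression

X = [
    [0.8, -0.5, 12],  # long pause, falling pitch -> good backchannel spot
    [0.1,  0.3,  4],  # brief pause, rising pitch -> keep listening
    [0.6, -0.2,  9],
    [0.2,  0.4,  3],
]
y = [1, 0, 1, 0]      # 1 = respond with an "uh-huh" or a nod

model = LogisticRegression().fit(X, y)
print(model.predict([[0.7, -0.4, 10]]))  # likely [1]: offer a backchannel
```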

Erica seems to have been first introduced publicly in Spring 2017, from an April 2017 Erica: Man Made webpage on The Guardian website,

Erica is 23. She has a beautiful, neutral face and speaks with a synthesised voice. She has a degree of autonomy – but can’t move her hands yet. Hiroshi Ishiguro is her ‘father’ and the bad boy of Japanese robotics. Together they will redefine what it means to be human and reveal that the future is closer than we might think.

Hiroshi Ishiguro and his colleague Dylan Glas are interested in what makes a human. Erica is their latest creation – a semi-autonomous android, the product of the most funded scientific project in Japan. But these men regard themselves as artists more than scientists, and the Erica project – the result of a collaboration between Osaka and Kyoto universities and the Advanced Telecommunications Research Institute International – is a philosophical one as much as a technological one.

Erica is interviewed about her hopes and dreams – to be able to leave her room and to be able to move her arms and legs. She likes to chat with visitors and has one of the most advanced speech synthesis systems yet developed. Can she be regarded as being alive or as a comparable being to ourselves? Will she help us to understand ourselves and our interactions as humans better?

Erica and her creators are interviewed in the science fiction atmosphere of Ishiguro’s laboratory, and this film asks how we might form close relationships with robots in the future. Ishiguro thinks that for Japanese people especially, everything has a soul, whether human or not. If we don’t understand how human hearts, minds and personalities work, can we truly claim that humans have authenticity that machines don’t?

Ishiguro and Glas want to release Erica and her fellow robots into human society. Soon, Erica may be an essential part of our everyday life, as one of the new children of humanity.

Key credits

  • Director/Editor: Ilinca Calugareanu
  • Producer: Mara Adina
  • Executive producers for the Guardian: Charlie Phillips and Laurence Topham
  • This video is produced in collaboration with the Sundance Institute Short Documentary Fund supported by the John D and Catherine T MacArthur Foundation

You can also view the 14 min. film here.

Artworks generated by an AI system are to be sold at Christie’s auction house

KC Ifeanyi’s August 22, 2018 article for Fast Company may send a chill down some artists’ spines,

For the first time in its 252-year history, Christie’s will auction artwork generated by artificial intelligence.

Created by the French art collective Obvious, “Portrait of Edmond de Belamy” is part of a series of paintings of the fictional Belamy family that was created using a two-part algorithm. …

The portrait is estimated to sell anywhere between $7,000-$10,000, and Obvious says the proceeds will go toward furthering its algorithm.

… Famed collector Nicolas Laugero-Lasserre bought one of Obvious’s Belamy works in February, which could’ve been written off as a novel purchase where the story behind it is worth more than the piece itself. However, with validation from a storied auction house like Christie’s, AI art could shake the contemporary art scene.

“Edmond de Belamy” goes up for auction from October 23-25 [2018].

Jobs safe from automation? Are there any?

Michael Grothaus expresses more optimism about future job markets than I’m feeling in an August 30, 2018 article for Fast Company,

A 2017 McKinsey Global Institute study of 800 occupations across 46 countries found that by 2030, 800 million people will lose their jobs to automation. That’s one-fifth of the global workforce. A further one-third of the global workforce will need to retrain if they want to keep their current jobs as well. And looking at the effects of automation on American jobs alone, researchers from Oxford University found that “47 percent of U.S. workers have a high probability of seeing their jobs automated over the next 20 years.”

The good news is that while the above stats are rightly cause for concern, they also reveal that 53% of American jobs and four-fifths of global jobs are unlikely to be affected by advances in artificial intelligence and robotics. But just what are those fields? I spoke to three experts in artificial intelligence, robotics, and human productivity to get their automation-proof career advice.

Creatives

“Although I believe every single job can, and will, benefit from a level of AI or robotic influence, there are some roles that, in my view, will never be replaced by technology,” says Tom Pickersgill, …

Maintenance foreman

When running a production line, problems and bottlenecks are inevitable–and usually that’s a bad thing. But in this case, those unavoidable issues will save human jobs because their solutions will require human ingenuity, says Mark Williams, head of product at People First, …

Hairdressers

Mat Hunter, director of the Central Research Laboratory, a tech-focused co-working space and accelerator for tech startups, has seen startups trying to create all kinds of new technologies, which has given him insight into just what machines can and can’t pull off. It’s led him to believe that jobs like the humble hairdresser’s are safer from automation than, say, accountancy.

Therapists and social workers

Another automation-proof career is likely to be one involved in helping people heal the mind, says Pickersgill. “People visit therapists because there is a need for emotional support and guidance. This can only be provided through real human interaction–by someone who can empathize and understand, and who can offer advice based on shared experiences, rather than just data-driven logic.”

Teachers

Teachers are so often the unsung heroes of our society. They are overworked and underpaid–yet charged with one of the most important tasks anyone can have: nurturing the growth of young people. The good news for teachers is that their jobs won’t be going anywhere.

Healthcare workers

Doctors and nurses will also likely never see their jobs taken by automation, says Williams. While automation will no doubt enhance the treatments provided by doctors and nurses, the fact of the matter is that robots aren’t going to outdo healthcare workers’ ability to connect with patients and make them feel understood the way a human can.

Caretakers

While humans might be fine with robots flipping their burgers and artificial intelligence managing their finances, being comfortable with a robot nannying your children or looking after your elderly mother is a much bigger ask. And that’s to say nothing of the fact that even today’s most advanced robots don’t have the physical dexterity to perform the movements and actions carers do every day.

Grothaus does offer a proviso in his conclusion: certain types of jobs are relatively safe until developers learn to replicate qualities such as empathy in robots/AI.

It’s very confusing

There’s so much news about robots, artificial intelligence, androids, and cyborgs that it’s hard to keep up with it, let alone attempt to get a feeling for where all this might be headed. When you add the fact that the terms robot/artificial intelligence are often used interchangeably and that the distinction between robots/androids/cyborgs is not always clear, any attempt to peer into the future becomes even more challenging.

At this point I content myself with tracking the situation and finding definitions so I can better understand what I’m tracking. Carmen Wong’s August 23, 2018 posting on the Signals blog published by Canada’s Centre for Commercialization of Regenerative Medicine (CCRM) offers some useful definitions in the context of an article about the use of artificial intelligence in the life sciences, particularly in Canada (Note: Links have been removed),

Artificial intelligence (AI). Machine learning. To most people, these are just buzzwords and synonymous. Whether or not we fully understand what both are, they are slowly integrating into our everyday lives. Virtual assistants such as Siri? AI is at work. The personalized ads you see when you are browsing on the web or movie recommendations provided on Netflix? Thank AI for that too.

AI is defined as machines having intelligence that imitates human behaviour such as learning, planning and problem solving. A process used to achieve AI is called machine learning, where a computer uses lots of data to “train” or “teach” itself, without human intervention, to accomplish a pre-determined task. Essentially, the computer keeps on modifying its algorithm based on the information provided to get to the desired goal.

Another term you may have heard of is deep learning. Deep learning is a particular type of machine learning where algorithms are set up like the structure and function of human brains. It is similar to a network of brain cells interconnecting with each other.

Toronto has seen its fair share of media-worthy AI activity. The Government of Canada, Government of Ontario, industry and multiple universities came together in March 2018 to launch the Vector Institute, with the goal of using AI to promote economic growth and improve the lives of Canadians. In May, Samsung opened its AI Centre in the MaRS Discovery District, joining a network of Samsung centres located in California, United Kingdom and Russia.

There has been a boom in AI companies over the past few years, which span a variety of industries. This year’s ranking of the top 100 most promising private AI companies covers 25 fields with cybersecurity, enterprise and robotics being the hot focus areas.
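
As a toy illustration of the ‘train or teach itself’ loop Wong describes (my sketch, not anything from her posting), the single artificial neuron below repeatedly adjusts its weights to reduce its error on example data, i.e., trial-and-error modification of the algorithm in miniature,

```python
# A toy "machine learning" loop: one artificial neuron adjusts its weights
# over many passes through invented example data.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))              # 100 examples, 2 features
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # the pattern to be learned

w, b = np.zeros(2), 0.0
for _ in range(500):                       # repeated trial and error
    pred = 1 / (1 + np.exp(-(X @ w + b)))  # current guesses
    err = pred - y                         # how wrong each guess is
    w -= 0.1 * (X.T @ err) / len(y)        # modify the "algorithm"
    b -= 0.1 * err.mean()

print(f"accuracy: {((pred > 0.5) == y).mean():.2f}")  # climbs toward 1.00
```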

Wong goes on to explore AI deployment in the life sciences and concludes that human scientists and doctors will still be needed although she does note this in closing (Note: A link has been removed),

More importantly, empathy and support from a fellow human being could never be fully replaced by a machine (could it?), but maybe this will change in the future. We will just have to wait and see.

Artificial empathy is the term used in Lisa Morgan’s April 25, 2018 article for Information Week which unfortunately does not include any links to actual projects or researchers working on artificial empathy. Instead, the article is focused on how business interests and marketers would like to see it employed. FWIW, I have found a few references: (1) Artificial empathy Wikipedia essay (look for the references at the end of the essay for more) and (2) this open access article: Towards Artificial Empathy; How Can Artificial Empathy Follow the Developmental Pathway of Natural Empathy? by Minoru Asada.

Please let me know in the comments section of this blog should you have any insights on the matter.

An easier way to make highly ordered porous films for commercial sensors

An April 3, 2017 news item on Nanowerk describes Japanese research into a new technique for producing MOFs (metal–organic frameworks),

Osaka-based researchers developed a new method to create films of porous metal–organic frameworks fully aligned on inorganic substrates. The method is simple, requiring only that the substrate and an organic linker are mixed under mild conditions, and fast, producing perfectly aligned films within minutes. The films oriented fluorescent dye molecules within their pores, and the fluorescence response of these dyes was switched on or off simply by rotating the material in polarized light.

An April 3, 2017 Osaka University press release on the AlphaGalileo news service, which originated the news item, explains more about MOFs and gives some details about the new technique,

Metal–organic frameworks, or MOFs, are highly ordered crystalline structures made of metal ion nodes and organic molecule linkers. Many MOFs can take up and store gases, such as carbon dioxide or hydrogen, thanks to their porous, sponge-like structures.

MOFs are also potential chemical sensors. They can be designed to change color or display another optical signal if a particular molecule is taken up into the framework. However, most studies on MOFs are performed on tiny single crystals, which is not practical for the commercial development of these materials.

Chemists have now come a step closer to making commercially viable sensors that contain highly ordered MOFs, thanks to the collaboration of an international team of researchers at Osaka Prefecture University, Osaka University and Graz University of Technology. The method will allow researchers to fabricate large tailor-made MOF films on any substrate of any size, which will vastly improve their prospects for commercial development.

In a study recently published in Nature Materials and highlighted on the cover and in the ‘News and Views’ section of the journal, the Osaka-based researchers report a one-step method to prepare thin MOF films directly on inorganic copper hydroxide substrates. Using this method, the researchers produced large MOF films with areas of more than 1 cm² that were, for the first time, fully aligned with the crystal lattice of the underlying substrate.

Noting that microcrystals of copper hydroxide can be converted into MOFs by adding organic linker molecules under mild conditions, the researchers used the same strategy to create a thin MOF layer on larger copper hydroxide substrates. They carefully chose the carboxylic acid-based linker molecule 1,4-benzenedicarboxylic acid because it fit exactly to the spacing between the copper atoms on the substrate surface.

A MOF film began to grow on the copper hydroxide substrates within minutes of mixing it with the linker molecule, making this technique much easier and faster than previous step-wise approaches to build up MOF films. Using microscopy and X-ray diffraction techniques, the researchers found that the film was precisely oriented along the copper hydroxide lattice.

To demonstrate the unique optical behavior of their films, the researchers filled the MOF’s ordered pores with fluorescent molecules, which fluoresce when light is shone on them in a particular direction. When they shone polarized light on the ordered material, the researchers found that they could easily switch the fluorescence response on or off simply by rotating the material.
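
In case you’re wondering how simply rotating the material can switch fluorescence on and off, the idealized picture (my gloss, not the paper’s analysis) is that the dyes’ absorption dipoles all lie along one axis of the film, so the absorbed, and therefore emitted, intensity falls off with the angle θ between that axis and the light’s polarization according to a Malus-type law,

```latex
% Idealized photoselection picture (my notation, not the paper's):
% emission intensity vs. angle theta between the dye alignment axis
% and the polarization of the incoming light.
I(\theta) \approx I_{0}\cos^{2}\theta
```

so rotating the film by 90 degrees takes the emission from its maximum to, ideally, zero.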

Here’s a link to and a citation for the paper,

Centimetre-scale micropore alignment in oriented polycrystalline metal–organic framework films via heteroepitaxial growth by Paolo Falcaro, Kenji Okada, Takaaki Hara, Ken Ikigaki, Yasuaki Tokudome, Aaron W. Thornton, Anita J. Hill, Timothy Williams, Christian Doonan, & Masahide Takahashi. Nature Materials 16, 342–348  (2017) doi:10.1038/nmat4815 Published online 05 December 2016

This paper is behind a paywall.

Ishiguro’s robots and Swiss scientist question artificial intelligence at SXSW (South by Southwest) 2017

It seems unexpected to stumble across presentations on robots and on artificial intelligence at an entertainment conference such as South by Southwest (SXSW). Here’s why I thought so, from the SXSW Wikipedia entry (Note: Links have been removed),

South by Southwest (abbreviated as SXSW) is an annual conglomerate of film, interactive media, and music festivals and conferences that take place in mid-March in Austin, Texas, United States. It began in 1987, and has continued to grow in both scope and size every year. In 2011, the conference lasted for 10 days with SXSW Interactive lasting for 5 days, Music for 6 days, and Film running concurrently for 9 days.

Lifelike robots

The 2017 SXSW Interactive featured separate presentations by Japanese roboticist, Hiroshi Ishiguro (mentioned here a few times), and EPFL (École Polytechnique Fédérale de Lausanne; Switzerland) artificial intelligence expert, Marcel Salathé.

Ishiguro’s work is the subject of Harry McCracken’s March 14, 2017 article for Fast Company (Note: Links have been removed),

I’m sitting in the Japan Factory pavilion at SXSW in Austin, Texas, talking to two other attendees about whether human beings are more valuable than robots. I say that I believe human life to be uniquely precious, whereupon one of the others rebuts me by stating that humans allow cars to exist even though they kill humans.

It’s a reasonable point. But my fellow conventioneer has a bias: It’s a robot itself, with an ivory-colored, mask-like face and visible innards. So is the third participant in the conversation, a much more human automaton modeled on a Japanese woman and wearing a black-and-white blouse and a blue scarf.

We’re chatting as part of a demo of technologies developed by the robotics lab of Hiroshi Ishiguro, based at Osaka University, and Japanese telecommunications company NTT. Ishiguro has gained fame in the field by creating increasingly humanlike robots—that is, androids—with the ultimate goal of eliminating the uncanny valley that exists between people and robotic people.

I also caught up with Ishiguro himself at the conference—his second SXSW—to talk about his work. He’s a champion of the notion that people will respond best to robots who simulate humanity, thereby creating “a feeling of presence,” as he describes it. That gives him and his researchers a challenge that encompasses everything from technology to psychology. “Our approach is quite interdisciplinary,” he says, which is what prompted him to bring his work to SXSW.

A SXSW attendee talks about robots with two robots.

If you have the time, do read McCracken’s piece in its entirety.

You can find out more about the ‘uncanny valley’ in my March 10, 2011 posting about Ishiguro’s work if you scroll down about 70% of the way to find the ‘uncanny valley’ diagram and Masahiro Mori’s description of the concept he developed.

You can read more about Ishiguro and his colleague, Ryuichiro Higashinaka, on their SXSW biography page.

Artificial intelligence (AI)

In a March 15, 2017 EPFL press release by Hilary Sanctuary, scientist Marcel Salathé poses the question: Is Reliable Artificial Intelligence Possible?,

In the quest for reliable artificial intelligence, EPFL scientist Marcel Salathé argues that AI technology should be openly available. He will be discussing the topic at this year’s edition of South by Southwest on March 14th in Austin, Texas.

Will artificial intelligence (AI) change the nature of work? For EPFL theoretical biologist Marcel Salathé, the answer is invariably yes. To him, a more fundamental question that needs to be addressed is who owns that artificial intelligence?

“We have to hold AI accountable, and the only way to do this is to verify it for biases and make sure there is no deliberate misinformation,” says Salathé. “This is not possible if the AI is privatized.”

AI is both the algorithm and the data

So what exactly is AI? It is generally regarded as “intelligence exhibited by machines”. Today, it is highly task specific, specially designed to beat humans at strategic games like Chess and Go, or diagnose skin disease on par with doctors’ skills.

On a practical level, AI is implemented through what scientists call “machine learning”, which means using a computer to run specifically designed software that can be “trained”, i.e. process data with the help of algorithms and to correctly identify certain features from that data set. Like human cognition, AI learns by trial and error. Unlike humans, however, AI can process and recall large quantities of data, giving it a tremendous advantage over us.

Crucial to AI learning, therefore, is the underlying data. For Salathé, AI is defined by both the algorithm and the data, and as such, both should be publicly available.

Deep learning algorithms can be perturbed

Last year, Salathé created an algorithm to recognize plant diseases. With more than 50,000 photos of healthy and diseased plants in the database, the algorithm uses artificial intelligence to diagnose plant diseases with the help of your smartphone. As for human disease, a recent study by a Stanford Group on cancer showed that AI can be trained to recognize skin cancer slightly better than a group of doctors. The consequences are far-reaching: AI may one day diagnose our diseases instead of doctors. If so, will we really be able to trust its diagnosis?

These diagnostic tools use data sets of images to train and learn. But visual data sets can be perturbed in ways that prevent deep learning algorithms from correctly classifying images. Deep neural networks are highly vulnerable to visual perturbations that are practically impossible to detect with the naked eye, yet cause the AI to misclassify images.

In future implementations of AI-assisted medical diagnostic tools, these perturbations pose a serious threat. More generally, the perturbations are real and may already be affecting the filtered information that reaches us every day. These vulnerabilities underscore the importance of certifying AI technology and monitoring its reliability.

h/t phys.org March 15, 2017 news item
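
The best-known recipe for perturbations of this kind is the fast gradient sign method (FGSM) introduced by Ian Goodfellow and colleagues in 2015. The press release doesn’t name a specific attack, so treat this PyTorch sketch as illustrative rather than as a description of the studies mentioned,

```python
# A minimal FGSM sketch: nudge every pixel slightly in the direction that
# increases the classifier's loss. `model`, a batched `image` with values
# in [0, 1], and `label` are assumed to be supplied by the caller.
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.01):
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    perturbed = image + epsilon * image.grad.sign()  # imperceptible nudge
    return perturbed.clamp(0, 1).detach()
```

With a small enough epsilon, the altered image looks identical to the original to a human yet can flip the network’s classification.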

As I noted earlier, these are not the kind of presentations you’d expect at an ‘entertainment’ festival.

Peripheral nerves (a rat’s) regenerated when wrapped with nanomesh fiber

A Feb. 28, 2017 news item on Nanowerk announces a proposed nerve regeneration technique (Note: A link has been removed),

A research team consisting of Mitsuhiro Ebara, MANA associate principal investigator, Mechanobiology Group, NIMS, and Hiroyuki Tanaka, assistant professor, Orthopaedic Surgery, Osaka University Graduate School of Medicine, developed a mesh which can be wrapped around injured peripheral nerves to facilitate their regeneration and restore their functions (Acta Biomaterialia, “Electrospun nanofiber sheets incorporating methylcobalamin promote nerve regeneration and functional recovery in a rat sciatic nerve crush injury model”).

This mesh, which is very soft and degrades in the body, incorporates vitamin B12—a substance vital to the normal functioning of nervous systems. When the mesh was applied to injured sciatic nerves in rats, it promoted nerve regeneration and recovery of their motor and sensory functions.

A Feb. 27, 2017 Japan National Institute for Materials Science (NIMS) press release for Osaka University, which originated the news item, provides more detail,

Artificial nerve conduits have been developed in the past to treat peripheral nerve injuries, but they merely form a cross-link to the injury site and do not promote faster nerve regeneration. Moreover, their application is limited to relatively few patients suffering from a complete loss of nerve continuity. Vitamin B12 has been known to facilitate nerve regeneration, but oral administration of it has not proven to be very effective, and no devices capable of delivering vitamin B12 directly to affected sites had been available. Therefore, it had been hoped to develop such medical devices to actively promote nerve regeneration in the many patients who suffer from nerve injuries but have not lost nerve continuity.

The NIMS-Osaka University joint research team recently developed a special mesh that can be wrapped around an injured nerve which releases vitamin B12 (methylcobalamin) until the injury heals. By developing very fine mesh fibers (several hundred nanometers in diameter) and reducing the crystallinity of the fibers, the team successfully created a very soft mesh that can be wrapped around a nerve. This mesh is made of a biodegradable plastic which, when implanted in animals, is eventually eliminated from the body. In fact, experiments demonstrated that application of the mesh directly to injured sciatic nerves in rats resulted in regeneration of axons and recovery of motor and sensory functions within six weeks.

The team is currently negotiating with a pharmaceutical company and other organizations to jointly study clinical application of the mesh as a medical device to treat peripheral nerve disorders, such as carpal tunnel syndrome (CTS).

This study was supported by the JSPS KAKENHI program (Grant Number JP15K10405) and AMED’s Project for Japan Translational and Clinical Research Core Centers (also known as Translational Research Network Program).

Figure 1. Conceptual diagram showing a nanofiber mesh incorporating vitamin B12 and its application to treat a peripheral nerve injury.

Here’s a link to and a citation for the paper,

Electrospun nanofiber sheets incorporating methylcobalamin promote nerve regeneration and functional recovery in a rat sciatic nerve crush injury model by Koji Suzuki, Hiroyuki Tanaka, Mitsuhiro Ebara, Koichiro Uto, Hozo Matsuoka, Shunsuke Nishimoto, Kiyoshi Okada, Tsuyoshi Murase, Hideki Yoshikawa. Acta Biomaterialia http://dx.doi.org/10.1016/j.actbio.2017.02.004 Available online 5 February 2017

This paper is behind a paywall.

nano tech 2017 being held in Tokyo from February 15-17, 2017

I found some news about the Alberta technology scene in the programme for Japan’s nano tech 2017 exhibition and conference to be held Feb. 15 – 17, 2017 in Tokyo. First, here’s more about the show in Japan from a Jan. 17, 2017 nano tech 2017 press release on Business Wire (also on Yahoo News),

The nano tech executive committee (chairman: Tomoji Kawai, Specially Appointed Professor, Osaka University) will be holding “nano tech 2017” – one of the world’s largest nanotechnology exhibitions, now in its 16th year – on February 15, 2017, at the Tokyo Big Sight convention center in Japan. 600 organizations (including over 40 first-time exhibitors) from 23 countries and regions are set to exhibit at the event in 1,000 booths, demonstrating revolutionary and cutting edge core technologies spanning such industries as automotive, aerospace, environment/energy, next-generation sensors, cutting-edge medicine, and more. Including attendees at the concurrently held exhibitions, the total number of visitors to the event is expected to exceed 50,000.

The theme of this year’s nano tech exhibition is “Open Nano Collaboration.” By bringing together organizations working in a wide variety of fields, the business matching event aims to promote joint development through cross-field collaboration.

Special Symposium: “Nanotechnology Contributing to the Super Smart Society”

Each year nano tech holds a Special Symposium, in which industry specialists from top organizations from Japan and abroad speak about the issues surrounding the latest trends in nanotech. The themes of this year’s Symposium are Life Nanotechnology, Graphene, AI/IoT, Cellulose Nanofibers, and Materials Informatics.

Notable sessions include:

Life Nanotechnology
“Development of microRNA liquid biopsy for early detection of cancer”
Takahiro Ochiya, National Cancer Center Research Institute Division of Molecular and Cellular Medicine, Chief

AI / IoT
“AI Embedded in the Real World”
Hideki Asoh, AIST Deputy Director, Artificial Intelligence Research Center

Cellulose Nanofibers [emphasis mine]
“The Current Trends and Challenges for Industrialization of Nanocellulose”
Satoshi Hirata, Nanocellulose Forum Secretary-General

Materials Informatics
“Perspective of Materials Research”
Hideo Hosono, Tokyo Institute of Technology Professor

View the full list of sessions:
>> http://nanotech2017.icsbizmatch.jp/Presentation/en/Info/List#main_theater

nano tech 2017 Homepage:
>> http://nanotechexpo.jp/

nano tech 2017, the 16th International Nanotechnology Exhibition & Conference
Date: February 15-17, 2017, 10:00-17:00
Venue: Tokyo Big Sight (East Halls 4-6 & Conference Tower)
Organizer: nano tech Executive Committee, JTB Communication Design

As you may have guessed, the Alberta information can be found in the Cellulose Nanofibers session. From the conference/seminar program page (scroll down about 25% of the way to find the Alberta presentation),

Production and Applications Development of Cellulose Nanocrystals (CNC) at InnoTech Alberta

Behzad (Benji) Ahvazi
InnoTech Alberta Team Lead, Cellulose Nanocrystals (CNC)

[ Abstract ]

The production and use of cellulose nanocrystals (CNC) is an emerging technology that has gained considerable interest from a range of industries that are working towards increased use of “green” biobased materials. The construction of one-of-a-kind CNC pilot plant [emphasis mine] at InnoTech Alberta and production of CNC samples represents a critical step for introducing the cellulosic based biomaterials to industrial markets and provides a platform for the development of novel high value and high volume applications. Major key components including feedstock, acid hydrolysis formulation, purification, and drying processes were optimized significantly to reduce the operation cost. Fully characterized CNC samples were provided to a large number of academic and research laboratories including various industries domestically and internationally for applications development.

[ Profile ]

Dr. Ahvazi completed his Bachelor of Science in Honours program at the Department of Chemistry and Biochemistry and graduated with distinction at Concordia University in Montréal, Québec. His Ph.D. program was completed in 1998 at McGill Pulp and Paper Research Centre in the area of macromolecules with solid background in Lignocellulosic, organic wood chemistry as well as pulping and paper technology. After completing his post-doctoral fellowship, he joined FPInnovations formally [formerly?] known as PAPRICAN as a research scientist (R&D) focusing on a number of confidential chemical pulping and bleaching projects. In 2006, he worked at Tembec as a senior research scientist and as a Leader in Alcohol and Lignin (R&D). In April 2009, he held a position as a Research Officer in both National Bioproducts (NBP1 & NBP2) and Industrial Biomaterials Flagship programs at National Research Council Canada (NRC). During his tenure, he had directed and performed innovative R&D activities within both programs on extraction, modification, and characterization of biomass as well as polymer synthesis and formulation for industrial applications. Currently, he is working at InnoTech Alberta as Team Lead for Biomass Conversion and Processing Technologies.

Canada scene update

InnoTech Alberta was until Nov. 1, 2016 known as Alberta Innovates – Technology Futures. Here’s more about InnoTech Alberta from the Alberta Innovates … home page,

Effective November 1, 2016, Alberta Innovates – Technology Futures is one of four corporations now consolidated into Alberta Innovates and a wholly owned subsidiary called InnoTech Alberta.

You will find all the existing programs, services and information offered by InnoTech Alberta on this website. To access the basic research funding and commercialization programs previously offered by Alberta Innovates – Technology Futures, explore here. For more information on Alberta Innovates, visit the new Alberta Innovates website.

As for InnoTech Alberta’s “one-of-a-kind CNC pilot plant,” I’d like to know more about its one-of-a-kind status since there are two other CNC production plants in Canada. (Is the status a consequence of regional chauvinism or a writer unfamiliar with the topic?) Getting back to the topic, the largest company (and I believe the first) with a CNC plant was CelluForce, which started as a joint venture between Domtar and FPInnovations and was powered by some very heavy investment from the government of Canada. (See my July 16, 2010 posting about the construction of the plant in Quebec and my June 6, 2011 posting about the newly named CelluForce.) Interestingly, CelluForce will have a booth at nano tech 2017 (according to its Jan. 27, 2017 news release), although the company doesn’t seem to have any presentations on the schedule. The other Canadian company is Blue Goose Biorefineries in Saskatchewan. Here’s more about Blue Goose from the company website’s home page,

Blue Goose Biorefineries Inc. (Blue Goose) is pleased to introduce our R3TM process. R3TM technology incorporates green chemistry to fractionate renewable plant biomass into high value products.

Traditionally, separating lignocellulosic biomass required high temperatures, harsh chemicals, and complicated processes. R3TM breaks this costly compromise to yield high quality cellulose, lignin and hemicellulose products.

The robust and environmentally friendly R3TM technology has numerous applications. Our current product focus is cellulose nanocrystals (CNC). Cellulose nanocrystals are “Mother Nature’s Building Blocks” possessing unique properties. These unique properties encourage the design of innovative products from a safe, inherently renewable, sustainable, and carbon neutral resource.

Blue Goose assists companies and research groups in the development of applications for CNC, by offering CNC for sale without Intellectual Property restrictions. [emphasis mine]

Bravo to Blue Goose! Unfortunately, I was not able to determine if the company will be at nano tech 2017.

One final comment: there was some excitement about CNC a while back, when more than one person contacted me asking for information about how to buy CNC. I wasn’t able to be helpful because there was, apparently, an attempt by producers to control sales and limit CNC access to a select few for competitive advantage. Coincidentally or not, CelluForce developed a stockpile which has persisted for some years, as I noted in my Aug. 17, 2016 posting (scroll down about 70% of the way), where the company announced, amongst other events, that it expected to deplete its stockpile by mid-2017.

Watching motor proteins at work

Researchers in the UK and in Japan have described these motor proteins as ‘swinging on monkey bars’,

A Sept. 14, 2015 news item on Nanowerk provides more information about the motor protein observations,

These proteins are vital to complex life, forming the transport infrastructure that allows different parts of cells to specialise in particular functions. Until now, the way they move has never been directly observed.

Researchers at the University of Leeds and in Japan used electron microscopes to capture images of the largest type of motor protein, called dynein, during the act of stepping along its molecular track.

A Sept. 14, 2015 Leeds University press release (also on EurekAlert*), which originated the news item, expands on the theme with what amounts to a transcript of sorts for the video (Note: Links have been removed),

Dr Stan Burgess, at the University of Leeds’ School of Molecular and Cellular Biology, who led the research team, said: “Dynein has two identical motors tied together and it moves along a molecular track called a microtubule. It drives itself along the track by alternately grabbing hold of a binding site, executing a power stroke, then letting go, like a person swinging on monkey bars.

“Previously, dynein movement had only been tracked by attaching fluorescent molecules to the proteins and observing the fluorescence using very powerful light microscopes. It was a bit like tracking vehicles from space with GPS. It told us where they were, their speed and for how long they ran, stopped and so on, but we couldn’t see the molecules in action themselves. These are the first images of these vital processes.”

An understanding of motor proteins is important to medical research because of their fundamental role in complex cellular life. Many viruses hijack motor proteins to hitch a ride to the nucleus for replication. Cell division is driven by motor proteins and so insights into their mechanics could be relevant to cancer research. Some motor neurone diseases are also associated with disruption of motor protein traffic.

The team at Leeds, working within the world-leading Astbury Centre for Structural Molecular Biology, combined purified microtubules with purified dynein motors and added the chemical fuel ATP (adenosine triphosphate) to power the motor.

Dr Hiroshi Imai, now Assistant Professor in the Department of Biological Sciences at Chuo University, Japan, carried out the experiments while working at the University of Leeds.

He explained: “We set the dyneins running along their tracks and then we froze them in ‘mid-stride’ by cooling them at about a million degrees a second, fast enough to prevent the water from forming ice crystals as it solidified. Then using a cryo-electron microscope we took many thousands of images of the motors caught during the act of stepping. By combining many images of individual motors, we were able to sharpen up our picture of the dynein and build up a dynamic idea of how it moved. It is a bit like figuring out how to swing along monkey bars by studying photographs of many people swinging on them.”

Dr Burgess said: “Our most striking discovery was the existence of a hinge between the long, thin stalk and the ‘grappling hook’, like the wrist between a human arm and hand. This allows a lot of variation in the angle of attachment of the motor to its track.

“Each of the two arms of a dynein motor protein is about 25 nanometres (0.000025 millimetre) long, while the binding sites it attaches to are only 8 nanometres apart. That means dynein can reach not only the next rung but the one after that and the one after that and appears to give it flexibility in how it moves along the ‘track’.”

Dynein is not only the biggest but also the most versatile of the motor proteins in living cells and, like all motor proteins, is vital to life. Motor proteins transport cargoes and hold many cellular components in position within the cell. For instance, dynein is responsible for carrying messages from the tips of active nerve cells back to the nucleus and these messages keep the nerve cells alive.

Co-author Peter Knight, Professor of Molecular Contractility in the University of Leeds’ School of Molecular and Cellular Biology, said: “If a cell is like a city, these are like the truckers on its road and rail networks. If you didn’t have a transport system, you couldn’t have specialised regions. Every part of the cell would be doing the same thing and that would mean you could not have complex life.”

“Dynein is the multi-purpose vehicle of cellular transport. Other motor proteins, called kinesins and myosins, are much smaller and have specific functions, but dynein can turn its hand to a lot of different of functions,” Professor Knight said.

For instance, in the motor neurone connecting the central nervous system to the big toe—which is a single cell a metre long—dynein provides the transport from the toe back to the nucleus. Another vital role is in the movement of cells.

Dr Burgess said: “During brain development, neurones must crawl into their correct position and dynein molecules in this instance grab hold of the nucleus and pull it along with the moving mass of the cell. If they didn’t, the nucleus would be left behind and the cytoplasm would crawl away.”

The study involved researchers from the University of Leeds and Japan’s Waseda and Osaka universities, as well as the Quantitative Biology Center at Japan’s Riken research institute and the Japan Science and Technology Agency (JST). The research was funded by the Human Frontiers Science Program and the Biotechnology and Biological Sciences Research Council (BBSRC).
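
The image-combining step Dr Imai describes rests on a simple statistical fact: averaging N noisy snapshots of the same object cuts the random noise by a factor of roughly √N. Here’s a toy demonstration of mine (the real cryo-EM workflow also aligns and classifies the particle images before averaging),

```python
# Toy single-particle averaging: the random noise in the average of N
# images shrinks roughly as 1/sqrt(N). Alignment and classification,
# essential in practice, are skipped here.
import numpy as np

rng = np.random.default_rng(1)
signal = np.zeros((32, 32))
signal[12:20, 12:20] = 1.0                 # the "motor" we want to see

snapshots = [signal + rng.normal(0, 2.0, signal.shape) for _ in range(5000)]
average = np.mean(snapshots, axis=0)

print(np.std(snapshots[0] - signal))  # noise in a single image (~2.0)
print(np.std(average - signal))       # noise in the average (~0.03)
```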

Here’s a link to and a citation for the paper,

Direct observation shows superposition and large scale flexibility within cytoplasmic dynein motors moving along microtubules by Hiroshi Imai, Tomohiro Shima, Kazuo Sutoh, Matthew L. Walker, Peter J. Knight, Takahide Kon, & Stan A. Burgess. Nature Communications 6, Article number: 8179  doi:10.1038/ncomms9179 Published 14 September 2015

This paper is open access.

*The EurekAlert link added Sept. 15, 2015 at 1200 hours PST.

What about the heart? and the quest to make androids lifelike

Japanese scientist Hiroshi Ishiguro has been mentioned here several times in the context of ‘lifelike’ robots. Accordingly, it’s no surprise to see Ishiguro’s name in a June 24, 2014 news item about uncannily lifelike robotic tour guides in a Tokyo museum (CBC (Canadian Broadcasting Corporation) News online),

The new robot guides at a Tokyo museum look so eerily human and speak so smoothly they almost outdo people — almost.

Japanese robotics expert Hiroshi Ishiguro, an Osaka University professor, says they will be useful for research on how people interact with robots and on what differentiates the person from the machine.

“Making androids is about exploring what it means to be human,” he told reporters Tuesday [June 24, 2014], “examining the question of what is emotion, what is awareness, what is thinking.”

In a demonstration, the remote-controlled machines moved their pink lips in time to a voice-over, twitched their eyebrows, blinked and swayed their heads from side to side. They stay seated but can move their hands.

Ishiguro and his robots were also mentioned in a May 29, 2014 article by Carey Dunne for Fast Company. The article concerned a photographic project of Luisa Whitton’s.

In her series “What About the Heart?,” British photographer Luisa Whitton documents one of the creepiest niches of the Japanese robotics industry–androids. Here, an eerily lifelike face made for a robot. [downloaded from http://www.fastcodesign.com/3031125/exposure/japans-uncanny-quest-to-humanize-robots?partner=rss]

From Dunne’s May 29, 2014 article (Note: Links have been removed),

We’re one step closer to a robot takeover. At least, that’s one interpretation of “What About the Heart?” a new series by British photographer Luisa Whitton. In 17 photos, Whitton documents one of the creepiest niches of the Japanese robotics industry–androids. These are the result of a growing group of scientists trying to make robots look like living, breathing people. Their efforts pose a question that’s becoming more relevant as Siri and her robot friends evolve: what does it mean to be human as technology progresses?

Whitton spent several months in Japan working with Hiroshi Ishiguro, a scientist who has constructed a robotic copy of himself. Ishiguro’s research focused on whether his robotic double could somehow possess his “Sonzai-Kan,” a Japanese term that translates to the “presence” or “spirit” of a person. It’s work that blurs the line between technology, philosophy, psychology, and art, using real-world studies to examine existential issues once reserved for speculation by the likes of Philip K. Dick or Sigmund Freud. And if this sounds like a sequel to Blade Runner, it gets weirder: after Ishiguro aged, he had plastic surgery so that his face still matched that of his younger, mechanical doppelganger.

I profiled Ishiguro’s robots (then called Geminoids) in a March 10, 2011 posting which featured a Danish philosopher, Henrik Scharfe, who’d commissioned a Geminoid identical to himself for research purposes. He doesn’t seem to have published any papers about his experience but there is this interview of Scharfe and his Geminoid twin by Aldith Hunkar (she’s very good) at a 2011 TEDxAmsterdam,

Mary King’s 2007 research project, Robots and AI in Japan and The West, notes a contrast and provides an excellent primer (Note: A link has been removed),

The Japanese scientific approach and expectations of robots and AI are far more down to earth than those of their Western counterparts. Certainly, future predictions made by Japanese scientists are far less confrontational or sci-fi-like. In an interview via email, Canadian technology journalist Tim N. Hornyak described the Japanese attitude towards robots as being “that of the craftsman, not the philosopher” and cited this as the reason for “so many rosy imaginings of a future Japan in which robots are a part of people’s everyday lives.”

Hornyak, who is author of “Loving the Machine: The Art and Science of Japanese Robots,” acknowledges that apocalyptic visions do appear in manga and anime, but emphasizes that such forecasts do not exist in government circles or within Japanese companies. Hornyak also added that while AI has for many years taken a back seat to robot development in Japan, this situation is now changing. Honda, for example, is working on giving better brains to Asimo, which is already the world’s most advanced humanoid robot. Japan is also already legislating early versions of Asimov’s laws by introducing design requirements for next-generation mobile robots.

It does seem there might be more interest in the philosophical issues in Japan these days, or possibly it’s a reflection of Ishiguro’s own current concerns (from Dunne’s May 29, 2014 article),

The project’s title derives from a discussion with Ishiguro about what it means to be human. “The definition of human will be more complicated,” Ishiguro said.

Dunne reproduces a portion of Whitton’s statement describing her purpose for these photographs,

Through Ishiguro, Whitton got in touch with a number of other scientists working on androids. “In the photographs, I am trying to subvert the traditional formula of portraiture and allure the audience into a debate on the boundaries that determine the dichotomy of the human/not human,” she writes in her artist statement. “The photographs become documents of objects that sit between scientific tool and horrid simulacrum.”

I’m not sure what she means by “horrid simulacrum” but she seems to be touching on the concept of the ‘uncanny valley’. Here’s a description I provided in a May 31, 2013 posting about animator Chris Landreth and his explorations of that valley within the context of his animated film, Subconscious Password,

Landreth also discusses the ‘uncanny valley’ and how he deliberately cast his film into that valley. For anyone who’s unfamiliar with the ‘uncanny valley’ I wrote about it in a Mar. 10, 2011 posting concerning Geminoid robots,

It seems that researchers believe that the ‘uncanny valley’ doesn’t necessarily have to exist forever and at some point, people will accept humanoid robots without hesitation. In the meantime, here’s a diagram of the ‘uncanny valley’,

From the article on Android Science by Masahiro Mori (translated by Karl F. MacDorman and Takashi Minato)

Here’s what Mori (the person who coined the term) had to say about the ‘uncanny valley’ (from Android Science),

Recently there are many industrial robots, and as we know the robots do not have a face or legs, and just rotate or extend or contract their arms, and they bear no resemblance to human beings. Certainly the policy for designing these kinds of robots is based on functionality. From this standpoint, the robots must perform functions similar to those of human factory workers, but their appearance is not evaluated. If we plot these industrial robots on a graph of familiarity versus appearance, they lie near the origin (see Figure 1 [above]). So they bear little resemblance to a human being, and in general people do not find them to be familiar. But if the designer of a toy robot puts importance on a robot’s appearance rather than its function, the robot will have a somewhat humanlike appearance with a face, two arms, two legs, and a torso. This design lets children enjoy a sense of familiarity with the humanoid toy. So the toy robot is approaching the top of the first peak.

Of course, human beings themselves lie at the final goal of robotics, which is why we make an effort to build humanlike robots. For example, a robot’s arms may be composed of a metal cylinder with many bolts, but to achieve a more humanlike appearance, we paint over the metal in skin tones. These cosmetic efforts cause a resultant increase in our sense of the robot’s familiarity. Some readers may have felt sympathy for handicapped people they have seen who attach a prosthetic arm or leg to replace a missing limb. But recently prosthetic hands have improved greatly, and we cannot distinguish them from real hands at a glance. Some prosthetic hands attempt to simulate veins, muscles, tendons, finger nails, and finger prints, and their color resembles human pigmentation. So maybe the prosthetic arm has achieved a degree of human verisimilitude on par with false teeth. But this kind of prosthetic hand is too real and when we notice it is prosthetic, we have a sense of strangeness. So if we shake the hand, we are surprised by the lack of soft tissue and cold temperature. In this case, there is no longer a sense of familiarity. It is uncanny. In mathematical terms, strangeness can be represented by negative familiarity, so the prosthetic hand is at the bottom of the valley. So in this case, the appearance is quite human like, but the familiarity is negative. This is the uncanny valley.
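Since Mori’s diagram itself isn’t reproduced here, a rough stand-in may help. The following Python/matplotlib sketch draws the curve he describes: familiarity climbs with human likeness, reaches a first peak (the humanoid toy), plunges into the valley just short of full human likeness, then recovers. Every number in it is invented purely to produce that shape:

# Schematic of Mori's uncanny valley. The shape follows his description;
# all numeric values are invented solely to draw the curve.
import numpy as np
import matplotlib.pyplot as plt

likeness = np.linspace(0.0, 1.0, 500)  # 0 = industrial robot, 1 = healthy human
familiarity = (likeness
               + 0.35 * np.exp(-((likeness - 0.55) ** 2) / 0.01)    # first peak: humanoid toy
               - 1.60 * np.exp(-((likeness - 0.85) ** 2) / 0.003))  # the dip: the uncanny valley

plt.plot(likeness, familiarity)
plt.axhline(0.0, color="grey", linewidth=0.5)  # negative familiarity = strangeness
plt.annotate("uncanny valley", xy=(0.85, familiarity.min()),
             xytext=(0.4, -0.6), arrowprops={"arrowstyle": "->"})
plt.xlabel("human likeness (appearance)")
plt.ylabel("familiarity")
plt.title("Schematic of Mori's uncanny valley (invented values)")
plt.show()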

It seems that Mori is suggesting that as the differences between the original and the simulacrum become fewer and fewer, the ‘uncanny valley’ will disappear. It’s possible, but I suspect that before that day occurs, those of us who were brought up in a world without synthetic humans (androids) may experience an intensification of the feelings aroused by an encounter with the uncanny valley even as it disappears. For those who’d like a preview, check out Luisa Whitton’s What About The Heart? project.

Journal of Responsible Innovation is launched and there’s a nanotechnology connection

According to an Oct. 30, 2013 news release from the Taylor & Francis Group, there’s a new journal being launched, which is good news for anyone looking to get their research or creative work (provided it retains scholarly integrity) published in a journal focused on emerging technologies and innovation,

Journal of Responsible Innovation will focus on intersections of ethics, societal outcomes, and new technologies: New to Routledge for 2014 [Note: Routledge is a Taylor & Francis Group brand]

Scholars and practitioners in the emerging interdisciplinary field known as “responsible innovation” now have a new place to publish their work. The Journal of Responsible Innovation (JRI) will offer an opportunity to articulate, strengthen, and critique perspectives about the role of responsibility in the research and development process. JRI will also provide a forum for discussions of ethical, social and governance issues that arise in a society that places a great emphasis on innovation.

Professor David Guston, director of the Center for Nanotechnology in Society at Arizona State University and co-director of the Consortium for Science, Policy and Outcomes, is the journal’s founding editor-in-chief. [emphasis mine] The Journal will publish three issues each year, beginning in early 2014.

“Responsible innovation isn’t necessarily a new concept, but a research community is forming and we’re starting to get real traction in the policy world,” says Guston. “It is our hope that the journal will help solidify what responsible innovation can mean in both academic and industrial laboratories as well as in governments.”

“Taylor & Francis have been working with the scholarly community for over two centuries, and over the past 20 years we have launched more new journals than any other publisher, all offering peer-reviewed, cutting-edge research,” adds Editorial Director Richard Steele. “We are proud to be working with David Guston and colleagues to create a lively forum in which to publish and debate research on responsible technological innovation.”

An emerging and interdisciplinary field

The term “responsible innovation” is often associated with emerging technologies—for example, nanotechnology, synthetic biology, geoengineering, and artificial intelligence—due to their uncertain but potentially revolutionary influence on society. [emphasis mine] Responsible innovation represents an attempt to think through the ethical and social complexities of these technologies before they become mainstream. And due to the broad impacts these technologies may have, responsible innovation often involves people working in a variety of roles in the innovation process.

Bearing this interdisciplinarity in mind, the Journal of Responsible Innovation (JRI) will publish not only traditional journal articles and research reports, but also reviews and perspectives on current political, technical, and cultural events. JRI will publish authors from the social sciences and the natural sciences, from ethics and engineering, and from law, design, business, and other fields. It especially hopes to see collaborations across these fields.

“We want JRI to help organize a research network focused around complex societal questions,” Guston says. “Work in this area has tended to be scattered across many journals and disciplines. We’d like to bring those perspectives together and start sharing our research more effectively.”

Now accepting manuscripts

JRI is now soliciting submissions from scholars and practitioners interested in research questions and public issues related to responsible innovation. [emphasis mine] The journal seeks traditional research articles; perspectives or reviews containing opinion or critique of timely issues; and pedagogical approaches to teaching and learning responsible innovation. More information about the journal and the submission process can be found at www.tandfonline.com/tjri.

About The Center for Nanotechnology in Society at ASU

The Center for Nanotechnology in Society at ASU (CNS-ASU) is the world’s largest center on the societal aspects of nanotechnology. CNS-ASU develops programs that integrate academic and societal concerns in order to better understand how to govern new technologies, from their birth in the laboratory to their entrance into the mainstream.

About Taylor & Francis Group

Taylor & Francis Group partners with researchers, scholarly societies, universities and libraries worldwide to bring knowledge to life. As one of the world’s leading publishers of scholarly journals, books, ebooks and reference works, our content spans all areas of Humanities, Social Sciences, Behavioural Sciences, Science, and Technology and Medicine.

From our network of offices in Oxford, New York, Philadelphia, Boca Raton, Boston, Melbourne, Singapore, Beijing, Tokyo, Stockholm, New Delhi and Johannesburg, Taylor & Francis staff provide local expertise and support to our editors, societies and authors and tailored, efficient customer service to our library colleagues.

You can find out more about the Journal of Responsible Innovation here, including information for would-be contributors,

JRI invites three kinds of written contributions: research articles of 6,000 to 10,000 words in length, inclusive of notes and references, that communicate original theoretical or empirical investigations; perspectives of approximately 2,000 words in length that communicate opinions, summaries, or reviews of timely issues, publications, cultural or social events, or other activities; and pedagogy pieces, communicating at appropriate length experience in or studies of teaching, training, and learning related to responsible innovation in formal (e.g., classroom) and informal (e.g., museum) environments.

JRI is open to alternative styles or genres of writing beyond the traditional research paper or report, including creative or narrative nonfiction, dialogue, and first-person accounts, provided that scholarly completeness and integrity are retained. [emphases mine] As the journal’s online environment evolves, JRI intends to invite other kinds of contributions that could include photo-essays, videos, etc. [emphasis mine]

I like to check out the editorial board for these things (from the JRI’s Editorial board webpage; Note: Links have been removed),

Editor-in-Chief

David H. Guston, Arizona State University, USA

Associate Editors

Erik Fisher, Arizona State University, USA
Armin Grunwald, ITAS, Karlsruhe Institute of Technology, Germany
Richard Owen, University of Exeter, UK
Tsjalling Swierstra, Maastricht University, the Netherlands
Simone van der Burg, University of Twente, the Netherlands

Editorial Board

Wiebe Bijker, Maastricht University, the Netherlands
Francesca Cavallaro, Fundacion Tecnalia Research & Innovation, Spain
Heather Douglas, University of Waterloo, Canada
Weiwen Duan, Chinese Academy of Social Sciences, China
Ulrike Felt, University of Vienna, Austria
Philippe Goujon, University of Namur, Belgium
Jonathan Hankins, Bassetti Foundation, Italy
Aharon Hauptman, Tel Aviv University, Israel
Rachelle Hollander, National Academy of Engineering, USA
Maja Horst, University of Copenhagen, Denmark
Noela Invernizzi, Federal University of Parana, Brazil
Julian Kinderlerer, University of Cape Town, South Africa
Ralf Lindner, Fraunhofer Institute, Germany
Philip Macnaghten, Durham University, UK
Andrew Maynard, University of Michigan, USA
Carl Mitcham, Colorado School of Mines, USA
Sachin Chaturvedi, Research and Information System for Developing Countries, India
René von Schomberg, European Commission, Belgium
Doris Schroeder, University of Central Lancashire, UK
Kevin Urama, African Technology Policy Studies Network, Kenya
Frank Vanclay, University of Groningen, the Netherlands
Jeroen van den Hoven, Delft University of Technology, the Netherlands
Fern Wickson, GenØk Centre for Biosafety, Norway
Go Yoshizawa, Osaka University, Japan

Good luck to the publishers and to those of you who will be making submissions. As for anyone who may be as curious as I was about the connection between Routledge and Taylor & Francis, go here and scroll down about 75% of the page (briefly, Routledge is a brand).