
Robots in Vancouver and in Canada (two of two)

This is the second of a two-part posting about robots in Vancouver and Canada. The first part included a definition, a brief mention of a robot ethics quandary, and sexbots. This part is all about the future. (Part one is here.)

Canadian Robotics Strategy

Meetings were held Sept. 28 – 29, 2017 in, surprisingly, Vancouver. (For those who don’t know, this is surprising because most of the robotics and AI research seems to be concentrated in eastern Canada. If you don’t believe me, take a look at the speaker list for Day 2 or the ‘Canadian Stakeholder’ meeting day.) From the NSERC (Natural Sciences and Engineering Research Council) events page of the Canadian Robotics Network,

Join us as we gather robotics stakeholders from across the country to initiate the development of a national robotics strategy for Canada. Sponsored by the Natural Sciences and Engineering Research Council of Canada (NSERC), this two-day event coincides with the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2017) in order to leverage the experience of international experts as we explore Canada’s need for a national robotics strategy.

Where
Vancouver, BC, Canada

When
Thursday September 28 & Friday September 29, 2017 — Save the date!

Download the full agenda and speakers’ list here.

Objectives

The purpose of this two-day event is to gather members of the robotics ecosystem from across Canada to initiate the development of a national robotics strategy that builds on our strengths and capacities in robotics, and is uniquely tailored to address Canada’s economic needs and social values.

This event has been sponsored by the Natural Sciences and Engineering Research Council of Canada (NSERC) and is supported in kind by the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2017) as an official Workshop of the conference. The first of two days coincides with IROS 2017 – one of the premier robotics conferences globally – in order to leverage the experience of international robotics experts as we explore Canada’s need for a national robotics strategy here at home.

Who should attend

Representatives from industry, research, government, startups, investment, education, policy, law, and ethics who are passionate about building a robust and world-class ecosystem for robotics in Canada.

Program Overview

Download the full agenda and speakers’ list here.

DAY ONE: IROS Workshop 

“Best practices in designing effective roadmaps for robotics innovation”

Thursday September 28, 2017 | 8:30am – 5:00pm | Vancouver Convention Centre

Morning Program: “Developing robotics innovation policy and establishing key performance indicators that are relevant to your region” Leading international experts share their experience designing robotics strategies and policy frameworks in their regions and explore international best practices. Opening Remarks by Prof. Hong Zhang, IROS 2017 Conference Chair.

Afternoon Program: “Understanding the Canadian robotics ecosystem” Canadian stakeholders from research, industry, investment, ethics and law provide a collective overview of the Canadian robotics ecosystem. Opening Remarks by Ryan Gariepy, CTO of Clearpath Robotics.

Thursday Evening Program: Sponsored by Clearpath Robotics. Workshop participants gather at a nearby restaurant to network and socialize.

Learn more about the IROS Workshop.

DAY TWO: NSERC-Sponsored Canadian Robotics Stakeholder Meeting
“Towards a national robotics strategy for Canada”

Friday September 29, 2017 | 8:30am – 5:00pm | University of British Columbia (UBC)

On the second day of the program, robotics stakeholders from across the country gather at UBC for a full day brainstorming session to identify Canada’s unique strengths and opportunities relative to the global competition, and to align on a strategic vision for robotics in Canada.

Friday Evening Program: Sponsored by NSERC. Meeting participants gather at a nearby restaurant for the event’s closing dinner reception.

Learn more about the Canadian Robotics Stakeholder Meeting.

I was glad to see in the agenda that some of the international speakers represented research efforts from outside the usual Europe/US axis.

I have been in touch with one of the organizers (also mentioned in part one with regard to robot ethics), Ajung Moon (her website is here), who says that there will be a white paper available on the Canadian Robotics Network website at some point in the future. I’ll keep looking for it and, in the meantime, I wonder what the 2018 Canadian federal budget will offer robotics.

Robots and popular culture

For anyone living in Canada or the US, Westworld (television series) is probably the most recent and well-known ‘robot’ drama to premiere in the last year. As for movies, I think Ex Machina from 2014 probably qualifies in that category. Interestingly, both Westworld and Ex Machina seem quite concerned with sex, with Westworld adding significant doses of violence as another concern.

I am going to focus on another robot story, the 2012 movie, Robot & Frank, which features a care robot and an older man,

Frank (played by Frank Langella), a former jewel thief, teaches a robot the skills necessary to rob some neighbours of their valuables. The ethical issue broached in the film isn’t whether or not the robot should learn the skills and assist Frank in his thieving ways, although that’s touched on when Frank keeps pointing out that planning his heist requires he live more healthily. No, the problem arises afterward when the neighbour accuses Frank of the robbery and Frank removes what he believes is all the evidence. He believes he’s going to successfully evade arrest until the robot notes that Frank will have to erase its memory in order to remove all of the evidence. The film ends without the robot’s fate being made explicit.

In a way, I find the ethics query (was the robot Frank’s friend or just a machine?) posed in the film more interesting than the one in Vikander’s story, an issue which does have a history. For example, care aides, nurses, and/or servants would have dealt with requests to give an alcoholic patient a drink. Wouldn’t there already be established guidelines and practices which could be adapted for robots? Or, is this question made anew by something intrinsically different about robots?

To be clear, Vikander’s story is a good introduction and starting point for these kinds of discussions as is Moon’s ethical question. But they are starting points and I hope one day there’ll be a more extended discussion of the questions raised by Moon and noted in Vikander’s article (a two- or three-part series of articles? public discussions?).

How will humans react to robots?

Earlier there was the contention that intimate interactions with robots and sexbots would decrease empathy and the ability of human beings to interact with each other in caring ways. This sounds a bit like the argument about smartphones/cell phones and teenagers who don’t relate well to others in real life because most of their interactions are mediated through a screen, which many seem to prefer. It may be partially true but, arguably, books too are an antisocial technology, as noted in Walter J. Ong’s influential 1982 book, ‘Orality and Literacy’ (from the Walter J. Ong Wikipedia entry),

A major concern of Ong’s works is the impact that the shift from orality to literacy has had on culture and education. Writing is a technology like other technologies (fire, the steam engine, etc.) that, when introduced to a “primary oral culture” (which has never known writing) has extremely wide-ranging impacts in all areas of life. These include culture, economics, politics, art, and more. Furthermore, even a small amount of education in writing transforms people’s mentality from the holistic immersion of orality to interiorization and individuation. [emphases mine]

So, robotics and artificial intelligence would not be the first technologies to affect our brains and our social interactions.

There’s another area where human-robot interaction may have unintended personal consequences according to April Glaser’s Sept. 14, 2017 article on Slate.com (Note: Links have been removed),

The customer service industry is teeming with robots. From automated phone trees to touchscreens, software and machines answer customer questions, complete orders, send friendly reminders, and even handle money. For an industry that is, at its core, about human interaction, it’s increasingly being driven to a large extent by nonhuman automation.

But despite the dreams of science-fiction writers, few people enter a customer-service encounter hoping to talk to a robot. And when the robot malfunctions, as they so often do, it’s a human who is left to calm angry customers. It’s understandable that after navigating a string of automated phone menus and being put on hold for 20 minutes, a customer might take her frustration out on a customer service representative. Even if you know it’s not the customer service agent’s fault, there’s really no one else to get mad at. It’s not like a robot cares if you’re angry.

When human beings need help with something, says Madeleine Elish, an anthropologist and researcher at the Data and Society Institute who studies how humans interact with machines, they’re not only looking for the most efficient solution to a problem. They’re often looking for a kind of validation that a robot can’t give. “Usually you don’t just want the answer,” Elish explained. “You want sympathy, understanding, and to be heard”—none of which are things robots are particularly good at delivering. In a 2015 survey of over 1,300 people conducted by researchers at Boston University, over 90 percent of respondents said they start their customer service interaction hoping to speak to a real person, and 83 percent admitted that in their last customer service call they trotted through phone menus only to make their way to a human on the line at the end.

“People can get so angry that they have to go through all those automated messages,” said Brian Gnerer, a call center representative with AT&T in Bloomington, Minnesota. “They’ve been misrouted or been on hold forever or they pressed one, then two, then zero to speak to somebody, and they are not getting where they want.” And when people do finally get a human on the phone, “they just sigh and are like, ‘Thank God, finally there’s somebody I can speak to.’ ”

Even if robots don’t always make customers happy, more and more companies are making the leap to bring in machines to take over jobs that used to specifically necessitate human interaction. McDonald’s and Wendy’s both reportedly plan to add touchscreen self-ordering machines to restaurants this year. Facebook is saturated with thousands of customer service chatbots that can do anything from hail an Uber, retrieve movie times, to order flowers for loved ones. And of course, corporations prefer automated labor. As Andy Puzder, CEO of the fast-food chains Carl’s Jr. and Hardee’s and former Trump pick for labor secretary, bluntly put it in an interview with Business Insider last year, robots are “always polite, they always upsell, they never take a vacation, they never show up late, there’s never a slip-and-fall, or an age, sex, or race discrimination case.”

But those robots are backstopped by human beings. How does interacting with more automated technology affect the way we treat each other? …

“We know that people treat artificial entities like they’re alive, even when they’re aware of their inanimacy,” writes Kate Darling, a researcher at MIT who studies ethical relationships between humans and robots, in a recent paper on anthropomorphism in human-robot interaction. Sure, robots don’t have feelings and don’t feel pain (not yet, anyway). But as more robots rely on interaction that resembles human interaction, like voice assistants, the way we treat those machines will increasingly bleed into the way we treat each other.

It took me a while to realize that what Glaser is talking about is AI systems, not robots as such. (sigh) It’s so easy to conflate the concepts.

AI ethics (Toby Walsh and Suzanne Gildert)

Jack Stilgoe of the Guardian published a brief Oct. 9, 2017 introduction to his more substantive (30 mins.?) podcast interview with Dr. Toby Walsh where they discuss stupid AI amongst other topics (Note: A link has been removed),

Professor Toby Walsh has recently published a book – Android Dreams – giving a researcher’s perspective on the uncertainties and opportunities of artificial intelligence. Here, he explains to Jack Stilgoe that we should worry more about the short-term risks of stupid AI in self-driving cars and smartphones than the speculative risks of super-intelligence.

Professor Walsh discusses the effects that AI could have on our jobs, the shapes of our cities and our understandings of ourselves. As someone developing AI, he questions the hype surrounding the technology. He is scared by some drivers’ real-world experimentation with their not-quite-self-driving Teslas. And he thinks that Siri needs to start owning up to being a computer.

I found this discussion to cast a decidedly different light on the future of robotics and AI. Walsh is much more interested in discussing immediate issues like the problems posed by ‘self-driving’ cars. (Aside: Should we be calling them robot cars?)

One ethical issue Walsh raises is with data regarding accidents. He compares what’s happening with accident data from self-driving (robot) cars to how the aviation industry handles accidents. Hint: accident data involving airplanes is shared. Would you like to guess who does not share their data?

Sharing and analyzing data, and developing new safety techniques based on that data, have made flying a remarkably safe transportation technology. Walsh argues the same could be done for self-driving cars if companies like Tesla took the attitude that safety is in everyone’s best interests and shared their accident data in a scheme similar to the aviation industry’s.

In an Oct. 12, 2017 article by Matthew Braga for Canadian Broadcasting Corporation (CBC) news online, another ethical issue is raised by Suzanne Gildert (a participant in the Canadian Robotics Roadmap/Strategy meetings mentioned earlier here). Note: Links have been removed,

… Suzanne Gildert, the co-founder and chief science officer of Vancouver-based robotics company Kindred. Since 2014, her company has been developing intelligent robots [emphasis mine] that can be taught by humans to perform automated tasks — for example, handling and sorting products in a warehouse.

The idea is that when one of Kindred’s robots encounters a scenario it can’t handle, a human pilot can take control. The human can see, feel and hear the same things the robot does, and the robot can learn from how the human pilot handles the problematic task.

This process, called teleoperation, is one way to fast-track learning by manually showing the robot examples of what its trainers want it to do. But it also poses a potential moral and ethical quandary that will only grow more serious as robots become more intelligent.

“That AI is also learning my values,” Gildert explained during a talk on robot ethics at the Singularity University Canada Summit in Toronto on Wednesday [Oct. 11, 2017]. “Everything — my mannerisms, my behaviours — is all going into the AI.”

At its worst, everything from algorithms used in the U.S. to sentence criminals to image-recognition software has been found to inherit the racist and sexist biases of the data on which it was trained.

But just as bad habits can be learned, good habits can be learned too. The question is, if you’re building a warehouse robot like Kindred is, is it more effective to train those robots’ algorithms to reflect the personalities and behaviours of the humans who will be working alongside it? Or do you try to blend all the data from all the humans who might eventually train Kindred robots around the world into something that reflects the best strengths of all?
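The teleoperation fallback Braga describes can be sketched in code. Here’s a minimal, entirely hypothetical version (the class and method names are mine, not Kindred’s): the robot acts on its own policy while its confidence is high, hands control to a human pilot when it isn’t, and records the pilot’s actions as training examples,

```python
# Minimal sketch of a human-in-the-loop (teleoperation) fallback.
# Hypothetical names throughout -- this is not Kindred's actual system.
from dataclasses import dataclass, field

@dataclass
class TeleopFallbackController:
    confidence_threshold: float = 0.8
    demonstrations: list = field(default_factory=list)  # pilot examples

    def step(self, observation, policy, human_pilot):
        """Run one control step: robot if confident, else human pilot."""
        action, confidence = policy(observation)
        if confidence >= self.confidence_threshold:
            return action  # robot handles the scenario itself
        # Hand off to the human and keep the example for later retraining.
        action = human_pilot(observation)
        self.demonstrations.append((observation, action))
        return action

# Toy usage: a policy that is confident about boxes, unsure about bottles.
def policy(obs):
    return ("pick", 0.9) if obs["item"] == "box" else ("pick", 0.3)

controller = TeleopFallbackController()
print(controller.step({"item": "box"}, policy, lambda obs: "pick"))        # robot acts
print(controller.step({"item": "bottle"}, policy, lambda obs: "regrasp"))  # pilot acts
print(len(controller.demonstrations))  # 1 recorded demonstration
```

The design question Gildert raises lives in that demonstrations list: whose values and mannerisms end up recorded there.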

I notice Gildert distinguishes her robots as “intelligent robots” and then focuses on AI and issues with bias, which have already arisen with regard to algorithms (see my May 24, 2017 posting about bias in machine learning and AI). Note: If you’re in Vancouver on Oct. 26, 2017 and are interested in algorithms and bias, there’s a talk being given by Dr. Cathy O’Neil, author of Weapons of Math Destruction, on the topic of Gender and Bias in Algorithms. It’s not free but tickets are here.

Final comments

There is one more aspect I want to mention. Even as someone who usually deals with nanobots, it’s easy to start discussing robots as if the humanoid ones are the only ones that exist. To recapitulate, there are humanoid robots, utilitarian robots, intelligent robots, AI, nanobots, microscopic bots, and more, all of which raise questions about ethics and social impacts.

However, there is one more category I want to add to this list: cyborgs. They live amongst us now. Anyone who’s had a hip or knee replacement or a pacemaker or a deep brain stimulator or other such implanted device qualifies as a cyborg. Increasingly too, prosthetics are being introduced and made part of the body. My April 24, 2017 posting features this story,

This Case Western Reserve University (CWRU) video accompanies a March 28, 2017 CWRU news release, (h/t ScienceDaily March 28, 2017 news item)

Bill Kochevar grabbed a mug of water, drew it to his lips and drank through the straw.

His motions were slow and deliberate, but then Kochevar hadn’t moved his right arm or hand for eight years.

And it took some practice to reach and grasp just by thinking about it.

Kochevar, who was paralyzed below his shoulders in a bicycling accident, is believed to be the first person with quadriplegia in the world to have arm and hand movements restored with the help of two temporarily implanted technologies. [emphasis mine]

A brain-computer interface with recording electrodes under his skull, and a functional electrical stimulation (FES) system* activating his arm and hand, reconnect his brain to paralyzed muscles.

Does a brain-computer interface have an effect on the human brain and, if so, what might that be?

In any discussion (assuming there is funding for it) about ethics and social impact, we might want to invite the broadest range of people possible at an ‘earlyish’ stage (although we’re already pretty far down the ‘automation road’); otherwise, as Jack Stilgoe and Toby Walsh note, technological determinism holds sway.

Once again here are links for the articles and information mentioned in this double posting,

That’s it!

ETA Oct. 16, 2017: Well, I guess that wasn’t quite ‘it’. BBC’s (British Broadcasting Corporation) Magazine published a thoughtful Oct. 15, 2017 piece titled: Can we teach robots ethics?

‘Brewing up’ conductive inks for printable electronics

Scientists from Duke University aren’t exactly ‘brewing’ or ‘cooking up’ the inks but they do come close, according to a Jan. 3, 2017 news item on ScienceDaily,

By suspending tiny metal nanoparticles in liquids, Duke University scientists are brewing up conductive ink-jet printer “inks” to print inexpensive, customizable circuit patterns on just about any surface.

A Jan. 3, 2017 Duke University news release (also on EurekAlert), which originated the news item, explains why this technique could lead to more accessible printed electronics,

Printed electronics, which are already being used on a wide scale in devices such as the anti-theft radio frequency identification (RFID) tags you might find on the back of new DVDs, currently have one major drawback: for the circuits to work, they first have to be heated to melt all the nanoparticles together into a single conductive wire, making it impossible to print circuits on inexpensive plastics or paper.

A new study by Duke researchers shows that tweaking the shape of the nanoparticles in the ink might just eliminate the need for heat.

By comparing the conductivity of films made from different shapes of silver nanostructures, the researchers found that electrons zip through films made of silver nanowires much easier than films made from other shapes, like nanospheres or microflakes. In fact, electrons flowed so easily through the nanowire films that they could function in printed circuits without the need to melt them all together.

“The nanowires had a 4,000 times higher conductivity than the more commonly used silver nanoparticles that you would find in printed antennas for RFID tags,” said Benjamin Wiley, assistant professor of chemistry at Duke. “So if you use nanowires, then you don’t have to heat the printed circuits up to such high temperature and you can use cheaper plastics or paper.”

“There is really nothing else I can think of besides these silver nanowires that you can just print and it’s simply conductive, without any post-processing,” Wiley added.

These types of printed electronics could have applications far beyond smart packaging; researchers envision using the technology to make solar cells, printed displays, LEDs, touchscreens, amplifiers, batteries and even some implantable bio-electronic devices. The results appeared online Dec. 16 [2016] in ACS Applied Materials and Interfaces.

Silver has become a go-to material for making printed electronics, Wiley said, and a number of studies have recently appeared measuring the conductivity of films with different shapes of silver nanostructures. However, experimental variations make direct comparisons between the shapes difficult, and few reports have linked the conductivity of the films to the total mass of silver used, an important factor when working with a costly material.

“We wanted to eliminate any extra materials from the inks and simply hone in on the amount of silver in the films and the contacts between the nanostructures as the only source of variability,” said Ian Stewart, a recent graduate student in Wiley’s lab and first author on the ACS paper.

Stewart used known recipes to cook up silver nanostructures with different shapes, including nanoparticles, microflakes, and short and long nanowires, and mixed these nanostructures with distilled water to make simple “inks.” He then invented a quick and easy way to make thin films using equipment available in just about any lab — glass slides and double-sided tape.

“We used a hole punch to cut out wells from double-sided tape and stuck these to glass slides,” Stewart said. By adding a precise volume of ink into each tape “well” and then heating the wells — either to relatively low temperature to simply evaporate the water or to higher temperatures to begin melting the structures together — he created a variety of films to test.

The team say they weren’t surprised that the long nanowire films had the highest conductivity. Electrons usually flow easily through individual nanostructures but get stuck when they have to jump from one structure to the next, Wiley explained, and long nanowires greatly reduce the number of times the electrons have to make this “jump”.

But they were surprised at just how drastic the change was. “The resistivity of the long silver nanowire films is several orders of magnitude lower than silver nanoparticles and only 10 times greater than pure silver,” Stewart said.

The team is now experimenting with using aerosol jets to print silver nanowire inks in usable circuits. Wiley says they also want to explore whether silver-coated copper nanowires, which are significantly cheaper to produce than pure silver nanowires, will give the same effect.
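The junction argument in the release can be made concrete with a quick back-of-the-envelope calculation. Here’s a minimal sketch in Python; the wire cross-section and junction resistance are numbers I’ve assumed for illustration (only the bulk silver resistivity is a textbook value), not figures from the paper,

```python
# Minimal sketch: why longer nanowires make a more conductive film.
# A conduction path of length L built from wire segments of length l_w
# pays a contact resistance at every wire-to-wire junction.
RHO_AG = 1.59e-8      # bulk silver resistivity, ohm*m (textbook value)
AREA = 50e-9 * 50e-9  # assumed wire cross-section (50 nm x 50 nm), m^2
R_JUNCTION = 100.0    # assumed wire-wire contact resistance, ohms

def path_resistance(path_len_m, wire_len_m):
    """Resistance of one conduction path: wire resistance plus junctions."""
    n_segments = path_len_m / wire_len_m
    r_wires = RHO_AG * path_len_m / AREA          # intrinsic wire resistance
    r_junctions = (n_segments - 1) * R_JUNCTION   # one junction per joint
    return r_wires + r_junctions

# The same 1 mm path built from short (1 um) vs long (20 um) nanowires:
for wire_len in (1e-6, 20e-6):
    r = path_resistance(1e-3, wire_len)
    print(f"wire length {wire_len*1e6:4.0f} um -> path resistance {r:10.1f} ohms")
```

With these made-up numbers, the short-wire path is dominated by its thousand-odd junctions, while the long-wire path sits within a factor of two of the wires’ intrinsic resistance, which is the qualitative point Wiley makes.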

Here’s a link to and a citation for the paper,

Effect of Morphology on the Electrical Resistivity of Silver Nanostructure Films by Ian E. Stewart, Myung Jun Kim, and Benjamin J. Wiley. ACS Appl. Mater. Interfaces, Article ASAP
DOI: 10.1021/acsami.6b12289 Publication Date (Web): December 16, 2016

Copyright © 2016 American Chemical Society

This paper is behind a paywall but there is an image of the silver nanowires, which is not exactly compensation but is interesting,

Caption: Duke University chemists have found that silver nanowire films like these conduct electricity well enough to form functioning circuits without applying high temperatures, enabling printable electronics on heat-sensitive materials like paper or plastic.
Credit: Ian Stewart and Benjamin Wiley

Animal technology: a touchscreen for your dog, sonar lunch orders for dolphins, and more

A rather unexpected (for ignorant folks like me) approach to animal technology has been taken by Ilyena Hirskyj-Douglas in her June 17, 2016 piece on phys.org,

Imagine leaving your dog at home while it turns on the smart TV and chooses a programme to watch. Meanwhile you visit a zoo where you play interactive touchscreen games with the apes and watch the dolphins using sonar to order their lunch. In the field behind you, a farmer is stroking his flock of chickens virtually, leaving the drones to collect sheep while the cows milk themselves. Welcome to the unusual world of animal technology.

Hirskyj-Douglas’s piece was originally published as a June 15, 2016 essay about animal-computer interaction (ACI) and some of the latest work being done in the field on The Conversation website (Note: Links have been removed),

Animals have interacted with technology for a long time, from tracking devices for conservation research to zoos with early touchscreen computers. But more recently, the field of animal-computer interaction (ACI) has begun to explore in more detail exactly how animals use technology like this. The hope is that better understanding animals’ relationship with technology will mean we can use it to monitor and improve their welfare.

My own research involves building intelligent tracking devices for dogs that let them interact with media on a screen so we can study how dogs use TV and what they like to watch (if anything). Perhaps unsurprisingly, I’ve found that dogs like to watch videos of other dogs. This has led me to track dogs’ gaze across individual and multiple screens and attempt to work out how best to make media just for dogs.

Eventually I hope to make an interactive system that allows a dog to pick what they want to watch and that evolves by learning what media they like. This isn’t to create a toy for indulgent pet owners. Dogs are often left at home alone during the day or isolated in kennels. So interactive media technology could improve the animals’ welfare by providing a stimulus and a source of entertainment. …

This 2014 video (embedded in Hirskyj-Douglas’s essay) illustrates how touchscreens are used by great apes,

It’s all quite intriguing and I encourage you to read the essay in its entirety.

If you find the great apes project interesting, you can find out more about it (I believe it’s in the Primate Research category) and others at the Atlanta Zoo’s research webpage.

Nanowalls (like waffles) for touchscreens

ETH Zurich has announced a new technique for creating transparent electrodes in a Jan. 6, 2016 news item on ScienceDaily,

Transparent electrodes have been manufactured for use in touchscreens using a novel nanoprinting process. The new electrodes are some of the most transparent and conductive that have ever been developed.

From smartphones to the operating interfaces of ticket machines and cash dispensers, every touchscreen we use requires transparent electrodes: The devices’ glass surface is coated with a barely visible pattern made of conductive material. It is because of this that the devices recognise whether and where exactly a finger is touching the surface.
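As an aside, the ‘whether and where’ sensing the quote mentions is usually projected-capacitive: the transparent electrodes form a grid, and a finger changes the capacitance at nearby row/column intersections. Here’s a minimal sketch in Python (the baseline value, threshold, and locate_touch helper are all hypothetical) of how a controller might pick the touch location out of one frame of readings,

```python
# Minimal sketch of projected-capacitive touch location.
# All values are hypothetical, for illustration only.
BASELINE = 100.0       # assumed untouched reading at every grid node
NOISE_THRESHOLD = 5.0  # assumed minimum deviation that counts as a touch

def locate_touch(readings):
    """Return (row, col) of the node deviating most from baseline, or None."""
    best_delta, position = 0.0, None
    for r, row in enumerate(readings):
        for c, value in enumerate(row):
            delta = abs(value - BASELINE)
            if delta > best_delta:
                best_delta, position = delta, (r, c)
    return position if best_delta > NOISE_THRESHOLD else None

# One frame of readings: a finger near node (1, 2) lowers the capacitance.
frame = [[100, 100, 100, 100],
         [100,  99,  82, 100],
         [100, 100,  97, 100]]
print(locate_touch(frame))  # -> (1, 2)
```

Real controllers do considerably more (filtering, interpolating between neighbouring nodes for sub-grid accuracy), but the principle is the same.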

Here’s an image illustrating the new electrodes,

With a special mode of electrohydrodynamic ink-jet printing scientists can create a grid of ultra fine gold walls. (Visualisations: Ben Newton / Digit Works)

I think these electrodes resemble waffles,

[downloaded from https://github.com/jhermann/Stack-O-Waffles] Credit: jherman

Getting back to the electrodes themselves, a Jan. 6, 2016 ETH Zurich press release (also on EurekAlert*) by Fabio Bergamin, which originated the news item, provides more details,

Researchers under the direction of Dimos Poulikakos, Professor of Thermodynamics, have now used 3D print technology to create a new type of transparent electrode, which takes the form of a grid made of gold or silver “nanowalls” on a glass surface. The walls are so thin that they can hardly be seen with the naked eye. It is the first time that scientists have created nanowalls like these using 3D printing. The new electrodes have a higher conductivity and are more transparent than those made of indium tin oxide, the standard material used in smartphones and tablets today. This is a clear advantage: The more transparent the electrodes, the better the screen quality. And the more conductive they are, the more quickly and precisely the touchscreen will work.

Third dimension

“Indium tin oxide is used because the material has a relatively high degree of transparency and the production of thin layers has been well researched, but it is only moderately conductive,” says Patrik Rohner, a PhD student in Poulikakos’ team. In order to produce more conductive electrodes, the ETH researchers opted for gold and silver, which conduct electricity much better. But because these metals are not transparent, the scientists had to make use of the third dimension. ETH professor Poulikakos explains: “If you want to achieve both high conductivity and transparency in wires made from these metals, you have a conflict of objectives. As the cross-sectional area of gold and silver wires grows, the conductivity increases, but the grid’s transparency decreases.”

The solution was to use metal walls only 80 to 500 nanometres thick, which are almost invisible when viewed from above. Because they are two to four times taller than they are wide, the cross-sectional area, and thus the conductivity, is sufficiently high.

Ink-jet printer with tiny print head

The researchers produced these tiny metal walls using a printing process known as Nanodrip, which Poulikakos and his colleagues developed three years ago. Its basic principle is a process called electrohydrodynamic ink-jet printing. In this process scientists use inks made from metal nanoparticles in a solvent; an electrical field draws ultra-small droplets of the metallic ink out of a glass capillary. The solvent evaporates quickly, allowing a three-dimensional structure to be built up drop by drop.

What is special about the Nanodrip process is that the droplets that come out of the glass capillary are about ten times smaller than the aperture itself. This allows for much smaller structures to be printed. “Imagine a water drop hanging from a tap that is turned off. And now imagine that another tiny droplet is hanging from this drop – we are only printing the tiny droplet,” Poulikakos explains. The researchers managed to create this special form of droplet by perfectly balancing the composition of metallic ink and the electromagnetic field used.

Cost-efficient production

The next big challenge will now be to upscale the method and develop the print process further so that it can be implemented on an industrial scale. To achieve this, the scientists are working with colleagues from ETH spin-off company Scrona.
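The conflict of objectives Poulikakos describes is easy to quantify roughly. Here’s a minimal sketch in Python; the grid pitch is my assumption and I use the bulk gold resistivity, so the numbers are illustrative only, but they show why growing the walls taller buys conductivity without costing transparency,

```python
# Minimal sketch of the transparency vs. sheet-resistance trade-off for
# a square metal-grid electrode. Illustrative numbers, not ETH's data.
RHO_AU = 2.44e-8  # bulk gold resistivity, ohm*m

def grid_properties(width_m, height_m, pitch_m):
    """Rough transparency and sheet resistance of a square grid of lines."""
    transparency = (1 - width_m / pitch_m) ** 2  # open-aperture fraction
    # Current follows the lines running parallel to it: one line per pitch.
    sheet_resistance = RHO_AU * pitch_m / (width_m * height_m)  # ohm/square
    return transparency, sheet_resistance

pitch = 10e-6  # assumed 10 micrometre grid spacing
for width, height in [(200e-9, 200e-9), (200e-9, 800e-9), (800e-9, 200e-9)]:
    t, rs = grid_properties(width, height, pitch)
    print(f"w={width*1e9:3.0f} nm, h={height*1e9:3.0f} nm: "
          f"transparency {t:.1%}, sheet resistance {rs:.2f} ohm/sq")
```

The second and third grids have identical cross-sectional area and hence identical sheet resistance, but only the tall, narrow walls keep the transparency at about 96 per cent, which is the whole argument for printing in the third dimension.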

Here’s a link to and a citation for the paper,

Electrohydrodynamic NanoDrip Printing of High Aspect Ratio Metal Grid Transparent Electrodes by Julian Schneider, Patrick Rohner, Deepankur Thureja, Martin Schmid, Patrick Galliker, Dimos Poulikakos. Advanced Functional Materials DOI: 10.1002/adfm.201503705 First published: 15 December 2015

This paper is behind a paywall.

*'(also on EurekAlert)’ added on Jan. 7, 2016.

Touchless displays with 2D nanosheets and sweat

Swiping touchscreens with your finger has become a dominant means of accessing information in many applications, but there is at least one problem associated with this action. From an Oct. 2, 2015 news item on phys.org,

While touchscreens are practical, touchless displays would be even more so. That’s because, despite touchscreens having enabled the smartphone’s advance into our lives and being essential for us to be able to use cash dispensers or ticket machines, they do have certain disadvantages. Touchscreens suffer from mechanical wear over time and are a transmission path for bacteria and viruses. To avoid these problems, scientists at Stuttgart’s Max Planck Institute for Solid State Research and LMU Munich have now developed nanostructures that change their electrical and even their optical properties as soon as a finger comes anywhere near them.

Here’s what a touchless screen looks like when tracking,

Touchless colour change: A nanostructure containing alternating layers of phosphatoantimonate nanosheets and oxide … © Advanced Materials 2015/MPI for Solid State Research

An Oct. 1, 2015 Max Planck Institute press release, which originated the news item, gives technical details,

A touchless display may be able to capitalize on a human trait which is of vital importance, although sometimes unwanted: This is the fact that our body sweats – and is constantly emitting water molecules through tiny pores in the skin. Scientists of the Nanochemistry group led by Bettina Lotsch at the Max Planck Institute for Solid State Research in Stuttgart and the LMU Munich have now been able to visualize the transpiration of a finger with a special moisture sensor which reacts as soon as an object – like an index finger – approaches its surface, without touching it. The increasing humidity is converted into an electrical signal or translated into a colour change, thus enabling it to be measured.

Phosphatoantimonic acid is what enables it to do this. This acid is a crystalline solid at room temperature with a structure made up of antimony, phosphorus, oxygen and hydrogen atoms. “It’s long been known to scientists that this material is able to take up water and swells considerably in the process,” explained Pirmin Ganter, doctoral student at the Max Planck Institute for Solid State Research and the Chemistry Department at LMU Munich. This water uptake also changes the properties of the material. For instance, its electrical conductivity increases as the number of stored water molecules rises. This is what enables it to serve as a measure of ambient moisture.

A sandwich nanomaterial structure exposed to moisture also changes its colour

However, the scientists aren’t so interested in developing a new moisture sensor. What they really want is to use it in touchless displays. “Because these sensors react in a very local manner to any increase in moisture, it is quite conceivable that this sort of material with moisture-dependent properties could also be used for touchless displays and monitors,” said Ganter. Touchless screens of this kind would require nothing more than a finger to get near the display to change their electrical or optical properties – and with them the input signal – at a specific point on the display.

Taking phosphatoantimonate nanosheets as their basis, the Stuttgart scientists then developed a photonic nanostructure which reacts to the moisture by changing colour. “If this was built into a monitor, the users would then receive visible feedback to their finger motion,” explained Katalin Szendrei, also a doctoral student in Bettina Lotsch’s group. To this end, the scientists created a multilayer sandwich material with alternating layers of ultrathin phosphatoantimonate nanosheets and silicon dioxide (SiO2) or titanium dioxide (TiO2) nanoparticles. Comprising more than ten layers, the stack ultimately reached a height of little more than one millionth of a metre.

For one thing, the colour of the sandwich material can be set via the thickness of the layers. And for another, the colour of the sandwich changes if the scientists increase the relative humidity in the immediate surroundings of the material, for instance by moving a finger towards the screen. “The reason for this lies in the storage of water molecules between the phosphatoantimonate layers, which makes the layers swell considerably,” explained Katalin Szendrei. “A change in the thickness of the layers in this process is accompanied by a change in the colour of the sensor – produced in a similar way to what gives colour to a butterfly wing or in mother-of-pearl.”

The material reacts to the humidity change within a few milliseconds

This is a property that is fundamentally well known and characteristic of so-called photonic crystals. But scientists had never before observed such a large colour change as they now have in the lab in Stuttgart. “The colour of the nanostructure turns from blue to red when a finger gets near, for example. In this way, the colour can be tuned through the whole of the visible spectrum depending on the amount of water vapour taken up,” stressed Bettina Lotsch.

The scientists’ new approach is not only captivating because of the striking colour change. What’s also important is the fact that the material reacts to the change in humidity within a few milliseconds – literally in the blink of an eye. Previously reported materials normally took several seconds or more to respond. That is much too slow for practical applications. And there’s another thing that other materials couldn’t always do: The sandwich structure consisting of phosphatoantimonate nanosheets and oxide nanoparticles is highly stable from a chemical perspective and responds selectively to water vapour.

A layer protecting against chemical influences has to let moisture through

The scientists can imagine their materials being used in much more than just future generations of smartphones, tablets or notebooks. “Ultimately, we could see touchless displays also being deployed in many places where people currently have to touch monitors to navigate,” said Bettina Lotsch. For instance in cash dispensers or ticket machines, or even at the weighing scales in the supermarket’s vegetable aisle. Displays in public places that are used by many different people would have distinct hygiene benefits if they were touchless.

But before we see them being used in such places, the scientists have a few more challenges to overcome. It’s important, for example, that the nanostructures can be produced economically. To minimize wear, the structures still need to be coated with a protective layer if they’re going to be used in anything like a display. And that, again, has to meet not one but two different requirements: It must protect the moisture-sensitive layers against chemical and mechanical influences. And it must, of course, let the moisture pass through. But the Stuttgart scientists have an idea for how to achieve that already. An idea they are currently starting to put into practice with an additional cooperation partner on board.
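For the physics-minded: the colour change described above is the standard behaviour of a Bragg stack (a one-dimensional photonic crystal). As a rough sketch, at first order a stack of alternating layers with refractive indices n_1, n_2 and thicknesses d_1, d_2 reflects most strongly at

\lambda_{max} = 2 (n_1 d_1 + n_2 d_2)

Plugging in hypothetical numbers (mine, not the paper’s): an optical thickness n_1 d_1 + n_2 d_2 of 230 nm puts the reflection at 460 nm (blue); if water uptake swells the phosphatoantimonate layers enough to raise that sum to 310 nm, the reflection moves to 620 nm (red). Swelling red-shifts the colour, which matches the blue-to-red change Lotsch describes.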

Dexter Johnson’s Oct. 2, 2015 posting on his Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website) provides some additional context for this research (Note: A link has been removed),

In a world where the “swipe” has become a dominant computer interface method along with moving and clicking the mouse, the question becomes what’s next? For researchers at Stuttgart’s Max Planck Institute for Solid State Research and LMU Munich, Germany, the answer continues to be a swipe, but one in which you don’t actually need to touch the screen with your finger. Researchers call these no-contact computer screens touchless positioning interfaces (TPI).

Here’s a link to and a citation for the paper,

Touchless Optical Finger Motion Tracking Based on 2D Nanosheets with Giant Moisture Responsiveness by Katalin Szendrei, Pirmin Ganter, Olalla Sànchez-Sobrado, Roland Eger, Alexander Kuhn, and Bettina V. Lotsch. Advanced Materials DOI: 10.1002/adma.201503463 Article first published online: 22 SEP 2015

© 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim

This paper is behind a paywall.