Tag Archives: smartphones

Filmmaking beetles wearing teeny, tiny wireless cameras

Researchers at the University of Washington have developed a tiny camera that can ride aboard an insect. Here a Pinacate beetle explores the UW campus with the camera on its back. Credit: Mark Stone/University of Washington

Scientists at the University of Washington have created a removable wireless camera backpack for beetles and for tiny robots resembling beetles. I’m embedding a video shot by a beetle later in this post, along with a citation and link for the paper; near the end of the post you’ll also find links to my other posts on insects and technology.

As for the latest on insects and technology, there’s a July 15, 2020 news item on ScienceDaily,

In the movie “Ant-Man,” the title character can shrink in size and travel by soaring on the back of an insect. Now researchers at the University of Washington have developed a tiny wireless steerable camera that can also ride aboard an insect, giving everyone a chance to see an Ant-Man view of the world.

The camera, which streams video to a smartphone at 1 to 5 frames per second, sits on a mechanical arm that can pivot 60 degrees. This allows a viewer to capture a high-resolution, panoramic shot or track a moving object while expending a minimal amount of energy. To demonstrate the versatility of this system, which weighs about 250 milligrams — about one-tenth the weight of a playing card — the team mounted it on top of live beetles and insect-sized robots.

A July 15, 2020 University of Washington news release (also on EurekAlert), which originated the news item, provides more technical detail (although I still have a few questions) about the work,

“We have created a low-power, low-weight, wireless camera system that can capture a first-person view of what’s happening from an actual live insect or create vision for small robots,” said senior author Shyam Gollakota, a UW associate professor in the Paul G. Allen School of Computer Science & Engineering. “Vision is so important for communication and for navigation, but it’s extremely challenging to do it at such a small scale. As a result, prior to our work, wireless vision has not been possible for small robots or insects.”

Typical small cameras, such as those used in smartphones, use a lot of power to capture wide-angle, high-resolution photos, and that doesn’t work at the insect scale. While the cameras themselves are lightweight, the batteries they need to support them make the overall system too big and heavy for insects — or insect-sized robots — to lug around. So the team took a lesson from biology.

“Similar to cameras, vision in animals requires a lot of power,” said co-author Sawyer Fuller, a UW assistant professor of mechanical engineering. “It’s less of a big deal in larger creatures like humans, but flies are using 10 to 20% of their resting energy just to power their brains, most of which is devoted to visual processing. To help cut the cost, some flies have a small, high-resolution region of their compound eyes. They turn their heads to steer where they want to see with extra clarity, such as for chasing prey or a mate. This saves power over having high resolution over their entire visual field.”

To mimic an animal’s vision, the researchers used a tiny, ultra-low-power black-and-white camera that can sweep across a field of view with the help of a mechanical arm. The arm moves when the team applies a high voltage, which makes the material bend and move the camera to the desired position. Unless the team applies more power, the arm stays at that angle for about a minute before relaxing back to its original position. This is similar to how people can keep their head turned in one direction for only a short period of time before returning to a more neutral position.

“One advantage to being able to move the camera is that you can get a wide-angle view of what’s happening without consuming a huge amount of power,” said co-lead author Vikram Iyer, a UW doctoral student in electrical and computer engineering. “We can track a moving object without having to spend the energy to move a whole robot. These images are also at a higher resolution than if we used a wide-angle lens, which would create an image with the same number of pixels divided up over a much larger area.”

The camera and arm are controlled via Bluetooth from a smartphone from a distance up to 120 meters away, just a little longer than a football field.

The researchers attached their removable system to the backs of two different types of beetles — a death-feigning beetle and a Pinacate beetle. Similar beetles have been known to be able to carry loads heavier than half a gram, the researchers said.

“We made sure the beetles could still move properly when they were carrying our system,” said co-lead author Ali Najafi, a UW doctoral student in electrical and computer engineering. “They were able to navigate freely across gravel, up a slope and even climb trees.”

The beetles also lived for at least a year after the experiment ended. [emphasis mine]

“We added a small accelerometer to our system to be able to detect when the beetle moves. Then it only captures images during that time,” Iyer said. “If the camera is just continuously streaming without this accelerometer, we could record one to two hours before the battery died. With the accelerometer, we could record for six hours or more, depending on the beetle’s activity level.”
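The power-saving logic Iyer describes maps onto a simple duty-cycling loop. The sketch below is a minimal illustration of accelerometer-gated capture, not the team's actual firmware; the driver calls, threshold and frame interval are hypothetical.

```python
import time

MOTION_THRESHOLD = 0.05  # acceleration variance threshold (hypothetical value)
FRAME_INTERVAL = 0.5     # seconds between frames, i.e. ~2 frames per second

def beetle_is_moving(accelerometer):
    """Return True when recent samples deviate enough from the resting baseline."""
    samples = accelerometer.read_recent()            # hypothetical driver call
    baseline = sum(samples) / len(samples)
    return max(abs(s - baseline) for s in samples) > MOTION_THRESHOLD

def capture_loop(camera, accelerometer, radio):
    """Stream frames over Bluetooth only while the beetle is actually walking."""
    while True:
        if beetle_is_moving(accelerometer):
            radio.stream(camera.capture_frame())     # low-resolution monochrome frame
        # When the beetle is resting, no frame is captured or transmitted,
        # which is what stretches battery life from roughly 2 hours to 6+ hours.
        time.sleep(FRAME_INTERVAL)
```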

The researchers also used their camera system to design the world’s smallest terrestrial, power-autonomous robot with wireless vision. This insect-sized robot uses vibrations to move and consumes almost the same power as low-power Bluetooth radios need to operate.

The team found, however, that the vibrations shook the camera and produced distorted images. The researchers solved this issue by having the robot stop momentarily, take a picture and then resume its journey. With this strategy, the system was still able to move about 2 to 3 centimeters per second — faster than any other tiny robot that uses vibrations to move — and had a battery life of about 90 minutes.
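The stop-and-shoot strategy for the vibrating robot can be sketched the same way; again, the method names below are invented and this is only a schematic of the idea described in the news release, not the published control code.

```python
def stop_and_shoot(robot, camera, radio, move_time=1.0):
    """Alternate short bursts of vibration-driven motion with stationary captures."""
    while robot.battery_ok():
        robot.vibrate(move_time)                 # advance a few centimetres
        robot.stop()                             # pause so vibrations do not blur the image
        radio.stream(camera.capture_frame())     # capture and transmit while stationary
```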

While the team is excited about the potential for lightweight and low-power mobile cameras, the researchers acknowledge that this technology comes with a new set of privacy risks.

“As researchers we strongly believe that it’s really important to put things in the public domain so people are aware of the risks and so people can start coming up with solutions to address them,” Gollakota said.

Applications could range from biology to exploring novel environments, the researchers said. The team hopes that future versions of the camera will require even less power and be battery free, potentially solar-powered.

“This is the first time that we’ve had a first-person view from the back of a beetle while it’s walking around. There are so many questions you could explore, such as how does the beetle respond to different stimuli that it sees in the environment?” Iyer said. “But also, insects can traverse rocky environments, which is really challenging for robots to do at this scale. So this system can also help us out by letting us see or collect samples from hard-to-navigate spaces.”

###

Johannes James, a UW mechanical engineering doctoral student, is also a co-author on this paper. This research was funded by a Microsoft fellowship and the National Science Foundation.

I’m surprised there’s no funding from a military agency as the military and covert operation applications seem like an obvious pairing. In any event, here’s a link to and a citation for the paper,

Wireless steerable vision for live insects and insect-scale robots by Vikram Iyer, Ali Najafi, Johannes James, Sawyer Fuller, and Shyamnath Gollakota. Science Robotics 15 Jul 2020: Vol. 5, Issue 44, eabb0839 DOI: 10.1126/scirobotics.abb0839

This paper is behind a paywall.

Video and links

As promised, here’s the video the scientists have released,

These earlier posts feature some fairly ruthless uses of the insects:

  1. The first mention of insects and technology here is in a July 27, 2009 posting titled: Nanotechnology enables robots and human enhancement: part 4. The mention is in the second-to-last paragraph of the post. Then,
  2. A November 23, 2011 post titled: Cyborg insects and trust,
  3. A January 9, 2012 post titled: Controlling cyborg insects,
  4. A June 26, 2013 post titled: Steering cockroaches in the lab and in your backyard—cutting edge neuroscience, and, finally,
  5. An April 11, 2014 post titled: Computerized cockroaches as precursors to new healing techniques.

As for my questions (how do you put the backpacks on the beetles? is there a strap, is it glue, is it something else? how heavy is the backpack and camera? how old are the beetles you use for this experiment? where did you get the beetles from? do you have your own beetle farm where you breed them?), I’ll see if I can get some answers.

Smartphone as augmented reality system with software from Brown University

You need to see this,

Amazing, eh? The researchers are scheduled to present this work sometime this week at the ACM Symposium on User Interface Software and Technology (UIST) being held in New Orleans, US, from October 20-23, 2019.

Here’s more about ‘Portal-ble’ in an October 16, 2019 news item on ScienceDaily,

A new software system developed by Brown University [US] researchers turns cell phones into augmented reality portals, enabling users to place virtual building blocks, furniture and other objects into real-world backdrops, and use their hands to manipulate those objects as if they were really there.

The developers hope the new system, called Portal-ble, could be a tool for artists, designers, game developers and others to experiment with augmented reality (AR). The team will present the work later this month at the ACM Symposium on User Interface Software and Technology (UIST 2019) in New Orleans. The source code for Android is freely available for download on the researchers’ website, and iPhone code will follow soon.

“AR is going to be a great new mode of interaction,” said Jeff Huang, an assistant professor of computer science at Brown who developed the system with his students. “We wanted to make something that made AR portable so that people could use it anywhere without any bulky headsets. We also wanted people to be able to interact with the virtual world in a natural way using their hands.”

An October 16, 2019 Brown University news release (also on EurekAlert), which originated the news item, provides more detail,

Huang said the idea for Portal-ble’s “hands-on” interaction grew out of some frustration with AR apps like Pokemon GO. AR apps use smartphones to place virtual objects (like Pokemon characters) into real-world scenes, but interacting with those objects requires users to swipe on the screen.

“Swiping just wasn’t a satisfying way of interacting,” Huang said. “In the real world, we interact with objects with our hands. We turn doorknobs, pick things up and throw things. So we thought manipulating virtual objects by hand would be much more powerful than swiping. That’s what’s different about Portal-ble.”

The platform makes use of a small infrared sensor mounted on the back of a phone. The sensor tracks the position of people’s hands in relation to virtual objects, enabling users to pick objects up, turn them, stack them or drop them. It also lets people use their hands to virtually “paint” onto real-world backdrops. As a demonstration, Huang and his students used the system to paint a virtual garden into a green space on Brown’s College Hill campus.

Huang says the main technical contribution of the work was developing the right accommodations and feedback tools to enable people to interact intuitively with virtual objects.

“It turns out that picking up a virtual object is really hard if you try to apply real-world physics,” Huang said. “People try to grab in the wrong place, or they put their fingers through the objects. So we had to observe how people tried to interact with these objects and then make our system able to accommodate those tendencies.”

To do that, Huang enlisted students in a class he was teaching to come up with tasks they might want to do in the AR world — stacking a set of blocks, for example. The students then asked other people to try performing those tasks using Portal-ble, while recording what people were able to do and what they couldn’t. They could then adjust the system’s physics and user interface to make interactions more successful.

“It’s a little like what happens when people draw lines in Photoshop,” Huang said. “The lines people draw are never perfect, but the program can smooth them out and make them perfectly straight. Those were the kinds of accommodations we were trying to make with these virtual objects.”

The team also added sensory feedback — visual highlights on objects and phone vibrations — to make interactions easier. Huang said he was somewhat surprised that phone vibrations helped users to interact. Users feel the vibrations in the hand they’re using to hold the phone, not in the hand that’s actually grabbing for the virtual object. Still, Huang said, vibration feedback helped users to interact with objects more successfully.
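As an illustration of the kind of accommodation Huang describes, a grab test can be made deliberately forgiving: accept a grab whenever the pinching fingers come within some tolerance of a virtual object, then snap the object to the hand and give visual and haptic feedback. The snippet below is only a guess at the general idea, not Portal-ble's code; the class names, tolerance value and callback are invented.

```python
import math
from dataclasses import dataclass

GRAB_TOLERANCE = 0.06  # metres; hypothetical "forgiveness" radius around an object

@dataclass
class Vec3:
    x: float
    y: float
    z: float

def distance(a: Vec3, b: Vec3) -> float:
    return math.sqrt((a.x - b.x) ** 2 + (a.y - b.y) ** 2 + (a.z - b.z) ** 2)

def try_grab(pinch_point, objects, give_feedback):
    """Return the nearest virtual object within tolerance of the pinch, if any.

    pinch_point: Vec3 midpoint between thumb and index fingertips (from the hand tracker).
    objects: list of (object_id, centre Vec3) pairs in the virtual scene.
    give_feedback: callback that highlights the object and vibrates the phone.
    """
    if not objects:
        return None
    obj_id, centre = min(objects, key=lambda item: distance(pinch_point, item[1]))
    if distance(pinch_point, centre) <= GRAB_TOLERANCE:
        give_feedback(obj_id)   # visual highlight plus a phone vibration
        return obj_id           # caller snaps the object to the tracked hand pose
    return None
```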

In follow-up studies, users reported that the accommodations and feedback used by the system made tasks significantly easier, less time-consuming and more satisfying.

Huang and his students plan to continue working with Portal-ble — expanding its object library, refining interactions and developing new activities. They also hope to streamline the system to make it run entirely on a phone. Currently, the system requires an external infrared sensor and a compute stick for extra processing power.

Huang hopes people will download the freely available source code and try it for themselves. 
“We really just want to put this out there and see what people do with it,” he said. “The code is on our website for people to download, edit and build off of. It will be interesting to see what people do with it.”

Co-authors on the research paper were Jing Qian, Jiaju Ma, Xiangyu Li, Benjamin Attal, Haoming Lai, James Tompkin and John Hughes. The work was supported by the National Science Foundation (IIS-1552663) and by a gift from Pixar.

You can find the conference paper here on jeffhuang.com,

Portal-ble: Intuitive Free-hand Manipulation in Unbounded Smartphone-based Augmented Reality by Jing Qian, Jiaju Ma, Xiangyu Li, Benjamin Attal, Haoming Lai, James Tompkin, John F. Hughes, and Jeff Huang. Brown University, Providence, RI, USA; Southeast University, Nanjing, China. Presented at the ACM Symposium on User Interface Software and Technology (UIST 2019), New Orleans, US, October 20-23, 2019.

This is the first time I’ve seen an augmented reality system that seems accessible, i.e., affordable. You can find out more on the Portal-ble ‘resource’ page where you’ll also find a link to the source code repository. The researchers, as noted in the news release, have an Android version available now with an iPhone version to be released in the future.

Colo(u)r-changing nanolaser inspired by chameleons

Caption: Novel nanolaser leverages the same color-changing mechanism that a chameleon uses to camouflage its skin. Credit: Egor Kamelev. Courtesy: Northwestern University

I wish there was some detail included about how those colo(u)rs were achieved in that photograph. Strangely, Northwestern University (Evanston, Illinois, US) is more interested in describing the technology that chameleons have inspired. A June 20, 2018 news item on ScienceDaily announces the research,

As a chameleon shifts its color from turquoise to pink to orange to green, nature’s design principles are at play. Complex nano-mechanics are quietly and effortlessly working to camouflage the lizard’s skin to match its environment.

Inspired by nature, a Northwestern University team has developed a novel nanolaser that changes colors using the same mechanism as chameleons. The work could open the door for advances in flexible optical displays in smartphones and televisions, wearable photonic devices and ultra-sensitive sensors that measure strain.

A June 20, 2018 Northwestern University news release (also on EurekAlert) by Amanda Morris, which originated the news item, expands on the theme,

“Chameleons can easily change their colors by controlling the spacing among the nanocrystals on their skin, which determines the color we observe,” said Teri W. Odom, Charles E. and Emma H. Morrison Professor of Chemistry in Northwestern’s Weinberg College of Arts and Sciences. “This coloring based on surface structure is chemically stable and robust.”

The research was published online yesterday [June 19, 2018] in the journal Nano Letters. Odom, who is the associate director of Northwestern’s International Institute of Nanotechnology, and George C. Schatz, Charles E. and Emma H. Morrison Professor of Chemistry in Weinberg, served as the paper’s co-corresponding authors.

The same way a chameleon controls the spacing of nanocrystals on its skin, the Northwestern team’s laser exploits periodic arrays of metal nanoparticles on a stretchable, polymer matrix. As the matrix either stretches to pull the nanoparticles farther apart or contracts to push them closer together, the wavelength of light emitted by the laser changes, which also changes its color.

“Hence, by stretching and releasing the elastomer substrate, we could select the emission color at will,” Odom said.
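A rough way to see why stretching tunes the colour: for a periodic plasmonic lattice, the lasing wavelength approximately tracks the lattice's diffractive mode, which at normal incidence scales with the particle spacing. Under that simplifying assumption (the paper's hybrid quadrupole modes will follow a more complicated dispersion), a uniaxial strain ε applied to an initial period a₀ shifts the emission roughly as

$$\lambda_{\text{lasing}} \approx n_{\text{eff}}\, a_0\,(1 + \varepsilon),$$

where n_eff is the effective refractive index of the surrounding elastomer. Stretching (ε > 0) pulls the particles apart and red-shifts the emission; releasing the strain reverses the shift, consistent with Odom's description of selecting the colour at will.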

The resulting laser is robust, tunable, reversible and has a high sensitivity to strain. These properties are critical for applications in responsive optical displays, on-chip photonic circuits and multiplexed optical communication.

Here’s a link to and a citation for the paper,

Stretchable Nanolasing from Hybrid Quadrupole Plasmons by Danqing Wang, Marc R. Bourgeois, Won-Kyu Lee, Ran Li, Dhara Trivedi, Michael P. Knudson, Weijia Wang, George C. Schatz, and Teri W. Odom. Nano Lett., Article ASAP DOI: 10.1021/acs.nanolett.8b01774 Publication Date (Web): June 18, 2018

Copyright © 2018 American Chemical Society

This paper is behind a paywall.

Robots in Vancouver and in Canada (two of two)

This is the second of a two-part posting about robots in Vancouver and Canada. The first part included a definition, a brief mention of a robot ethics quandary, and sexbots. This part is all about the future. (Part one is here.)

Canadian Robotics Strategy

Meetings were held Sept. 28 – 29, 2017 in, surprisingly, Vancouver. (For those who don’t know, this is surprising because most of the robotics and AI research seems to be concentrated in eastern Canada. If you don’t believe me, take a look at the speaker list for Day 2 or the ‘Canadian Stakeholder’ meeting day.) From the NSERC (Natural Sciences and Engineering Research Council) events page of the Canadian Robotics Network,

Join us as we gather robotics stakeholders from across the country to initiate the development of a national robotics strategy for Canada. Sponsored by the Natural Sciences and Engineering Research Council of Canada (NSERC), this two-day event coincides with the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2017) in order to leverage the experience of international experts as we explore Canada’s need for a national robotics strategy.

Where
Vancouver, BC, Canada

When
Thursday September 28 & Friday September 29, 2017 — Save the date!

Download the full agenda and speakers’ list here.

Objectives

The purpose of this two-day event is to gather members of the robotics ecosystem from across Canada to initiate the development of a national robotics strategy that builds on our strengths and capacities in robotics, and is uniquely tailored to address Canada’s economic needs and social values.

This event has been sponsored by the Natural Sciences and Engineering Research Council of Canada (NSERC) and is supported in kind by the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2017) as an official Workshop of the conference.  The first of two days coincides with IROS 2017 – one of the premiere robotics conferences globally – in order to leverage the experience of international robotics experts as we explore Canada’s need for a national robotics strategy here at home.

Who should attend

Representatives from industry, research, government, startups, investment, education, policy, law, and ethics who are passionate about building a robust and world-class ecosystem for robotics in Canada.

Program Overview

Download the full agenda and speakers’ list here.

DAY ONE: IROS Workshop 

“Best practices in designing effective roadmaps for robotics innovation”

Thursday September 28, 2017 | 8:30am – 5:00pm | Vancouver Convention Centre

Morning Program: “Developing robotics innovation policy and establishing key performance indicators that are relevant to your region.” Leading international experts share their experience designing robotics strategies and policy frameworks in their regions and explore international best practices. Opening Remarks by Prof. Hong Zhang, IROS 2017 Conference Chair.

Afternoon Program: “Understanding the Canadian robotics ecosystem.” Canadian stakeholders from research, industry, investment, ethics and law provide a collective overview of the Canadian robotics ecosystem. Opening Remarks by Ryan Gariepy, CTO of Clearpath Robotics.

Thursday Evening Program (sponsored by Clearpath Robotics): Workshop participants gather at a nearby restaurant to network and socialize.

Learn more about the IROS Workshop.

DAY TWO: NSERC-Sponsored Canadian Robotics Stakeholder Meeting
“Towards a national robotics strategy for Canada”

Friday September 29, 2017 | 8:30am – 5:00pm | University of British Columbia (UBC)

On the second day of the program, robotics stakeholders from across the country gather at UBC for a full day brainstorming session to identify Canada’s unique strengths and opportunities relative to the global competition, and to align on a strategic vision for robotics in Canada.

Friday Evening Program (sponsored by NSERC): Meeting participants gather at a nearby restaurant for the event’s closing dinner reception.

Learn more about the Canadian Robotics Stakeholder Meeting.

I was glad to see in the agenda that some of the international speakers represented research efforts from outside the usual Europe/US axis.

I have been in touch with one of the organizers (also mentioned in part one with regard to robot ethics), Ajung Moon (her website is here), who says that there will be a white paper available on the Canadian Robotics Network website at some point in the future. I’ll keep looking for it and, in the meantime, I wonder what the 2018 Canadian federal budget will offer robotics.

Robots and popular culture

For anyone living in Canada or the US, Westworld (television series) is probably the most recent and well-known ‘robot’ drama to premiere in the last year. As for movies, I think Ex Machina from 2014 probably qualifies in that category. Interestingly, both Westworld and Ex Machina seem quite concerned with sex, with Westworld adding significant doses of violence as another concern.

I am going to focus on another robot story, the 2012 movie, Robot & Frank, which features a care robot and an older man,

Frank (played by Frank Langella), a former jewel thief, teaches a robot the skills necessary to rob some neighbours of their valuables. The ethical issue broached in the film isn’t whether or not the robot should learn the skills and assist Frank in his thieving ways, although that’s touched on when Frank keeps pointing out that planning his heist requires he live more healthily. No, the problem arises afterward when the neighbour accuses Frank of the robbery and Frank removes what he believes is all the evidence. He believes he’s going to successfully evade arrest until the robot notes that Frank will have to erase its memory in order to remove all of the evidence. The film ends without the robot’s fate being made explicit.

In a way, I find the ethics query (was the robot Frank’s friend or just a machine?) posed in the film more interesting than the one in Vikander’s story, an issue which does have a history. For example, care aides, nurses, and/or servants would have dealt with requests to give an alcoholic patient a drink. Wouldn’t there already be established guidelines and practices which could be adapted for robots? Or, is this question made anew by something intrinsically different about robots?

To be clear, Vikander’s story is a good introduction and starting point for these kinds of discussions as is Moon’s ethical question. But they are starting points and I hope one day there’ll be a more extended discussion of the questions raised by Moon and noted in Vikander’s article (a two- or three-part series of articles? public discussions?).

How will humans react to robots?

Earlier there was the contention that intimate interactions with robots and sexbots would decrease empathy and the ability of human beings to interact with each other in caring ways. This sounds a bit like the argument about smartphones/cell phones and teenagers who don’t relate well to others in real life because most of their interactions are mediated through a screen, which many seem to prefer. It may be partially true but, arguably, books too are an antisocial technology, as noted in Walter J. Ong’s influential 1982 book, ‘Orality and Literacy’ (from the Walter J. Ong Wikipedia entry),

A major concern of Ong’s works is the impact that the shift from orality to literacy has had on culture and education. Writing is a technology like other technologies (fire, the steam engine, etc.) that, when introduced to a “primary oral culture” (which has never known writing) has extremely wide-ranging impacts in all areas of life. These include culture, economics, politics, art, and more. Furthermore, even a small amount of education in writing transforms people’s mentality from the holistic immersion of orality to interiorization and individuation. [emphases mine]

So, robotics and artificial intelligence would not be the first technologies to affect our brains and our social interactions.

There’s another area where human-robot interaction may have unintended personal consequences according to April Glaser’s Sept. 14, 2017 article on Slate.com (Note: Links have been removed),

The customer service industry is teeming with robots. From automated phone trees to touchscreens, software and machines answer customer questions, complete orders, send friendly reminders, and even handle money. For an industry that is, at its core, about human interaction, it’s increasingly being driven to a large extent by nonhuman automation.

But despite the dreams of science-fiction writers, few people enter a customer-service encounter hoping to talk to a robot. And when the robot malfunctions, as they so often do, it’s a human who is left to calm angry customers. It’s understandable that after navigating a string of automated phone menus and being put on hold for 20 minutes, a customer might take her frustration out on a customer service representative. Even if you know it’s not the customer service agent’s fault, there’s really no one else to get mad at. It’s not like a robot cares if you’re angry.

When human beings need help with something, says Madeleine Elish, an anthropologist and researcher at the Data and Society Institute who studies how humans interact with machines, they’re not only looking for the most efficient solution to a problem. They’re often looking for a kind of validation that a robot can’t give. “Usually you don’t just want the answer,” Elish explained. “You want sympathy, understanding, and to be heard”—none of which are things robots are particularly good at delivering. In a 2015 survey of over 1,300 people conducted by researchers at Boston University, over 90 percent of respondents said they start their customer service interaction hoping to speak to a real person, and 83 percent admitted that in their last customer service call they trotted through phone menus only to make their way to a human on the line at the end.

“People can get so angry that they have to go through all those automated messages,” said Brian Gnerer, a call center representative with AT&T in Bloomington, Minnesota. “They’ve been misrouted or been on hold forever or they pressed one, then two, then zero to speak to somebody, and they are not getting where they want.” And when people do finally get a human on the phone, “they just sigh and are like, ‘Thank God, finally there’s somebody I can speak to.’ ”

Even if robots don’t always make customers happy, more and more companies are making the leap to bring in machines to take over jobs that used to specifically necessitate human interaction. McDonald’s and Wendy’s both reportedly plan to add touchscreen self-ordering machines to restaurants this year. Facebook is saturated with thousands of customer service chatbots that can do anything from hail an Uber, retrieve movie times, to order flowers for loved ones. And of course, corporations prefer automated labor. As Andy Puzder, CEO of the fast-food chains Carl’s Jr. and Hardee’s and former Trump pick for labor secretary, bluntly put it in an interview with Business Insider last year, robots are “always polite, they always upsell, they never take a vacation, they never show up late, there’s never a slip-and-fall, or an age, sex, or race discrimination case.”

But those robots are backstopped by human beings. How does interacting with more automated technology affect the way we treat each other? …

“We know that people treat artificial entities like they’re alive, even when they’re aware of their inanimacy,” writes Kate Darling, a researcher at MIT who studies ethical relationships between humans and robots, in a recent paper on anthropomorphism in human-robot interaction. Sure, robots don’t have feelings and don’t feel pain (not yet, anyway). But as more robots rely on interaction that resembles human interaction, like voice assistants, the way we treat those machines will increasingly bleed into the way we treat each other.

It took me a while to realize that what Glaser is talking about are AI systems and not robots as such. (sigh) It’s so easy to conflate the concepts.

AI ethics (Toby Walsh and Suzanne Gildert)

Jack Stilgoe of the Guardian published a brief Oct. 9, 2017 introduction to his more substantive (30 mins.?) podcast interview with Dr. Toby Walsh where they discuss stupid AI amongst other topics (Note: A link has been removed),

Professor Toby Walsh has recently published a book – Android Dreams – giving a researcher’s perspective on the uncertainties and opportunities of artificial intelligence. Here, he explains to Jack Stilgoe that we should worry more about the short-term risks of stupid AI in self-driving cars and smartphones than the speculative risks of super-intelligence.

Professor Walsh discusses the effects that AI could have on our jobs, the shapes of our cities and our understandings of ourselves. As someone developing AI, he questions the hype surrounding the technology. He is scared by some drivers’ real-world experimentation with their not-quite-self-driving Teslas. And he thinks that Siri needs to start owning up to being a computer.

I found this discussion to cast a decidedly different light on the future of robotics and AI. Walsh is much more interested in discussing immediate issues like the problems posed by ‘self-driving’ cars. (Aside: Should we be calling them robot cars?)

One ethical issue Walsh raises is with data regarding accidents. He compares what’s happening with accident data from self-driving (robot) cars to how the aviation industry handles accidents. Hint: accident data involving airplanes is shared. Would you like to guess who does not share their data?

Sharing and analyzing data and developing new safety techniques based on that data has made flying a remarkably safe transportation technology. Walsh argues the same could be done for self-driving cars if companies like Tesla took the attitude that safety is in everyone’s best interests and shared their accident data in a scheme similar to the aviation industry’s.

In an Oct. 12, 2017 article by Matthew Braga for Canadian Broadcasting Corporation (CBC) news online, another ethical issue is raised by Suzanne Gildert (a participant in the Canadian Robotics Roadmap/Strategy meetings mentioned earlier here). Note: Links have been removed,

… Suzanne Gildert, the co-founder and chief science officer of Vancouver-based robotics company Kindred. Since 2014, her company has been developing intelligent robots [emphasis mine] that can be taught by humans to perform automated tasks — for example, handling and sorting products in a warehouse.

The idea is that when one of Kindred’s robots encounters a scenario it can’t handle, a human pilot can take control. The human can see, feel and hear the same things the robot does, and the robot can learn from how the human pilot handles the problematic task.

This process, called teleoperation, is one way to fast-track learning by manually showing the robot examples of what its trainers want it to do. But it also poses a potential moral and ethical quandary that will only grow more serious as robots become more intelligent.

“That AI is also learning my values,” Gildert explained during a talk on robot ethics at the Singularity University Canada Summit in Toronto on Wednesday [Oct. 11, 2017]. “Everything — my mannerisms, my behaviours — is all going into the AI.”

At its worst, everything from algorithms used in the U.S. to sentence criminals to image-recognition software has been found to inherit the racist and sexist biases of the data on which it was trained.

But just as bad habits can be learned, good habits can be learned too. The question is, if you’re building a warehouse robot like Kindred is, is it more effective to train those robots’ algorithms to reflect the personalities and behaviours of the humans who will be working alongside it? Or do you try to blend all the data from all the humans who might eventually train Kindred robots around the world into something that reflects the best strengths of all?

I notice Gildert distinguishes her robots as “intelligent robots” and then focuses on AI and issues with bias, which have already arisen with regard to algorithms (see my May 24, 2017 posting about bias in machine learning and AI). Note: if you’re in Vancouver on Oct. 26, 2017 and interested in algorithms and bias, there’s a talk being given by Dr. Cathy O’Neil, author of Weapons of Math Destruction, on the topic of Gender and Bias in Algorithms. It’s not free but tickets are here.

Final comments

There is one more aspect I want to mention. Even as someone who usually deals with nanobots, it’s easy to start discussing robots as if the humanoid ones are the only ones that exist. To recapitulate, there are humanoid robots, utilitarian robots, intelligent robots, AI, nanobots, microscopic bots, and more, all of which raise questions about ethics and social impacts.

However, there is one more category I want to add to this list: cyborgs. They live amongst us now. Anyone who’s had a hip or knee replacement or a pacemaker or a deep brain stimulator or other such implanted device qualifies as a cyborg. Increasingly too, prosthetics are being introduced and made part of the body. My April 24, 2017 posting features this story,

This Case Western Reserve University (CWRU) video accompanies a March 28, 2017 CWRU news release, (h/t ScienceDaily March 28, 2017 news item)

Bill Kochevar grabbed a mug of water, drew it to his lips and drank through the straw.

His motions were slow and deliberate, but then Kochevar hadn’t moved his right arm or hand for eight years.

And it took some practice to reach and grasp just by thinking about it.

Kochevar, who was paralyzed below his shoulders in a bicycling accident, is believed to be the first person with quadriplegia in the world to have arm and hand movements restored with the help of two temporarily implanted technologies. [emphasis mine]

A brain-computer interface with recording electrodes under his skull, and a functional electrical stimulation (FES) system activating his arm and hand, reconnect his brain to paralyzed muscles.

Does a brain-computer interface have an effect on the human brain and, if so, what might that be?

In any discussion (assuming there is funding for it) about ethics and social impact, we might want to invite the broadest range of people possible at an ‘earlyish’ stage (although we’re already pretty far down the ‘automation road’), or, as Jack Stilgoe and Toby Walsh note, technological determinism holds sway.

Once again here are links for the articles and information mentioned in this double posting,

That’s it!

ETA Oct. 16, 2017: Well, I guess that wasn’t quite ‘it’. BBC’s (British Broadcasting Corporation) Magazine published a thoughtful Oct. 15, 2017 piece titled: Can we teach robots ethics?

Shades of the Nokia Morph: a smartphone that conforms to your wrist

A March 16, 2017 news item on Nanowerk brought back some memories for me,

Some day, your smartphone might completely conform to your wrist, and when it does, it might be covered in pure gold, thanks to researchers at Missouri University of Science and Technology.

Nokia, a Finnish telecommunications company, was promoting its idea for a smartphone ‘and more’ that could be worn around your wrist in a concept called the Morph. It was introduced in 2008 at the Museum of Modern Art in New York City (see my March 20, 2010 posting for one of my last updates on this moribund project). Here’s Nokia’s Morph video (almost 6 mins.),

Getting back to the present day, here’s what the Missouri researchers are working on,

An example of a gold foil peeled from single crystal silicon. Reprinted with permission from Naveen Mahenderkar et al., Science [355]:[1203] (2017)

A March 16, 2017 Missouri University of Science and Technology news release, by Greg Katski, which originated the news item, provides more details about this Missouri version (Note: A link has been removed),

Writing in the March 17 [2017] issue of the journal Science, the S&T researchers say they have developed a way to “grow” thin layers of gold on single crystal wafers of silicon, remove the gold foils, and use them as substrates on which to grow other electronic materials. The research team’s discovery could revolutionize wearable or “flexible” technology research, greatly improving the versatility of such electronics in the future.

According to lead researcher Jay A. Switzer, the majority of research into wearable technology has been done using polymer substrates, or substrates made up of multiple crystals. “And then they put some typically organic semiconductor on there that ends up being flexible, but you lose the order that (silicon) has,” says Switzer, Donald L. Castleman/FCR Endowed Professor of Discovery in Chemistry at S&T.

Because the polymer substrates are made up of multiple crystals, they have what are called grain boundaries, says Switzer. These grain boundaries can greatly limit the performance of an electronic device.

“Say you’re making a solar cell or an LED,” he says. “In a semiconductor, you have electrons and you have holes, which are the opposite of electrons. They can combine at grain boundaries and give off heat. And then you end up losing the light that you get out of an LED, or the current or voltage that you might get out of a solar cell.”

Most electronics on the market are made of silicon because it’s “relatively cheap, but also highly ordered,” Switzer says.

“99.99 percent of electronics are made out of silicon, and there’s a reason – it works great,” he says. “It’s a single crystal, and the atoms are perfectly aligned. But, when you have a single crystal like that, typically, it’s not flexible.”

By starting with single crystal silicon and growing gold foils on it, Switzer is able to keep the high order of silicon on the foil. But because the foil is gold, it’s also highly durable and flexible.

“We bent it 4,000 times, and basically the resistance didn’t change,” he says.

The gold foils are also essentially transparent because they are so thin. According to Switzer, his team has peeled foils as thin as seven nanometers.

Switzer says the challenge his research team faced was not in growing gold on the single crystal silicon, but getting it to peel off as such a thin layer of foil. Gold typically bonds very well to silicon.

“So we came up with this trick where we could photo-electrochemically oxidize the silicon,” Switzer says. “And the gold just slides off.”

Photoelectrochemical oxidation is the process by which light enables a semiconductor material, in this case silicon, to promote a catalytic oxidation reaction.

Switzer says thousands of gold foils—or foils of any number of other metals—can be made from a single crystal wafer of silicon.

The research team’s discovery can be considered a “happy accident.” Switzer says they were looking for a cheap way to make single crystals when they discovered this process.

“This is something that I think a lot of people who are interested in working with highly ordered materials like single crystals would appreciate making really easily,” he says. “Besides making flexible devices, it’s just going to open up a field for anybody who wants to work with single crystals.”

Here’s a link to and a citation for the paper,

Epitaxial lift-off of electrodeposited single-crystal gold foils for flexible electronics by Naveen K. Mahenderkar, Qingzhi Chen, Ying-Chau Liu, Alexander R. Duchild, Seth Hofheins, Eric Chason, Jay A. Switzer. Science  17 Mar 2017: Vol. 355, Issue 6330, pp. 1203-1206 DOI: 10.1126/science.aam5830

This paper is behind a paywall.

Smartphone battery inspired by your guts?

The conversion of bacteria from an enemy to be vanquished at all costs to a ‘frenemy’, a friendly enemy supplying possible solutions for problems, is fascinating. An Oct. 26, 2016 news item on Nanowerk falls into the ‘frenemy’ camp,

A new prototype of a lithium-sulphur battery – which could have five times the energy density of a typical lithium-ion battery – overcomes one of the key hurdles preventing their commercial development by mimicking the structure of the cells which allow us to absorb nutrients.

Researchers have developed a prototype of a next-generation lithium-sulphur battery which takes its inspiration in part from the cells lining the human intestine. The batteries, if commercially developed, would have five times the energy density of the lithium-ion batteries used in smartphones and other electronics.

An Oct. 26, 2016 University of Cambridge press release (also on EurekAlert), which originated the news item, expands on the theme and provides some good explanations of how lithium-ion batteries and lithium-sulphur batteries work (Note: A link has been removed),

The new design, by researchers from the University of Cambridge, overcomes one of the key technical problems hindering the commercial development of lithium-sulphur batteries, by preventing the degradation of the battery caused by the loss of material within it. The results are reported in the journal Advanced Functional Materials.

Working with collaborators at the Beijing Institute of Technology, the Cambridge researchers based in Dr Vasant Kumar’s team in the Department of Materials Science and Metallurgy developed and tested a lightweight nanostructured material which resembles villi, the finger-like protrusions which line the small intestine. In the human body, villi are used to absorb the products of digestion and increase the surface area over which this process can take place.

In the new lithium-sulphur battery, a layer of material with a villi-like structure, made from tiny zinc oxide wires, is placed on the surface of one of the battery’s electrodes. This can trap fragments of the active material when they break off, keeping them electrochemically accessible and allowing the material to be reused.

“It’s a tiny thing, this layer, but it’s important,” said study co-author Dr Paul Coxon from Cambridge’s Department of Materials Science and Metallurgy. “This gets us a long way through the bottleneck which is preventing the development of better batteries.”

A typical lithium-ion battery is made of three separate components: an anode (negative electrode), a cathode (positive electrode) and an electrolyte in the middle. The most common materials for the anode and cathode are graphite and lithium cobalt oxide respectively, which both have layered structures. Positively-charged lithium ions move back and forth from the cathode, through the electrolyte and into the anode.

The crystal structure of the electrode materials determines how much energy can be squeezed into the battery. For example, due to the atomic structure of carbon, it takes six carbon atoms to host each lithium ion (LiC6), limiting the maximum capacity of the battery.

Sulphur and lithium react differently, via a multi-electron transfer mechanism, meaning that elemental sulphur can offer a much higher theoretical capacity, resulting in a lithium-sulphur battery with much higher energy density. However, when the battery discharges, the lithium and sulphur interact and the ring-like sulphur molecules transform into chain-like structures, known as poly-sulphides. As the battery undergoes several charge-discharge cycles, bits of the poly-sulphide can go into the electrolyte, so that over time the battery gradually loses active material.
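A back-of-the-envelope comparison of theoretical specific capacities, using textbook electrode reactions (LiC6 for graphite and Li2S for sulphur) rather than any figures from this paper, shows why sulphur is so attractive:

```python
FARADAY = 96485.0  # coulombs per mole of electrons

def capacity_mah_per_g(electrons_per_formula_unit, molar_mass_g_per_mol):
    """Theoretical specific capacity, Q = n * F / (3.6 * M), in mAh per gram."""
    return electrons_per_formula_unit * FARADAY / (3.6 * molar_mass_g_per_mol)

# Graphite anode: LiC6, one electron stored per six carbon atoms (M = 6 * 12.011 g/mol)
graphite = capacity_mah_per_g(1, 6 * 12.011)   # ~372 mAh/g
# Sulphur cathode: S + 2 Li+ + 2 e- -> Li2S, two electrons per sulphur atom (M = 32.06 g/mol)
sulphur = capacity_mah_per_g(2, 32.06)         # ~1672 mAh/g

print(f"graphite ~{graphite:.0f} mAh/g, sulphur ~{sulphur:.0f} mAh/g, ratio ~{sulphur/graphite:.1f}x")
```

The roughly 4.5-fold capacity advantage, combined with cell-level voltage and packaging assumptions, is what sits behind headline claims of up to five times the energy density.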

The Cambridge researchers have created a functional layer which lies on top of the cathode and fixes the active material to a conductive framework so the active material can be reused. The layer is made up of tiny, one-dimensional zinc oxide nanowires grown on a scaffold. The concept was trialled using commercially-available nickel foam for support. After successful results, the foam was replaced by a lightweight carbon fibre mat to reduce the battery’s overall weight.

“Changing from stiff nickel foam to flexible carbon fibre mat makes the layer mimic the way small intestine works even further,” said study co-author Dr Yingjun Liu.

This functional layer, like the intestinal villi it resembles, has a very high surface area. The material has a very strong chemical bond with the poly-sulphides, allowing the active material to be used for longer, greatly increasing the lifespan of the battery.

“This is the first time a chemically functional layer with a well-organised nano-architecture has been proposed to trap and reuse the dissolved active materials during battery charging and discharging,” said the study’s lead author Teng Zhao, a PhD student from the Department of Materials Science & Metallurgy. “By taking our inspiration from the natural world, we were able to come up with a solution that we hope will accelerate the development of next-generation batteries.”

For the time being, the device is a proof of principle, so commercially-available lithium-sulphur batteries are still some years away. Additionally, while the number of times the battery can be charged and discharged has been improved, it is still not able to go through as many charge cycles as a lithium-ion battery. However, since a lithium-sulphur battery does not need to be charged as often as a lithium-ion battery, it may be the case that the increase in energy density cancels out the lower total number of charge-discharge cycles.

“This is a way of getting around one of those awkward little problems that affects all of us,” said Coxon. “We’re all tied in to our electronic devices – ultimately, we’re just trying to make those devices work better, hopefully making our lives a little bit nicer.”

Here’s a link to and a citation for the paper,

Advanced Lithium–Sulfur Batteries Enabled by a Bio-Inspired Polysulfide Adsorptive Brush by Teng Zhao, Yusheng Ye, Xiaoyu Peng, Giorgio Divitini, Hyun-Kyung Kim, Cheng-Yen Lao, Paul R. Coxon, Kai Xi, Yingjun Liu, Caterina Ducati, Renjie Chen, R. Vasant Kumar. Advanced Functional Materials DOI: 10.1002/adfm.201604069 First published: 26 October 2016

This paper is behind a paywall.

Caption: This is a computer visualization of villi-like battery material. Credit: Teng Zhao

The volatile lithium-ion battery

On the heels of Samsung’s Galaxy Note 7 recall due to fires (see Alex Fitzpatrick’s Sept. 9, 2016 article for Time magazine for a good description of lithium-ion batteries and why they catch fire; see my May 29, 2013 posting on lithium-ion batteries, fires [including the airplane fires], and nanotechnology risk assessments), there’s new research on lithium-ion batteries and fires from China. From an Oct. 21, 2016 news item on Nanotechnology Now,

Dozens of dangerous gases are produced by the batteries found in billions of consumer devices, like smartphones and tablets, according to a new study. The research, published in Nano Energy, identified more than 100 toxic gases released by lithium batteries, including carbon monoxide.

An Oct. 20, 2016 Elsevier Publishing press release (also on EurekAlert), which originated the news item, expands on the theme,

The gases are potentially fatal, they can cause strong irritations to the skin, eyes and nasal passages, and harm the wider environment. The researchers behind the study, from the Institute of NBC Defence and Tsinghua University in China, say many people may be unaware of the dangers of overheating, damaging or using a disreputable charger for their rechargeable devices.

In the new study, the researchers investigated a type of rechargeable battery, known as a “lithium-ion” battery, which is placed in two billion consumer devices every year.

“Nowadays, lithium-ion batteries are being actively promoted by many governments all over the world as a viable energy solution to power everything from electric vehicles to mobile devices. The lithium-ion battery is used by millions of families, so it is imperative that the general public understand the risks behind this energy source,” explained Dr. Jie Sun, lead author and professor at the Institute of NBC Defence.

The dangers of exploding batteries have led manufacturers to recall millions of devices: Dell recalled four million laptops in 2006 and millions of Samsung Galaxy Note 7 devices were recalled this month after reports of battery fires. But the threats posed by toxic gas emissions and the source of these emissions are not well understood.

Dr. Sun and her colleagues identified several factors that can cause an increase in the concentration of the toxic gases emitted. A fully charged battery will release more toxic gases than a battery with 50 percent charge, for example. The chemicals contained in the batteries and their capacity to release charge also affected the concentrations and types of toxic gases released.

Identifying the gases produced and the reasons for their emission gives manufacturers a better understanding of how to reduce toxic emissions and protect the wider public, as lithium-ion batteries are used in a wide range of environments.

“Such dangerous substances, in particular carbon monoxide, have the potential to cause serious harm within a short period of time if they leak inside a small, sealed environment, such as the interior of a car or an airplane compartment,” Dr. Sun said.

Almost 20,000 lithium-ion batteries were heated to the point of combustion in the study, causing most devices to explode and all to emit a range of toxic gases. Batteries can be exposed to such temperature extremes in the real world, for example, if the battery overheats or is damaged in some way.

The researchers now plan to develop this detection technique to improve the safety of lithium-ion batteries so they can be used to power the electric vehicles of the future safely.

“We hope this research will allow the lithium-ion battery industry and electric vehicle sector to continue to expand and develop with a greater understanding of the potential hazards and ways to combat these issues,” Sun concluded.

Here’s a link to and a citation for the paper,

Toxicity, a serious concern of thermal runaway from commercial Li-ion battery by Jie Sun, Jigang Li, Tian Zhou, Kai Yang, Shouping Wei, Na Tang, Nannan Dang, Hong Li, Xinping Qiu, Liquan Chen. Nano Energy, Volume 27, September 2016, Pages 313–319. http://dx.doi.org/10.1016/j.nanoen.2016.06.031

This paper appears to be open access.

Touchless displays with 2D nanosheets and sweat

Swiping touchscreens with your finger has become a dominant means of accessing information in many applications but there is at least one problem associated with this action. From an Oct. 2, 2015 news item on phys.org,

While touchscreens are practical, touchless displays would be even more so. That’s because, despite touchscreens having enabled the smartphone’s advance into our lives and being essential for us to be able to use cash dispensers or ticket machines, they do have certain disadvantages. Touchscreens suffer from mechanical wear over time and are a transmission path for bacteria and viruses. To avoid these problems, scientists at Stuttgart’s Max Planck Institute for Solid State Research and LMU Munich have now developed nanostructures that change their electrical and even their optical properties as soon as a finger comes anywhere near them.

Here’s what a touchless screen looks like when tracking,

Touchless colour change: A nanostructure containing alternating layers of phosphatoantimonate nanosheets and oxide nanoparticles. © Advanced Materials 2015/MPI for Solid State Research

An Oct. 1, 2015 Max Planck Institute press release, which originated the news item, gives technical details,

A touchless display may be able to capitalize on a human trait which is of vital importance, although sometimes unwanted: This is the fact that our body sweats – and is constantly emitting water molecules through tiny pores in the skin. Scientists of the Nanochemistry group led by Bettina Lotsch at the Max Planck Institute for Solid State Research in Stuttgart and the LMU Munich have now been able to visualize the transpiration of a finger with a special moisture sensor which reacts as soon as an object – like an index finger – approaches its surface, without touching it. The increasing humidity is converted into an electrical signal or translated into a colour change, thus enabling it to be measured.

Phosphatoantimonic acid is what enables it to do this. This acid is a crystalline solid at room temperature with a structure made up of antimony, phosphorus, oxygen and hydrogen atoms. “It’s long been known to scientists that this material is able to take up water and swells considerably in the process,” explained Pirmin Ganter, doctoral student at the Max Planck Institute for Solid State Research and the Chemistry Department at LMU Munich. This water uptake also changes the properties of the material. For instance, its electrical conductivity increases as the number of stored water molecules rises. This is what enables it to serve as a measure of ambient moisture.

A sandwich nanomaterial structure exposed to moisture also changes its colour

However, the scientists aren’t so interested in developing a new moisture sensor. What they really want is to use it in touchless displays. “Because these sensors react in a very local manner to any increase in moisture, it is quite conceivable that this sort of material with moisture-dependent properties could also be used for touchless displays and monitors,” said Ganter. Touchless screens of this kind would require nothing more than a finger to get near the display to change their electrical or optical properties – and with them the input signal – at a specific point on the display.

Taking phosphatoantimonate nanosheets as their basis, the Stuttgart scientists then developed a photonic nanostructure which reacts to the moisture by changing colour. “If this was built into a monitor, the users would then receive visible feedback to  their finger motion” explained Katalin Szendrei, also a doctoral student in Bettina Lotsch’s group. To this end, the scientists created a multilayer sandwich material with alternating layers of ultrathin phosphatoantimonate nanosheets and silicon dioxide (SiO2) or titanium dioxide nanoparticles (TiO2). Comprising more than ten layers, the stack ultimately reached a height of little more than one millionth of a metre.

For one thing, the colour of the sandwich material can be set via the thickness of the layers. And for another, the colour of the sandwich changes if the scientists increase the relative humidity in the immediate surroundings of the material, for instance by moving a finger towards the screen. “The reason for this lies in the storage of water molecules between the phosphatoantimonate layers, which makes the layers swell considerably,” explained Katalin Szendrei. “A change in the thickness of the layers in this process is accompanied by a change in the colour of the sensor – produced in much the same way as the colours of a butterfly wing or of mother-of-pearl.”
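To put rough numbers on that mechanism: for a periodic two-layer stack, the first-order reflection peak at normal incidence sits near λ = 2(n1·d1 + n2·d2), so thicker, water-swollen layers mean a longer reflected wavelength. The short calculation below uses guessed layer thicknesses and refractive indices (none of them taken from the paper) simply to show how swelling can shift the reflected colour from blue towards red.

```python
# Rough Bragg-stack estimate of the colour shift on swelling (all values illustrative).
# First-order reflection peak of a periodic two-layer stack at normal incidence:
#   lambda_max = 2 * (n1*d1 + n2*d2)

def bragg_peak_nm(n1, d1_nm, n2, d2_nm):
    return 2 * (n1 * d1_nm + n2 * d2_nm)

# Guessed values: nanosheet layer (n1, d1) and oxide nanoparticle layer (n2, d2)
dry     = bragg_peak_nm(n1=1.65, d1_nm=70,  n2=2.0, d2_nm=55)   # ~451 nm, blue
swollen = bragg_peak_nm(n1=1.65, d1_nm=130, n2=2.0, d2_nm=55)   # ~649 nm, red

print(f"dry stack reflects near {dry:.0f} nm, swollen stack near {swollen:.0f} nm")
# Swelling of the water-storing layers pushes the reflection peak from blue toward red.
# (Water uptake also lowers the layer's refractive index a little; ignored here.)
```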

The material reacts to the humidity change within a few milliseconds

This is a property that is fundamentally well known and characteristic of so-called photonic crystals. But scientists had never before observed such a large colour change as they now have in the lab in Stuttgart. “The colour of the nanostructure turns from blue to red when a finger gets near, for example. In this way, the colour can be tuned through the whole of the visible spectrum depending on the amount of water vapour taken up,” stressed Bettina Lotsch.

The scientists’ new approach is not only captivating because of the striking colour change. What’s also important is the fact that the material reacts to the change in humidity within a few milliseconds – faster than the blink of an eye. Previously reported materials normally took several seconds or more to respond. That is much too slow for practical applications. And there’s another thing that other materials couldn’t always do: The sandwich structure consisting of phosphatoantimonate nanosheets and oxide nanoparticles is highly stable from a chemical perspective and responds selectively to water vapour.

A layer protecting against chemical influences has to let moisture through

The scientists can imagine their materials being used in much more than just future generations of smartphones, tablets or notebooks. “Ultimately, we could see touchless displays also being deployed in many places where people currently have to touch monitors to navigate,” said Bettina Lotsch. For instance in cash dispensers or ticket machines, or even at the weighing scales in the supermarket’s vegetable aisle. Displays in public places that are used by many different people would have distinct hygiene benefits if they were touchless.

But before we see them being used in such places, the scientists have a few more challenges to overcome. It’s important, for example, that the nanostructures can be produced economically. To minimize wear, the structures still need to be coated with a protective layer if they’re going to be used in anything like a display. And that, again, has to meet not one but two different requirements: It must protect the moisture-sensitive layers against chemical and mechanical influences. And it must, of course, let the moisture pass through. But the Stuttgart scientists have an idea for how to achieve that already. An idea they are currently starting to put into practice with an additional cooperation partner on board.

Dexter Johnson’s Oct. 2, 2015 posting on his Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website) provides some additional context for this research (Note: A link has been removed),

In a world where the “swipe” has become a dominant computer interface method along with moving and clicking the mouse, the question becomes what’s next? For researchers at Stuttgart’s Max Planck Institute for Solid State Research and LMU Munich, Germany, the answer continues to be a swipe, but one in which you don’t actually need to touch the screen with your finger. Researchers call these no-contact computer screens touchless positioning interfaces (TPI).

Here’s a link to and a citation for the paper,

Touchless Optical Finger Motion Tracking Based on 2D Nanosheets with Giant Moisture Responsiveness by Katalin Szendrei, Pirmin Ganter, Olalla Sànchez-Sobrado, Roland Eger, Alexander Kuhn, and Bettina V. Lotsch. Advanced Materials DOI: 10.1002/adma.201503463 Article first published online: 22 SEP 2015

© 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim

This paper is behind a paywall.

Café Scientifique (Vancouver, Canada) makes a ‘happy’ change: new speaker for April 28, 2015

For the first time since I started posting about Vancouver’s Café Scientifique there’s been a last-minute change of speakers. It’s due to an addition to Dr. Kramer’s family. Congratulations!

So, Tuesday, April 28, 2015’s Café Scientifique, held in the back room of The Railway Club (2nd floor of 579 Dunsmuir St. [at Seymour St.]), will be hosting a talk from a different speaker and on a different topic,

Ph.D. candidate and Vanier Scholar Kostadin Kushlev, from the Department of Psychology at UBC, will be presenting his exciting research. Details are as follows:

Always Connected: How Smartphones May be Disconnecting Us From the People Around Us.

Smartphones have transformed where and how we access information and connect with our family and friends. But how might these powerful pocket computers be affecting how and when we interact with others in person? In this talk, I will present recent data from our lab suggesting that smartphones can compromise how connected we feel to close others, peers, and strangers. Parents spending time with their children felt more distracted and less socially connected when they used their phones a lot. Peers waiting together for an appointment connected with each other less and felt less happy when they had access to their phones as compared to when they did not. And, people looking for directions trusted members of their community less when they relied on their phones for directions rather than on the kindness of strangers. These findings highlight some of the perils of being constantly connected for our nonvirtual social lives and for the social fabric of society more generally.

On looking up the speaker online, I found that the main focus of his research is happiness. Here’s more from the University of British Columbia’s (UBC) Graduate and PostGraduate webpage for Kostadin Kushlev,

 Research topic: Happiness and well-being
Research group: Social Cognition and Emotion Lab
Research location: UBC Vancouver, Kenny Building, 2136 West Mall
Research supervisor: Elizabeth Dunn

Research description
My research focuses on the emotional experience of people. The topics that I am currently investigating range from what gives (or takes away from) people’s experience of meaning in life to how people react to shame and guilt, and to what extent new technologies introduce stress and anxiety in our lives.

Home town: Madan
Country: Bulgaria

Given that the United Nations’ 2015 World Happiness Report (co-authored by UBC professor emeritus John Helliwell) was released on April 23, 2015, the same day that the Museum of Vancouver’s The Happy Show (Stefan Sagmeister: The Happy Show) opened, Kostadin Kushlev seems like a ‘happy’ choice for a substitute speaker just days later on April 28, 2015, especially since the original topic was ‘pain’.

Glasswing butterflies teach us about reflection

Contrary to other transparent surfaces, the wings of the glasswing butterfly (Greta Oto) hardly reflect any light. Lenses or displays of mobiles might profit from the investigation of this phenomenon. (Photo: Radwanul Hasan Siddique, KIT)

I wouldn’t really have believed it. Other than glass, I’ve never seen anything in nature that’s as transparent and distortion-free as this butterfly’s wings.

An April 22, 2015 news item on ScienceDaily provides more information about the butterfly,

The effect is known from the smartphone: sunlight is reflected by the display and hardly anything can be seen. In contrast to this, the glasswing butterfly hardly reflects any light in spite of its transparent wings. As a result, it is difficult for predatory birds to track the butterfly in flight. Researchers of KIT under the direction of Hendrik Hölscher found that irregular nanostructures on the surface of the butterfly wing cause the low reflection. In theoretical experiments, they succeeded in reproducing the effect, which opens up fascinating application options, e.g. for displays of mobile phones or laptops.

An April 22, 2015 Karlsruhe Institute of Technology (KIT) press release (also on EurekAlert), which originated the news item, explains the scientific interest,

Transparent materials such as glass always reflect part of the incident light. Some animals with transparent surfaces, such as the moth with its eyes, succeed in keeping the reflections small, but only when the light hits the surface perpendicularly. The wings of the glasswing butterfly, which lives mainly in Central America, however, also have a very low reflection when viewed at higher angles. Depending on the viewing angle, specular reflection varies between two and five percent. For comparison: depending on the viewing angle, a flat glass pane reflects between eight and 100 percent, i.e. many times more than the butterfly wing. Interestingly, the butterfly wing not only exhibits low reflection of the light spectrum visible to humans, but also suppresses the infrared and ultraviolet radiation that can be perceived by animals. This is important to the survival of the butterfly.
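Those glass figures are just the Fresnel equations at work. As a point of comparison (standard textbook optics, not part of the KIT study), here is a short calculation of how much light an ordinary glass pane reflects as the viewing angle increases; the butterfly wing’s two to five percent looks remarkable next to it.

```python
# Unpolarized Fresnel reflectance of a glass surface (n ~ 1.5) versus angle of incidence.
# Standard textbook optics, shown only for comparison with the glasswing's 2-5%.
import math

def fresnel_unpolarized(theta_i_deg, n=1.5):
    ti = math.radians(theta_i_deg)
    tt = math.asin(math.sin(ti) / n)       # Snell's law, air into glass
    rs = (math.cos(ti) - n * math.cos(tt)) / (math.cos(ti) + n * math.cos(tt))
    rp = (n * math.cos(ti) - math.cos(tt)) / (n * math.cos(ti) + math.cos(tt))
    return 0.5 * (rs ** 2 + rp ** 2)

for angle in (0, 30, 45, 60, 75, 85):
    r = fresnel_unpolarized(angle)
    plate = 2 * r / (1 + r)                # both surfaces of a glass pane, no interference
    print(f"{angle:2d} deg: single surface {100*r:4.1f}%, glass pane {100*plate:4.1f}%")
# A pane reflects roughly 8% at normal incidence and climbs toward 100% near grazing angles,
# whereas the butterfly wing stays at a few percent across viewing angles.
```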

To investigate this previously unstudied phenomenon, the scientists examined glasswings by scanning electron microscopy. Earlier studies revealed that regular pillar-like nanostructures are responsible for the low reflections of other animals. The scientists now also found nanopillars on the butterfly wings. In contrast to previous findings, however, they are arranged irregularly and have random heights. The typical height of the pillars varies between 400 and 600 nanometers, while their spacing ranges between 100 and 140 nanometers, about one thousandth of the width of a human hair.

In simulations, the researchers mathematically modeled this irregularity of the nanopillars in height and arrangement. They found that the calculated reflection matched the measured reflection across viewing angles, demonstrating that the low reflection at varying viewing angles is caused by the irregularity of the nanopillars. Hölscher’s doctoral student Radwanul Hasan Siddique, who discovered this effect, considers the glasswing butterfly a fascinating animal: “Not only optically with its transparent wings, but also scientifically. In contrast to other natural phenomena, where regularity is of top priority, the glasswing butterfly uses an apparent chaos to achieve effects that are also fascinating for us humans.”
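This is not the authors’ simulation, but the underlying idea can be illustrated with a one-dimensional toy model: pillars of assorted heights act like a gradual blend from air into the wing material, and a gradual change of refractive index reflects far less light than an abrupt surface. The sketch below uses a standard thin-film transfer-matrix calculation with a guessed wing index of about 1.56 (chitin) and a 500 nm transition region; all numbers are illustrative, not taken from the paper.

```python
# Toy model (not the authors' simulation): a gradual air-to-wing index transition,
# standing in for nanopillars of random height, versus an abrupt interface.
# Normal-incidence thin-film transfer-matrix (characteristic matrix) method.
import numpy as np

def reflectance(layer_n, layer_d_nm, wavelength_nm, n_in=1.0, n_sub=1.56):
    """Reflectance of a stack of homogeneous layers between air and the substrate."""
    M = np.eye(2, dtype=complex)
    for n, d in zip(layer_n, layer_d_nm):
        delta = 2 * np.pi * n * d / wavelength_nm
        M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                          [1j * n * np.sin(delta), np.cos(delta)]])
    B, C = M @ np.array([1.0, n_sub])
    r = (n_in * B - C) / (n_in * B + C)
    return abs(r) ** 2

wl = 550.0                                        # green light, in nm
abrupt = reflectance([], [], wl)                  # bare air/chitin interface, about 4.8%

steps = 50                                        # graded region sliced into thin layers
graded_n = np.linspace(1.0, 1.56, steps + 2)[1:-1]
graded_d = np.full(steps, 500.0 / steps)          # 500 nm transition, roughly pillar height
graded = reflectance(graded_n, graded_d, wl)

print(f"abrupt interface: {100*abrupt:.2f}%   graded transition: {100*graded:.4f}%")
# The graded profile reflects well under one percent here, which is the kind of
# suppression the randomly sized nanopillars provide over many wavelengths and angles.
```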

The findings open up a range of applications wherever low-reflection surfaces are needed, for lenses or displays of mobile phones, for instance. Apart from theoretical studies of the phenomenon, the infrastructure of the Institute of Microstructure Technology also allows for practical implementation. First application tests are in the conception phase at the moment. Prototype experiments, however, already revealed that this type of surface coating also has a water-repellent and self-cleaning effect.

Here’s a link to and a citation for the paper,

The role of random nanostructures for the omnidirectional anti-reflection properties of the glasswing butterfly by Radwanul Hasan Siddique, Guillaume Gomard, & Hendrik Hölscher. Nature Communications 6, Article number: 6909 doi:10.1038/ncomms7909 Published 22 April 2015

The paper is behind a paywall but there is a free preview via ReadCube Access.