Tag Archives: Eye

Do artists see colour at the nanoscale? It would seem so

I’ve wondered how Japanese artists of the 16th to 18th centuries were able to beat gold down to the nanoscale for application to screens. How could they see what they were doing? I may have an answer at last. According to some new research, it seems that the human eye can detect colour at the nanoscale.

Before getting to the research, here’s the Namban screen story.

Japanese Namban Screen. ca. 1550. In Portugal-Japão: 450 anos de memórias. Embaixada de Portugal no Japão, 1993. [downloaded from http://www.indiana.edu/~liblilly/digital/exhibitions/exhibits/show/portuguese-speaking-diaspora/china-and-japan]

This image is from an Indiana University Bloomington website featuring a page titled, Portuguese-Speaking Diaspora,

A detail from one of four large folding screens on display in the Museu de Arte Antiga in Lisbon. Namban was the word used to refer to Portuguese traders who, in this scene, are dressed in colorful pantaloons and accompanied by African slaves. Jesuits appear in black robes, while the Japanese observe the newcomers from inside their home. The screen materials included gold-covered copper and paper, tempera paint, silk, and lacquer.

Copyright © 2015 The Trustees of Indiana University

Getting back to the Japanese artists, here’s how their work was described in a July 2, 2014 Springer press release on EurekAlert,

Ancient Japanese gold leaf artists were truly masters of their craft. An analysis of six ancient Namban paper screens show that these artifacts are gilded with gold leaf that was hand-beaten to the nanometer scale. [emphasis mine] Study leader Sofia Pessanha of the Atomic Physics Center of the University of Lisbon in Portugal believes that the X-ray fluorescence technique her team used in the analysis could also be used to date other artworks without causing any damage to them. The results are published in Springer’s journal Applied Physics A: Materials Science & Processing.

Gold leaf refers to a very thin sheet made from a combination of gold and other metals. It has almost no weight and can only be handled by specially designed tools. Even though the ancient Egyptians were probably the first to gild artwork with it, the Japanese have long been credited as being able to produce the thinnest gold leaf in the world. In Japanese traditional painting, decorating with gold leaf is named Kin-haku, and the finest examples of this craft are the Namban folding screens, or byobu. These were made during the late Momoyama (around 1573 to 1603) and early Edo (around 1603 to 1868) periods.

Pessanha’s team examined six screens that are currently either part of a museum collection or in a private collection in Portugal. Four screens belong to the Momoyama period, and two others were decorated during the early Edo period. The researchers used various X-ray fluorescence spectroscopy techniques to test the thickness and characteristics of the gold layers. The method is completely non-invasive, no samples needed to be taken, and therefore the artwork was not damaged in any way. Also, the apparatus needed to perform these tests is portable, so the analysis can be done outside of a laboratory.

The gilding was evaluated by taking the attenuation or weakening of the different characteristic lines of gold leaf layers into account. The methodology was tested to be suitable for high grade gold alloys with a maximum of 5 percent influence of silver, which is considered negligible.
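Out of curiosity, here’s a rough sense of how attenuation can translate into a thickness number. The sketch below is my own back-of-the-envelope illustration of inverting the Beer-Lambert law for a gold layer; the attenuation coefficient, take-off angle, and intensity values are hypothetical placeholders, and the paper’s actual methodology (working with ratios of the gold characteristic lines) is considerably more involved,

```python
import math

# All numbers below are illustrative placeholders, not calibration data
# from the paper.
MU_AU = 128.0               # mass attenuation coefficient of gold at the
                            # measured line's energy, cm^2/g (hypothetical)
RHO_AU = 19.3               # density of gold, g/cm^3
TAKEOFF = math.radians(45)  # detector take-off angle (hypothetical)

def leaf_thickness_nm(i_measured: float, i_unattenuated: float) -> float:
    """Invert the Beer-Lambert law, I = I0 * exp(-mu * rho * t / sin(angle)),
    to estimate the thickness t of the attenuating gold layer in nanometres."""
    t_cm = (-math.sin(TAKEOFF) * math.log(i_measured / i_unattenuated)
            / (MU_AU * RHO_AU))
    return t_cm * 1e7  # 1 cm = 1e7 nm

# A line attenuated to 97% of its unattenuated intensity implies a leaf
# roughly 90 nm thick with these placeholder values.
print(f"{leaf_thickness_nm(0.97, 1.0):.0f} nm")
```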

The two screens from the early Edo period were initially thought to be of the same age. However, Pessanha’s team found that gold leaf on a screen kept at Museu Oriente in Lisbon was thinner, hence was made more recently. This is in line with the continued development of the gold beating techniques carried out in an effort to obtain ever thinner gold leaf.

So, how did these artists beat gold leaf down to the nanoscale and then use the sheets in their art work? This July 10, 2015 news item on Azonano may help to answer that question,

The human eye is an amazing instrument and can accurately distinguish between the tiniest, most subtle differences in color. Where human vision excels in one area, it seems to fall short in others, such as perceiving minuscule details because of the natural limitations of human optics.

In a paper published today in The Optical Society’s new, high-impact journal Optica, a research team from the University of Stuttgart, Germany and the University of Eastern Finland, Joensuu, Finland, has harnessed the human eye’s color-sensing strengths to give the eye the ability to distinguish between objects that differ in thickness by no more than a few nanometers — about the thickness of a cell membrane or an individual virus.

A July 9, 2015 Optical Society news release (also on EurekAlert), which originated the news item, provides more details,

This ability to go beyond the diffraction limit of the human eye was demonstrated by teaching a small group of volunteers to identify the remarkably subtle color differences in light that has passed through thin films of titanium dioxide under highly controlled and precise lighting conditions. The result was a remarkably consistent series of tests that revealed a hitherto untapped potential, one that rivals sophisticated optics tools that can measure such minute thicknesses, such as ellipsometry.

“We were able to demonstrate that the unaided human eye is able to determine the thickness of a thin film — materials only a few nanometers thick — by simply observing the color it presents under specific lighting conditions,” said Sandy Peterhänsel, University of Stuttgart, Germany and principal author on the paper. The actual testing was conducted at the University of Eastern Finland.

The Color and Thickness of Thin Films

Thin films are essential for a variety of commercial and manufacturing applications, including anti-reflective coatings on solar panels. These films can be as thin as a few to tens of nanometers. The thin films used in this experiment were created by applying layer after layer of single atoms on a surface. Though highly accurate, this is a time-consuming procedure, and other techniques like vapor deposition are used in industry.

The optical properties of thin films mean that when light interacts with their surfaces it produces a wide range of colors. This is the same phenomenon that produces scintillating colors in soap bubbles and oil films on water.

The specific colors produced by this process depend strongly on the composition of the material, its thickness, and the properties of the incoming light. This high sensitivity to both the material and thickness has sometimes been used by skilled engineers to quickly estimate the thickness of films down to a level of approximately 10-20 nanometers.
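For the technically inclined, here’s a small sketch of why the colour is such a sensitive thickness probe. It uses the textbook Airy formula for a single film at normal incidence; the refractive indices are rough placeholders for titanium dioxide on glass and dispersion is ignored, so treat it as an illustration rather than the paper’s model,

```python
import cmath
import math

def reflectance(wavelength_nm: float, d_nm: float,
                n_air: float = 1.0, n_film: float = 2.4,
                n_sub: float = 1.5) -> float:
    """Airy reflectance of a single dielectric film on a substrate at
    normal incidence (n_film ~ 2.4 stands in for titanium dioxide,
    n_sub ~ 1.5 for glass; dispersion is ignored)."""
    r12 = (n_air - n_film) / (n_air + n_film)   # air/film interface
    r23 = (n_film - n_sub) / (n_film + n_sub)   # film/substrate interface
    beta = 2 * math.pi * n_film * d_nm / wavelength_nm  # phase thickness
    r = ((r12 + r23 * cmath.exp(-2j * beta))
         / (1 + r12 * r23 * cmath.exp(-2j * beta)))
    return abs(r) ** 2

# Compare two films differing by only 3 nm: the reflectance peak in the
# visible band shifts by about 10 nm.
for d in (150, 153):
    spectrum = [reflectance(w, d) for w in range(400, 701, 10)]
    peak_nm = 400 + 10 * spectrum.index(max(spectrum))
    print(f"d = {d} nm: visible reflectance peaks near {peak_nm} nm")
```

With these placeholder numbers, a 3 nm change in film thickness shifts the visible reflectance peak by roughly 10 nm, the kind of hue change a trained eye can apparently pick up.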

This observation inspired the research team to test the limits of human vision to see how small of a variation could be detected under ideal conditions.

“Although the spatial resolving power of the human eye is orders of magnitude too weak to directly characterize film thicknesses, the interference colors are well known to be very sensitive to variations in the film,” said Peterhänsel.

Experimental Setup

The setup for this experiment was remarkably simple. A series of thin films of titanium dioxide were manufactured one layer at a time by atomic deposition. While time-consuming, this method enabled the researchers to carefully control the thickness of the samples to test the limitations of how small a variation the research subjects could identify.

The samples were then placed on an LCD monitor that was set to display a pure white color, with the exception of a colored reference area that could be calibrated to match the apparent surface colors of the thin films with various thicknesses.

The color of the reference field was then changed by the test subject until it perfectly matched the thin-film sample: correctly identifying the color meant they also correctly determined its thickness. This could be done in as little as two minutes, and for some samples and test subjects the estimated thickness differed by only one to three nanometers from the actual value measured by conventional means. This level of precision is far beyond normal human vision.
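In other words, matching the colour amounts to looking up the thickness. Here’s a toy version of that inverse step, reusing the simplified reflectance model from the sketch above plus a crude three-band colour signature; the real experiment relied on the observers’ colour vision and conventional metrology, not on a lookup table like this,

```python
import cmath
import math

def reflectance(wl: float, d: float,
                n_air=1.0, n_film=2.4, n_sub=1.5) -> float:
    """Same simplified single-film Airy reflectance as in the sketch above."""
    r12 = (n_air - n_film) / (n_air + n_film)
    r23 = (n_film - n_sub) / (n_film + n_sub)
    beta = 2 * math.pi * n_film * d / wl
    r = ((r12 + r23 * cmath.exp(-2j * beta))
         / (1 + r12 * r23 * cmath.exp(-2j * beta)))
    return abs(r) ** 2

def colour(d: float) -> tuple:
    """Crude colour signature: reflectance sampled in blue/green/red bands."""
    return tuple(reflectance(wl, d) for wl in (450.0, 550.0, 650.0))

# Precompute predicted colours for candidate thicknesses of 100-200 nm.
table = {d: colour(d) for d in range(100, 201)}

def thickness_from_match(matched: tuple) -> int:
    """Return the candidate thickness whose predicted colour is closest
    (least-squares) to the colour the observer dialled in."""
    return min(table, key=lambda d: sum((a - b) ** 2
                                        for a, b in zip(table[d], matched)))

print(thickness_from_match(colour(153)))  # -> 153
```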

Compared to traditional automated methods of determining the thickness of a thin film, which can take five to ten minutes per sample using some techniques, the human eye performed very favorably.

Since human eyes tire very easily, this process is unlikely to replace automated methods. It can, however, serve as a quick check by an experienced technician. “The intention of our study never was solely to compare the human color vision to much more sophisticated methods,” noted Peterhänsel. “Finding out how precise this approach can be was the main motivation for our work.”

The researchers speculate that it may be possible to detect even finer variations if other control factors are put in place. “People often underestimate human senses and their value in engineering and science. This experiment demonstrates that our natural born vision can achieve exceptional tasks that we normally would only assign to expensive and sophisticated machinery,” concludes Peterhänsel.

Here’s a link to and a citation for the paper,

Human color vision provides nanoscale accuracy in thin-film thickness characterization by Sandy Peterhänsel, Hannu Laamanen, Joonas Lehtolahti, Markku Kuittinen, Wolfgang Osten, and Jani Tervo. Optica Vol. 2, Issue 7, pp. 627-630 (2015). DOI: 10.1364/OPTICA.2.000627

This article appears to be open access.

It would seem that the artists creating the Namban screens exploited the ability to see at the nanoscale, which leads me to wonder how many people who work with color/colour all the time, such as visual artists, interior designers, graphic designers, printers, and more, can perceive at the nanoscale. These German and Finnish researchers may want to work with some of these professionals in their next study.

Almost Human (tv series), smartphones, and anxieties about life/nonlife

The US-based Fox Broadcasting Company is set to premiere a new futuristic television series, Almost Human, over two nights, Nov. 17 and 18, 2013, for US and Canadian viewers. Here’s a description of the premise from its Wikipedia essay (Note: Links have been removed),

The series is set thirty-five years in the future when humans in the Los Angeles Police Department are paired up with lifelike androids; a detective who has a dislike for robots partners with an android capable of emotion.

One of the showrunners, Naren Shankar, seems to have also been functioning as both a science consultant and a crime writing consultant, in addition to his other duties. From a Sept. 4, 2013 article by Lisa Tsering for Indiawest.com,

FOX is the latest television network to utilize the formidable talents of Naren Shankar, an Indian American writer and producer best known to fans for his work on “Star Trek: Deep Space Nine,” “Star Trek: Voyager” and “Star Trek: The Next Generation” as well as “Farscape,” the recently cancelled ABC series “Zero Hour” and “The Outer Limits.”

Set 35 years in the future, “Almost Human” stars Karl Urban and Michael Ealy as a crimefighting duo of a cop who is part-machine and a robot who is part-human. [emphasis mine]

“We are extrapolating the things we see today into the near future,” he explained. For example, the show will comment on the pervasiveness of location software, he said. “There will also be issues of technology such as medical ethics, or privacy; or how technology enables the rich but not the poor, who can’t afford it.”

Speaking at Comic-Con July 20 [2013], Shankar told media there, “Joel [J.H. Wyman] was looking for a collaboration with someone who had come from the crime world, and I had worked on ‘CSI’ for eight years.

“This is like coming back to my first love, since for many years I had done science fiction. It’s a great opportunity to get away from dismembered corpses and autopsy scenes.”

There’s plenty of drama — in the new series, the year is 2048, and police officer John Kennex (Karl Urban, “Dr. Bones” from the new “Star Trek” films) is trying to bounce back from one of the most catastrophic attacks ever made against the police department. Kennex wakes up from a 17-month coma and can’t remember much, except that his partner was killed, his girlfriend left him, and one of his legs has been amputated and is now outfitted with a high-tech synthetic appendage. According to police department policy, every cop must partner with a robot, so Kennex is paired with Dorian (Ealy), an android with an unusual glitch that makes it have human emotions.

Shankar took an unusual path into television. He started college at age 16 and attended Cornell University, where he earned a B.Sc., an M.S., and a Ph.D. in engineering physics and electrical engineering, and was a member of the elite Kappa Alpha Society. He decided he didn’t want to work as a scientist and moved to Los Angeles to try to become a writer.

Shankar is eager to move in a new direction with “Almost Human,” which he says comes at the right time. “People are so technologically sophisticated now that maybe the audience is ready for a show like this,” he told India-West.

I am particularly intrigued by the ‘man who’s part machine and the machine that’s part human’ concept (something I’ve called machine/flesh in previous postings such as my May 9, 2012 posting titled ‘Everything becomes part machine’). Given they had an engineer on the team (albeit one with lots of crime writing experience), I was looking forward to seeing how they would integrate this concept, along with some of the more recent scientific work being done on prosthetics and robots, into the stories. Sadly, only days after Tsering’s article was published, Shankar parted ways with Almost Human, according to the Sept. 10, 2013 posting on the Almost Human blog,

So this was supposed to be the week that I posted a profile of Naren Shankar, for whom I have developed a full-on crush–I mean, he has a PhD in Electrical Engineering from Cornell, he was hired by Gene Roddenberry to be science consultant on TNG, he was saying all sorts of great things about how he wanted to present the future in AH…aaaand he quit as co-showrunner yesterday, citing “creative differences.” That leaves Wyman as sole showrunner, with no plans to replace Shankar.

I’d like to base some of my comments on the previews; unfortunately, Fox Broadcasting, in its infinite wisdom, has decided to block Canadians from watching Almost Human previews online. (Could someone please explain why? I mean, Canadians will be tuning in to watch, or record for future viewing, the series premiere on the 17th and 18th of November 2013 just like our US neighbours, so why can’t we watch the previews online?)

Getting back to machine/flesh (humans with prosthetics) and life/nonlife (an android with feelings), it seems that Almost Human (as did the latest version of Battlestar Galactica, from 2004-2009) may be giving a popular culture voice to some contemporary anxieties about the boundary, or lack thereof, between humans and machines and between life and nonlife. I’ve touched on this topic many times, both within and without the popular culture context. Probably one of my more comprehensive essays on machine/flesh is Eye, arm, & leg prostheses, cyborgs, eyeborgs, Deus Ex, and ableism from August 30, 2011, which includes this quote from a still earlier posting on this topic,

Here’s an excerpt from my Feb. 2, 2010 posting which reinforces what Gregor [Gregor Wolbring, University of Calgary] is saying,

This influx of R&D cash, combined with breakthroughs in materials science and processor speed, has had a striking visual and social result: an emblem of hurt and loss has become a paradigm of the sleek, modern, and powerful. Which is why Michael Bailey, a 24-year-old student in Duluth, Georgia, is looking forward to the day when he can amputate the last two fingers on his left hand.

“I don’t think I would have said this if it had never happened,” says Bailey, referring to the accident that tore off his pinkie, ring, and middle fingers. “But I told Touch Bionics I’d cut the rest of my hand off if I could make all five of my fingers robotic.” [originally excerpted from Paul Hochman’s Feb. 1, 2010 article, Bionic Legs, i-Limbs, and Other Super Human Prostheses You’ll Envy for Fast Company]

Here’s something else from the Hochman article,

But Bailey is most surprised by his own reaction. “When I’m wearing it, I do feel different: I feel stronger. As weird as that sounds, having a piece of machinery incorporated into your body, as a part of you, well, it makes you feel above human. [emphasis mine] It’s a very powerful thing.”

Bailey isn’t ‘almost human’, he’s ‘above human’. As Hochman points out repeatedly throughout his article, this sentiment is not confined to Bailey. My guess is that Kennex (Karl Urban’s character) in Almost Human doesn’t echo Bailey’s sentiments and, instead, feels he’s not quite human, while the android, Dorian (Michael Ealy’s character), struggles with his feelings in a human way that clashes with Kennex’s perspective on what is human and what is not (or what might be called the boundary between life and nonlife).

Into this mix, one could add the rising anxiety around ‘intelligent’ machines, present in real life as well as in fiction, as per this November 12 (?), 2013 article by Ian Barker for Beta News,

The rise of intelligent machines has long been fertile ground for science fiction writers, but a new report by technology research specialists Gartner suggests that the future is closer than we think.

“Smartphones are becoming smarter, and will be smarter than you by 2017,” says Carolina Milanesi, research vice president at Gartner. “If there is heavy traffic, it will wake you up early for a meeting with your boss, or simply send an apology if it is a meeting with your colleague. The smartphone will gather contextual information from its calendar, its sensors, the user’s location and personal data”.

Your smartphone will be able to predict your next move or your next purchase based on what it knows about you. This will be made possible by gathering data using a technique called “cognizant computing”.

Gartner analysts will be discussing the future of smart devices at the Gartner Symposium/ITxpo 2013 in Barcelona from November 10-14 [2013].

The Gartner Symposium/ITxpo in Barcelona is ending today (Nov. 14, 2013) but should you be curious about it, you can go here to learn more.

This notion that machines might (or will) get smarter or more powerful than humans (or wizards) is explored by Will.i.am (of the Black Eyed Peas) and futurist Brian David Johnson in their upcoming comic book, Wizards and Robots (mentioned in my Oct. 6, 2013 posting). The notion of machines or technology overtaking human life is also being discussed at the University of Cambridge, where there’s talk of founding a Centre for the Study of Existential Risk (from my Nov. 26, 2012 posting).

The idea that robots of one kind or another (e.g. nanobots eating up the world and leaving grey goo, Cylons in both versions of Battlestar Galactica trying to exterminate humans, etc.) will take over the world and find humans unnecessary isn’t especially new in works of fiction. It’s not always mentioned directly, but the underlying anxiety often has to do with intelligence and concerns over an ‘explosion of intelligence’. The question it raises, ‘what if our machines/creations become more intelligent than humans?’, has been described as existential risk. According to a Nov. 25, 2012 article by Sylvia Hui for Huffington Post, a group of eminent philosophers and scientists at the University of Cambridge are proposing to found a Centre for the Study of Existential Risk,

Could computers become cleverer than humans and take over the world? Or is that just the stuff of science fiction?

Philosophers and scientists at Britain’s Cambridge University think the question deserves serious study. A proposed Center for the Study of Existential Risk will bring together experts to consider the ways in which super intelligent technology, including artificial intelligence, could “threaten our own existence,” the institution said Sunday.

“In the case of artificial intelligence, it seems a reasonable prediction that some time in this or the next century intelligence will escape from the constraints of biology,” Cambridge philosophy professor Huw Price said.

When that happens, “we’re no longer the smartest things around,” he said, and will risk being at the mercy of “machines that are not malicious, but machines whose interests don’t include us.”

Our emerging technologies give rise to questions about what constitutes life and where humans might fit in. For example,

  • are sufficiently advanced machines a new form of life?
  • what does it mean when human bodies are partially integrated at the neural level with machinery?
  • what happens when machines have feelings?
  • etc.

While this doesn’t exactly fit into my theme of life/nonlife or machine/flesh, this does highlight how some popular culture efforts are attempting to integrate real science into the storytelling. Here’s an excerpt from an interview with Cosima Herter, the science consultant and namesake/model for one of the characters on Orphan Black (from the March 29, 2013 posting on the space.ca blog),

Cosima Herter is Orphan Black’s Science Consultant, and the inspiration for her namesake character in the series. In real life, Real Cosima is a Ph.D. student in the History of Science, Technology, and Medicine Program at the University of Minnesota, working on the History and Philosophy of Biology. Hive interns Billi Knight & Peter Rowley spoke with her about her role on the show and the science behind it…

Q: Describe your role in the making of Orphan Black.

A: I’m a resource for the biology, particularly insofar as evolutionary biology is concerned. I study the history and the philosophy of biology, so I do offer some suggestions and some creative ideas, but also help correct some of the misconceptions about science. I offer different angles and alternatives to look at the way biological science is represented, so (it’s) not reduced to your stereotypical tropes about evolutionary biology and cloning, but also to provide some accuracy for the scripts.

See more at: http://www.space.ca/article/Orphan-Black-science-consultant

For anyone not familiar with the series, from the Wikipedia essay (Note: Links have been removed),

Orphan Black is a Canadian science fiction television series starring Tatiana Maslany as several identical women who are revealed to be clones.