Monthly Archives: December 2017

FrogHeart’s good-bye to 2017 and hello to 2018

This is going to be relatively short and sweet(ish). Starting with the 2017 review:

Nano blogosphere and the Canadian blogosphere

From my perspective, there’s been a change taking place in the nano blogosphere over the last few years. There are fewer blogs, along with fewer postings from those who still blog. Interestingly, some blogs are becoming more generalized. Foresight Institute’s Nanodot blog, for example, has expanded its range of topics to include artificial intelligence and other subjects (as has FrogHeart). Andrew Maynard’s 2020 Science blog now exists in an archived form but, before its demise, it too had started to include other topics, notably risk in its many forms as opposed to risk and nanomaterials. Dexter Johnson’s blog, Nanoclast (on the IEEE [Institute of Electrical and Electronics Engineers] website), maintains its 3x weekly postings. Tim Harper, who often wrote about nanotechnology on his Cientifica blog, appears to have found a more freewheeling approach dominated by his Twitter feed, although he also seems (I can’t confirm that the latest posts were written in 2017) to blog here on timharper.net.

The Canadian science blogosphere seems to be getting quieter, if Science Borealis (a blog aggregator) is any measure. My overall impression is that bloggers have been a bit quieter this year with fewer postings on the feed, or perhaps that’s due to some technical issues (sometimes FrogHeart posts do not get onto the feed). On the promising side, Science Borealis teamed with the Science Writers and Communicators of Canada to run a contest, “2017 People’s Choice Awards: Canada’s Favourite Science Online!” There were two categories (Favourite Science Blog and Favourite Science Site) and you can find a list of the finalists with links to the winners here.

Big congratulations to the winners: Body of Evidence, Canada’s Favourite Blog 2017 (Dec. 6, 2017 article by Alina Fisher for Science Borealis), and Let’s Talk Science, winner of the Canada’s Favourite Science Online 2017 category, as per this announcement.

However, I can’t help wondering: where were ASAP Science, Acapella Science, Quirks & Quarks, IFLS (I f***ing love science), and others on the list of finalists? I would have thought any of these would have a lock on a position as a finalist. These are Canadian online science purveyors and they are hugely popular, which should mean they’d have no problem getting nominated and getting votes. I can’t find the criteria for nominations (or any hint that there will be a 2018 contest), so I imagine their absence from the 2017 finalists list will remain a mystery to me.

Looking forward to 2018, I think that the nano blogosphere will continue with its transformation into a more general science/technology-oriented community. To some extent, I believe this reflects the fact that nanotechnology is being absorbed into the larger science/technology effort as foundational (something wiser folks than me predicted some years ago).

As for Science Borealis and the Canadian science online effort, I’m going to interpret the quieter feeds as a sign of a maturing community. After all, there are always ups and downs in terms of enthusiasm and participation and as I noted earlier the launch of an online contest is promising as is the collaboration with Science Writers and Communicators of Canada.

Canadian science policy

It was a big year.

Canada’s Chief Science Advisor

Dr. Mona Nemer, Canada’s first chief science advisor in many years, stepped into her position sometime in fall 2017; the official announcement was made on Sept. 26, 2017. I covered the event in my Sept. 26, 2017 posting, which includes a few more details than found in the official announcement.

You’ll also find in that Sept. 26, 2017 posting a brief discourse on the Naylor report (also known as the Review of Fundamental Science) and some speculation on why, to my knowledge, no action has been taken as a consequence. The Naylor report was released April 10, 2017 and was covered here in a three-part review, published on June 8, 2017:

INVESTING IN CANADA’S FUTURE; Strengthening the Foundations of Canadian Research (Review of fundamental research final report): 1 of 3

INVESTING IN CANADA’S FUTURE; Strengthening the Foundations of Canadian Research (Review of fundamental research final report): 2 of 3

INVESTING IN CANADA’S FUTURE; Strengthening the Foundations of Canadian Research (Review of fundamental research final report): 3 of 3

I have found another commentary (much briefer than mine) by Paul Dufour, published November 9, 2017 on the Canadian Science Policy Centre website.

Subnational and regional science funding

This began in 2016 with a workshop mentioned in my November 10, 2016 posting, “Council of Canadian Academies and science policy for Alberta.” By the time the report was published, the endeavour had been transformed into Science Policy: Considerations for Subnational Governments (report here and my June 22, 2017 commentary here).

I don’t know what will come of this but I imagine scientists will be supportive, as it means more money and they are always looking for more money. Still, the new government in British Columbia has only one ‘science entity’, the Premier’s Technology Council, and I’m not sure it’s still operational. To my knowledge, there is no ministry or other agency that is focused primarily or even partially on science.

Meanwhile, a couple of representatives from the health sciences (neither of whom were involved in the production of the report) seem quite enthused about the prospects for provincial money in their October 27, 2017 opinion piece for the Canadian Science Policy Centre (Bev Holmes, interim CEO, Michael Smith Foundation for Health Research, British Columbia, and Patrick Odnokon, CEO, Saskatchewan Health Research Foundation).

Artificial intelligence and Canadians

An event which I find more interesting with time was the announcement of the Pan-Canadian Artificial Intelligence Strategy in the 2017 Canadian federal budget. Since then there has been a veritable gold rush mentality with regard to artificial intelligence in Canada. One announcement after another about various corporations opening new offices in Toronto or Montréal has been made in the months since.

What has really piqued my interest recently is a report being written for Canada’s Treasury Board by Michael Karlin (you can learn more from his Twitter feed, although you may need to scroll down past some of his more personal tweets, something about cassoulet in the Dec. 29, 2017 tweets). As for Karlin’s report, which is a work in progress, you can find out more about the report and Karlin in a December 12, 2017 article by Rob Hunt for the Algorithmic Media Observatory (sponsored by the Social Sciences and Humanities Research Council of Canada [SSHRC], the Centre for the Study of Democratic Citizenship, and the Fonds de recherche du Québec: Société et culture).

You can ring in 2018 by reading and making comments, which could influence the final version, on Karlin’s “Responsible Artificial Intelligence in the Government of Canada,” part of the government’s Digital Disruption White Paper Series.

As for other 2018 news, the Council of Canadian Academies is expected to publish “The State of Science and Technology and Industrial Research and Development in Canada” at some point soon (we hope). This report follows and incorporates two previous ‘states’, The State of Science and Technology in Canada, 2012 (the first of these was a 2006 report) and the 2013 version of The State of Industrial R&D in Canada. There is already some preliminary data for this latest ‘state of’ (you can find a link and commentary in my December 15, 2016 posting).

FrogHeart then (2017) and soon (2018)

On looking back I see that the year started out at quite a clip as I was attempting to hit the 5000th blog posting mark, which I did on March 3, 2017. I have since cut back from the high of 3 postings/day to approximately 1 posting/day. It makes things more manageable, allowing me to focus on other matters.

By the way, you may note that the ‘Donate’ button has disappeared from my sidebar. I thank everyone who donated from the bottom of my heart. The money was more than currency; it also symbolized encouragement. On the sad side, I moved from one hosting service to a new one (Sibername) late in December 2016 and have been experiencing serious bandwidth issues, which result in FrogHeart’s disappearance from the web for days at a time. I am trying to resolve the issues and hope that such actions as removing the ‘Donate’ button will help.

I wish my readers all the best for 2018 as we explore nanotechnology and other emerging technologies!

(I apologize for any and all errors. I usually take a little more time to write this end-of-year and coming-year piece but, due to bandwidth issues, I was unable to access my draft and give it at least one review. And at this point, I’m too tired to try spotting errors. If you see any, please do let me know.)

A customized cruise experience with wearable technology (and decreased personal agency?)

The days when you went cruising to ‘get away from it all’ seem to have passed (if they ever really existed) with the introduction of wearable technology that will register your every preference and make life easier according to Cliff Kuang’s Oct. 19, 2017 article for Fast Company,

This month [October 2017], the 141,000-ton Regal Princess will push out to sea after a nine-figure revamp of mind-boggling scale. Passengers won’t be greeted by new restaurants, swimming pools, or onboard activities, but will instead step into a future augured by the likes of Netflix and Uber, where nearly everything is on demand and personally tailored. An ambitious new customization platform has been woven into the ship’s 19 passenger decks: some 7,000 onboard sensors and 4,000 “guest portals” (door-access panels and touch-screen TVs), all of them connected by 75 miles of internal cabling. As the Carnival-owned ship cruises to Nassau, Bahamas, and Grand Turk, its 3,500 passengers will have the option of carrying a quarter-size device, called the Ocean Medallion, which can be slipped into a pocket or worn on the wrist and is synced with a companion app.

The platform will provide a new level of service for passengers; the onboard sensors record their tastes and respond to their movements, and the app guides them around the ship and toward activities aligned with their preferences. Carnival plans to roll out the platform to another seven ships by January 2019. Eventually, the Ocean Medallion could be opening doors, ordering drinks, and scheduling activities for passengers on all 102 of Carnival’s vessels across 10 cruise lines, from the mass-market Princess ships to the legendary ocean liners of Cunard.

Kuang goes on to explain the reasoning behind this innovation,

The Ocean Medallion is Carnival’s attempt to address a problem that’s become increasingly vexing to the $35.5 billion cruise industry. Driven by economics, ships have exploded in size: In 1996, Carnival Destiny was the world’s largest cruise ship, carrying 2,600 passengers. Today, Royal Caribbean’s MS Harmony of the Seas carries up to 6,780 passengers and 2,300 crew. Larger ships expend less fuel per passenger; the money saved can then go to adding more amenities—which, in turn, are geared to attracting as many types of people as possible. Today on a typical ship you can do practically anything—from attending violin concertos to bungee jumping. And that’s just onboard. Most of a cruise is spent in port, where each day there are dozens of experiences available. This avalanche of choice can bury a passenger. It has also made personalized service harder to deliver. …

Kuang also wrote this brief description of how the technology works from the passenger’s perspective in an Oct. 19, 2017 item for Fast Company,

1. Pre-trip

On the web or on the app, you can book experiences, log your tastes and interests, and line up your days. That data powers the recommendations you’ll see. The Ocean Medallion arrives by mail and becomes the key to ship access.

2. Stateroom

When you draw near, your cabin-room door unlocks without swiping. The room’s unique 43-inch TV, which doubles as a touch screen, offers a range of Carnival’s bespoke travel shows. Whatever you watch is fed into your excursion suggestions.

3. Food

When you order something, sensors detect where you are, allowing your server to find you. Your allergies and preferences are also tracked, and shape the choices you’re offered. In all, the back-end data has 45,000 allergens tagged and manages 250,000 drink combinations.

4. Activities

The right algorithms can go beyond suggesting wines based on previous orders. Carnival is creating a massive semantic database, so if you like pricey reds, you’re more apt to be guided to a violin concerto than a limbo competition. Your onboard choices—the casino, the gym, the pool—inform your excursion recommendations.

In Kuang’s Oct. 19, 2017 article he notes that the cruise ship line is putting a lot of effort into retraining its staff and emphasizing the ‘soft’ skills that aren’t going to be found in this iteration of the technology. No mention is made of whether or not there will be reductions in the number of staff members on this cruise ship, nor of the possibility that ‘soft’ skills may in the future be incorporated into this technological marvel.

Personalization/customization is increasingly everywhere

How do you feel about customized news feeds? As it turns out, this is not a rhetorical question as Adrienne LaFrance notes in her Oct. 19, 2017 article for The Atlantic (Note: Links have been removed),

Today, a Google search for news runs through the same algorithmic filtration system as any other Google search: A person’s individual search history, geographic location, and other demographic information affects what Google shows you. Exactly how your search results differ from any other person’s is a mystery, however. Not even the computer scientists who developed the algorithm could precisely reverse engineer it, given the fact that the same result can be achieved through numerous paths, and that ranking factors—deciding which results show up first—are constantly changing, as are the algorithms themselves.

We now get our news in real time, on demand, tailored to our interests, across multiple platforms, without knowing just how much is actually personalized. It was technology companies like Google and Facebook, not traditional newsrooms, that made it so. But news organizations are increasingly betting that offering personalized content can help them draw audiences to their sites—and keep them coming back.

Personalization extends beyond how and where news organizations meet their readers. Already, smartphone users can subscribe to push notifications for the specific coverage areas that interest them. On Facebook, users can decide—to some extent—which organizations’ stories they would like to appear in their news feeds. At the same time, devices and platforms that use machine learning to get to know their users will increasingly play a role in shaping ultra-personalized news products. Meanwhile, voice-activated artificially intelligent devices, such as Google Home and Amazon Echo, are poised to redefine the relationship between news consumers and the news [emphasis mine].

While news personalization can help people manage information overload by making individuals’ news diets unique, it also threatens to incite filter bubbles and, in turn, bias [emphasis mine]. This “creates a bit of an echo chamber,” says Judith Donath, author of The Social Machine: Designs for Living Online and a researcher affiliated with Harvard University’s Berkman Klein Center for Internet and Society. “You get news that is designed to be palatable to you. It feeds into people’s appetite of expecting the news to be entertaining … [and] the desire to have news that’s reinforcing your beliefs, as opposed to teaching you about what’s happening in the world and helping you predict the future better.”

Still, algorithms have a place in responsible journalism. “An algorithm actually is the modern editorial tool,” says Tamar Charney, the managing editor of NPR One, the organization’s customizable mobile-listening app. A handcrafted hub for audio content from both local and national programs as well as podcasts from sources other than NPR, NPR One employs an algorithm to help populate users’ streams with content that is likely to interest them. But Charney assures there’s still a human hand involved: “The whole editorial vision of NPR One was to take the best of what humans do and take the best of what algorithms do and marry them together.” [emphasis mine]

The skimming and diving Charney describes sounds almost exactly like how Apple and Google approach their distributed-content platforms. With Apple News, users can decide which outlets and topics they are most interested in seeing, with Siri offering suggestions as the algorithm gets better at understanding your preferences. Siri now has help from Safari. The personal assistant can now detect browser history and suggest news items based on what someone’s been looking at—for example, if someone is searching Safari for Reykjavík-related travel information, they will then see Iceland-related news on Apple News. But the For You view of Apple News isn’t 100 percent customizable, as it still spotlights top stories of the day, and trending stories that are popular with other users, alongside those curated just for you.

Similarly, with Google’s latest update to Google News, readers can scan fixed headlines, customize sidebars on the page to their core interests and location—and, of course, search. The latest redesign of Google News makes it look newsier than ever, and adds to many of the personalization features Google first introduced in 2010. There’s also a place where you can preprogram your own interests into the algorithm.

Google says this isn’t an attempt to supplant news organizations, nor is it inspired by them. The design is rather an embodiment of Google’s original ethos, the product manager for Google News Anand Paka says: “Just due to the deluge of information, users do want ways to control information overload. In other words, why should I read the news that I don’t care about?” [emphasis mine]

Meanwhile, in May [2017?], Google briefly tested a personalized search filter that would dip into its trove of data about users with personal Google and Gmail accounts and include results exclusively from their emails, photos, calendar items, and other personal data related to their query. [emphasis mine] The “personal” tab was supposedly “just an experiment,” a Google spokesperson said, and the option was temporarily removed, but seems to have rolled back out for many users as of August [2017?].

Now, Google, in seeking to settle a class-action lawsuit alleging that scanning emails to offer targeted ads amounts to illegal wiretapping, is promising that for the next three years it won’t use the content of its users’ emails to serve up targeted ads in Gmail. The move, which will go into effect at an unspecified date, doesn’t mean users won’t see ads, however. Google will continue to collect data from users’ search histories, YouTube, and Chrome browsing habits, and other activity.

The fear that personalization will encourage filter bubbles by narrowing the selection of stories is a valid one, especially considering that the average internet user or news consumer might not even be aware of such efforts. Elia Powers, an assistant professor of journalism and news media at Towson University in Maryland, studied the awareness of news personalization among students after he noticed those in his own classes didn’t seem to realize the extent to which Facebook and Google customized users’ results. “My sense is that they didn’t really understand … the role that people that were curating the algorithms [had], how influential that was. And they also didn’t understand that they could play a pretty active role on Facebook in telling Facebook what kinds of news they want them to show and how to prioritize [content] on Google,” he says.

The results of Powers’s study, which was published in Digital Journalism in February [2017], showed that the majority of students had no idea that algorithms were filtering the news content they saw on Facebook and Google. When asked if Facebook shows every news item, posted by organizations or people, in a users’ newsfeed, only 24 percent of those surveyed were aware that Facebook prioritizes certain posts and hides others. Similarly, only a quarter of respondents said Google search results would be different for two different people entering the same search terms at the same time. [emphasis mine; Note: Respondents in this study were students.]

This, of course, has implications beyond the classroom, says Powers: “People as news consumers need to be aware of what decisions are being made [for them], before they even open their news sites, by algorithms and the people behind them, and also be able to understand how they can counter the effects or maybe even turn off personalization or make tweaks to their feeds or their news sites so they take a more active role in actually seeing what they want to see in their feeds.”

On Google and Facebook, the algorithm that determines what you see is invisible. With voice-activated assistants, the algorithm suddenly has a persona. “We are being trained to have a relationship with the AI,” says Amy Webb, founder of the Future Today Institute and an adjunct professor at New York University Stern School of Business. “This is so much more catastrophically horrible for news organizations than the internet. At least with the internet, I have options. The voice ecosystem is not built that way. It’s being built so I just get the information I need in a pleasing way.”

LaFrance’s article is thoughtful and well worth reading in its entirety. Now, onto some commentary.

Loss of personal agency

I have been concerned for some time about the increasingly dull results I get from a Google search and, while I realize the company has been gathering information about me via my searches, supposedly in service of giving me better searches, I had no idea how deeply the company can mine for personal data. It makes me wonder what would happen if Google and Facebook attempted a merger.

More cogently, I rather resent the search engines and artificial intelligence agents (e.g. Facebook bots) which have usurped my role as the arbiter of what interests me, in short, my increasing loss of personal agency.

I’m also deeply suspicious of what these companies are going to do with my data. Will it be used to manipulate me in some way? Presumably, the data will be sold and used for some purpose. In the US, they have married electoral data with consumer data as Brent Bambury notes in an Oct. 13, 2017 article for his CBC (Canadian Broadcasting Corporation) Radio show,

How much of your personal information circulates in the free-market ether of metadata? It could be more than you imagine, and it might be enough to let others change the way you vote.

A data firm that specializes in creating psychological profiles of voters claims to have up to 5,000 data points on 220 million Americans. Cambridge Analytica has deep ties to the American right and was hired by the campaigns of Ben Carson, Ted Cruz and Donald Trump.

During the U.S. election, CNN called them “Donald Trump’s mind readers” and his secret weapon.

David Carroll is a Professor at the Parsons School of Design in New York City. He is one of the millions of Americans profiled by Cambridge Analytica and he’s taking legal action to find out where the company gets its masses of data and how they use it to create their vaunted psychographic profiles of voters.

On Day 6 [Bambury’s CBC radio programme], he explained why that’s important.

“They claim to have figured out how to project our voting behavior based on our consumer behavior. So it’s important for citizens to be able to understand this because it would affect our ability to understand how we’re being targeted by campaigns and how the messages that we’re seeing on Facebook and television are being directed at us to manipulate us.” [emphasis mine]

The parent company of Cambridge Analytica, SCL Group, is a U.K.-based data operation with global ties to military and political activities. David Carroll says the potential for sharing personal data internationally is a cause for concern.

“It’s the first time that this kind of data is being collected and transferred across geographic boundaries,” he says.

But that also gives Carroll an opening for legal action. An individual has more rights to access their personal information in the U.K., so that’s where he’s launching his lawsuit.

Reports link Michael Flynn, briefly Trump’s National Security Adviser, to SCL Group and indicate that former White House strategist Steve Bannon is a board member of Cambridge Analytica. Billionaire Robert Mercer, who has underwritten Bannon’s Breitbart operations and is a major Trump donor, also has a significant stake in Cambridge Analytica.

In the world of data, Mercer’s credentials are impeccable.

“He is an important contributor to the field of artificial intelligence,” says David Carroll.

“His work at IBM is seminal and really important in terms of the foundational ideas that go into big data analytics, so the relationship between AI and big data analytics. …

Bambury’s piece offers a lot more, including embedded videos, than I’ve included in that excerpt, but I also wanted to include some material from Carole Cadwalladr’s Oct. 1, 2017 Guardian article about Carroll and his legal fight in the UK,

“There are so many disturbing aspects to this. One of the things that really troubles me is how the company can buy anonymous data completely legally from all these different sources, but as soon as it attaches it to voter files, you are re-identified. It means that every privacy policy we have ignored in our use of technology is a broken promise. It would be one thing if this information stayed in the US, if it was an American company and it only did voter data stuff.”

But, he [Carroll] argues, “it’s not just a US company and it’s not just a civilian company”. Instead, he says, it has ties with the military through SCL – “and it doesn’t just do voter targeting”. Carroll has provided information to the Senate intelligence committee and believes that the disclosures mandated by a British court could provide evidence helpful to investigators.

Frank Pasquale, a law professor at the University of Maryland, author of The Black Box Society and a leading expert on big data and the law, called the case a “watershed moment”.

“It really is a David and Goliath fight and I think it will be the model for other citizens’ actions against other big corporations. I think we will look back and see it as a really significant case in terms of the future of algorithmic accountability and data protection. …

Nobody is discussing personal agency directly but, if you’re only being exposed to certain kinds of messages, then your personal agency has been taken from you. Admittedly, we don’t have complete personal agency in our lives, but AI, along with the data gathering done online and, increasingly, through wearable and smart technology, means that another layer of control has been added to your life and it is largely invisible. After all, the students in Elia Powers’ study didn’t realize their news feeds were being pre-curated.

Nano- and neuro- together for nanoneuroscience

This is not the first time I’ve posted about nanotechnology and neuroscience (see this April 2, 2013 piece about the then-new brain science initiative in the US and Michael Berger’s Nanowerk Spotlight article/review of an earlier paper covering the topic of nanotechnology and neuroscience).

Interestingly, the European Union (EU) had announced its two €1B research initiatives, the Human Brain Project and the Graphene Flagship (see my Jan. 28, 2013 posting about them), months prior to the US brain research push. For those unfamiliar with the nanotechnology effort, graphene is a nanomaterial and there is high interest in its potential use in biomedical technology, thus partially connecting the two EU projects.

In any event, Berger is highlighting the nanotechnology and neuroscience connection again in his Oct. 18, 2017 Nanowerk Spotlight article, an overview of a new paper that updates our understanding of the potential connections between the two fields (Note: A link has been removed),

Over the past several years, advances in nanoscale analysis tools and in the design and synthesis of nanomaterials have generated optical, electrical, and chemical methods that can readily be adapted for use in neuroscience and brain activity mapping.

A review paper in Advanced Functional Materials (“Nanotechnology for Neuroscience: Promising Approaches for Diagnostics, Therapeutics and Brain Activity Mapping”) summarizes the basic concepts associated with neuroscience and the current journey of nanotechnology towards the study of neuron function by addressing various concerns on the significant role of nanomaterials in neuroscience and by describing the future applications of this emerging technology.

The collaboration between nanotechnology and neuroscience, though still at the early stages, utilizes broad concepts, such as drug delivery, cell protection, cell regeneration and differentiation, imaging and surgery, to give birth to novel clinical methods in neuroscience.

Ultimately, the clinical translation of nanoneuroscience implicates that central nervous system (CNS) diseases, including neurodevelopmental, neurodegenerative and psychiatric diseases, have the potential to be cured, while the industrial translation of nanoneuroscience indicates the need for advancement of brain-computer interface technologies.

Future Developing Arenas in Nanoneuroscience

The Brain Activity Map (BAM) Project aims to map the neural activity of every neuron across all neural circuits with the ultimate aim of curing diseases associated with the nervous system. The announcement of this collaborative, public-private research initiative in 2013 by President Obama has driven the surge in developing methods to elucidate neural circuitry. Three current developing arenas in the context of nanoneuroscience applications that will push such initiative forward are 1) optogenetics, 2) molecular/ion sensing and monitoring and 3) piezoelectric effects.

In their review, the authors discuss these aspects in detail.

Neurotoxicity of Nanomaterials

By engineering particles on the scale of molecular-level entities – proteins, lipid bilayers and nucleic acids – we can stereotactically interface with many of the components of cell systems, and at the cutting edge of this technology, we can begin to devise ways in which we can manipulate these components to our own ends. However, interfering with the internal environment of cells, especially neurons, is by no means simple.

“If we are to continue to make great strides in nanoneuroscience, functional investigations of nanomaterials must be complemented with robust toxicology studies,” the authors point out. “A database on the toxicity of materials that fully incorporates these findings for use in future schema must be developed. These databases should include information and data on 1) the chemical nature of the nanomaterials in complex aqueous environments; 2) the biological interactions of nanomaterials with chemical specificity; 3) the effects of various nanomaterial properties on living systems; and 4) a model for the simulation and computation of possible effects of nanomaterials in living systems across varying time and space. If we can establish such methods, it may be possible to design nanopharmaceuticals for improved research as well as quality of life.”

“However, challenges in nanoneuroscience are present in many forms, such as neurotoxicity; the inability to cross the blood-brain barrier [emphasis mine]; the need for greater specificity, bioavailability and short half-lives; and monitoring of disease treatment,” the authors conclude their review. “The nanoneurotoxicity surrounding these nanomaterials is a barrier that must be overcome for the translation of these applications from bench-to-bedside. While the challenges associated with nanoneuroscience seem unending, they represent opportunities for future work.”

I have a March 26, 2015 posting about Canadian researchers breaching the blood-brain barrier and an April 13, 2016 posting about US researchers at Cornell University also breaching the blood-brain barrier. Perhaps the “inability” mentioned in this Spotlight article means that it can’t be done consistently or that it hasn’t been achieved in humans.

Here’s a link to and a citation for the paper,

Nanotechnology for Neuroscience: Promising Approaches for Diagnostics, Therapeutics and Brain Activity Mapping by Anil Kumar, Aaron Tan, Joanna Wong, Jonathan Clayton Spagnoli, James Lam, Brianna Diane Blevins, Natasha G, Lewis Thorne, Keyoumars Ashkan, Jin Xie, and Hong Liu. Advanced Functional Materials Volume 27, Issue 39, October 19, 2017 DOI: 10.1002/adfm.201700489 Version of Record online: 14 AUG 2017

© 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim

I took a look at the authors’ information and found that most of these researchers are based in China and the UK, with a sole researcher based in the US.

(Merry Christmas!) Japanese tree frogs inspire hardware for the highest of tech: a swarmalator

First, the frog,

[Japanese Tree Frog] By 池田正樹 (talk)masaki ikeda – Own work, Public Domain, https://commons.wikimedia.org/w/index.php?curid=4593224

I wish they had a recording of the mating calls of Japanese tree frogs, since those calls were the inspiration for mathematicians at Cornell University (New York state, US), according to a November 17, 2017 news item on ScienceDaily,

How does the Japanese tree frog figure into the latest work of noted mathematician Steven Strogatz? As it turns out, quite prominently.

“We had read about these funny frogs that hop around and croak,” said Strogatz, the Jacob Gould Schurman Professor of Applied Mathematics. “They form patterns in space and time. Usually it’s about reproduction. And based on how the other guy or guys are croaking, they don’t want to be around another one that’s croaking at the same time as they are, because they’ll jam each other.”

A November 15, 2017 Cornell University news release (also on EurekAlert but dated November 17, 2017) by Tom Fleischman, which originated the news item, details how the calls led to ‘swarmalators’ (Note: Links have been removed),

Strogatz and Kevin O’Keeffe, Ph.D. ’17, used the curious mating ritual of male Japanese tree frogs as inspiration for their exploration of “swarmalators” – their term for systems in which both synchronization and swarming occur together.

Specifically, they considered oscillators whose phase dynamics and spatial dynamics are coupled. In the instance of the male tree frogs, they attempt to croak in exact anti-phase (one croaks while the other is silent) while moving away from a rival so as to be heard by females.

This opens up “a new class of math problems,” said Strogatz, a Stephen H. Weiss Presidential Fellow. “The question is, what do we expect to see when people start building systems like this or observing them in biology?”

Their paper, “Oscillators That Sync and Swarm,” was published Nov. 13 [2017] in Nature Communications. Strogatz and O’Keeffe – now a postdoctoral researcher with the Senseable City Lab at the Massachusetts Institute of Technology – collaborated with Hyunsuk Hong from Chonbuk National University in Jeonju, South Korea.

Swarming and synchronization both involve large, self-organizing groups of individuals interacting according to simple rules, but rarely have they been studied together, O’Keeffe said.

“No one had connected these two areas, in spite of the fact that there were all these parallels,” he said. “That was the theoretical idea that sort of seduced us, I suppose. And there were also a couple of concrete examples, which we liked – including the tree frogs.”

Studies of swarms focus on how animals move – think of birds flocking or fish schooling – while neglecting the dynamics of their internal states. Studies of synchronization do the opposite: They focus on oscillators’ internal dynamics. Strogatz long has been fascinated by fireflies’ synchrony and other similar phenomena, giving a TED Talk on the topic in 2004, but not on their motion.

“[Swarming and synchronization] are so similar, and yet they were never connected together, and it seems so obvious,” O’Keeffe said. “It’s a whole new landscape of possible behaviors that hadn’t been explored before.”

Using a pair of governing equations that assume swarmalators are free to move about, along with numerical simulations, the group found that a swarmalator system settles into one of five states:

  • Static synchrony – featuring circular symmetry, crystal-like distribution, fully synchronized in phase;
  • Static asynchrony – featuring uniform distribution, meaning that every phase occurs everywhere;
  • Static phase wave – swarmalators settle near others in a phase similar to their own, and phases are frozen at their initial values;
  • Splintered phase wave – nonstationary, disconnected clusters of distinct phases; and
  • Active phase wave – similar to bidirectional states found in biological swarms, where populations split into counter-rotating subgroups; also similar to vortex arrays formed by groups of sperm.
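For readers who want to poke at the model themselves, here is a small numerical sketch of the swarmalator equations. The coupling constants J and K, the parameter values, and the simplifications (unit attraction/repulsion strengths, identical natural frequencies) reflect my own reading of the paper; this is not the authors' code.

```python
import numpy as np

# Minimal swarmalator sketch, simplified from O'Keeffe, Hong & Strogatz (2017).
# J couples each agent's spatial attraction to phase similarity;
# K couples phase synchronization to spatial distance.

def simulate(n=50, J=1.0, K=0.5, dt=0.02, steps=500, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1, 1, size=(n, 2))        # positions in the plane
    theta = rng.uniform(0, 2 * np.pi, size=n)  # internal phases
    for _ in range(steps):
        dx = x[None, :, :] - x[:, None, :]     # pairwise displacements
        dist = np.linalg.norm(dx, axis=-1)
        np.fill_diagonal(dist, np.inf)         # no self-interaction
        dist = np.maximum(dist, 1e-3)          # guard against coincident agents
        dphase = theta[None, :] - theta[:, None]
        # spatial dynamics: phase-modulated attraction minus short-range repulsion
        attract = (1 + J * np.cos(dphase))[:, :, None] * dx / dist[:, :, None]
        repel = dx / (dist ** 2)[:, :, None]
        x = x + dt * (attract - repel).sum(axis=1) / n
        # phase dynamics: distance-weighted Kuramoto-style coupling
        theta = theta + dt * K * (np.sin(dphase) / dist).sum(axis=1) / n
    return x, theta % (2 * np.pi)

positions, phases = simulate()
```

Sweeping J and K (including negative values) is what moves the system between the five states in the list above.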

Through the study of simple models, the group found that the coupling of “sync” and “swarm” leads to rich patterns in both time and space, and could lead to further study of systems that exhibit this dual behavior.

“This opens up a lot of questions for many parts of science – there are a lot of things to try that people hadn’t thought of trying,” Strogatz said. “It’s science that opens doors for science. It’s inaugurating science, rather than culminating science.”

Here’s a link to and a citation for the paper,

Oscillators that sync and swarm by Kevin P. O’Keeffe, Hyunsuk Hong, & Steven H. Strogatz. Nature Communications 8, Article number: 1504 (2017) doi:10.1038/s41467-017-01190-3 Published online: 15 November 2017

This paper is open access.

One last thing, these frogs have also inspired WiFi improvements (from the Japanese tree frog Wikipedia entry; Note: Links have been removed),

Journalist Toyohiro Akiyama carried some Japanese tree frogs with him during his trip to the Mir space station in December 1990.[citation needed] Calling behavior of the species was used to create an algorithm for optimizing Wi-Fi networks.[3]

While it’s not clear in the Wikipedia entry, the frogs were part of an experiment. Here’s a link to and a citation for the paper about the experiment, along with an abstract,

The Frog in Space (FRIS) experiment onboard Space Station Mir: final report and follow-on studies by Yamashita, M.; Izumi-Kurotani, A.; Mogami, Y.; Okuno, M.; Naitoh, T.; Wassersug, R. J. Biol Sci Space. 1997 Dec;11(4):313-20.

Abstract

The “Frog in Space” (FRIS) experiment marked a major step for Japanese space life science, on the occasion of the first space flight of a Japanese cosmonaut. At the core of FRIS were six Japanese tree frogs, Hyla japonica, flown on Space Station Mir for 8 days in 1990. The behavior of these frogs was observed and recorded under microgravity. The frogs took up a “parachuting” posture when drifting in a free volume on Mir. When perched on surfaces, they typically sat with their heads bent backward. Such a peculiar posture, after long exposure to microgravity, is discussed in light of motion sickness in amphibians. Histological examinations and other studies were made on the specimens upon recovery. Some organs, such as the liver and the vertebra, showed changes as a result of space flight; others were unaffected. Studies that followed FRIS have been conducted to prepare for a second FRIS on the International Space Station. Interspecific diversity in the behavioral reactions of anurans to changes in acceleration is the major focus of these investigations. The ultimate goal of this research is to better understand how organisms have adapted to gravity through their evolution on earth.

The paper is open access.

NanoFARM: food, agriculture, and nanoparticles

The research focus for the NanoFARM consortium is on pesticides according to an October 19, 2017 news item on Nanowerk,

The answer to the growing, worldwide food production problem may have a tiny solution—nanoparticles, which are being explored as both fertilizers and fungicides for crops.

NanoFARM – a research consortium formed by Carnegie Mellon University [US], the University of Kentucky [US], the University of Vienna [Austria], and the University of Aveiro [Portugal] – is studying the effects of nanoparticles on agriculture. The four universities received grants from their countries’ respective National Science Foundations to discover how these tiny particles – some just 4 nanometers in diameter – can revolutionize how farmers grow their food.

An October ??, 2017 Carnegie Mellon University news release by Adam Dove, which originated the news item, fills in a few more details,

“What we’re doing is getting a fundamental understanding of nanoparticle-to-plant interactions to enable future applications,” says Civil and Environmental Engineering (CEE) Professor Greg Lowry, the principal investigator for the nanoFARM project. “With pesticides, less than 5% goes into the crop—the rest just goes into the environment and does harmful things. What we’re trying to do is minimize that waste and corresponding environmental damage by doing a better job of targeting the delivery.”

The teams are looking at related questions: How much nanomaterial is needed to help crops when it comes to driving away pests and delivering nutrients, and how much could potentially hurt plants or surrounding ecosystems?

Applied pesticides and fertilizers are vulnerable to washing away—especially if there’s a rainstorm soon after application. But nanoparticles are not so easily washed off, making them extremely efficient for delivering micronutrients like zinc or copper to crops.

“If you put in zinc oxide nanoparticles instead, it might take days or weeks to dissolve, providing a slow, long-term delivery system.”

Gao researches the rate at which nanoparticles dissolve. His most recent finding is that nanoparticles of copper oxide take up to 20-30 days to dissolve in soil, meaning that they can deliver nutrients to plants at a steady rate over that time period.
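A back-of-the-envelope sketch of what that slow release could look like, assuming first-order dissolution kinetics. Both the exponential form and the "95% dissolved by day 30" figure are my assumptions chosen to match the reported 20-to-30-day window; they are not data from the nanoFARM study.

```python
import math

# Illustrative first-order dissolution curve for a slow-release nanoparticle.
# The functional form and the 95%-by-day-30 figure are assumptions, not data.

def fraction_dissolved(t_days, rate_per_day):
    return 1 - math.exp(-rate_per_day * t_days)

k = -math.log(1 - 0.95) / 30         # rate constant: 95% dissolved by day 30
week_one = fraction_dissolved(7, k)  # roughly half released in the first week
```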

“In many developing countries, a huge number of people are starving,” says Gao. “This kind of technology can help provide food and save energy.”

But Gao’s research is only one piece of the NanoFARM puzzle. Lowry recently traveled to Australia with Ph.D. student Eleanor Spielman-Sun to explore how differently charged nanoparticles were absorbed into wheat plants.

They learned that negatively charged particles were able to move into the veins of a plant—making them a good fit for a farmer who wanted to apply a fungicide. Neutrally charged particles went into the tissue of the leaves, which would be beneficial for growers who wanted to fortify a food with nutritional value.

Lowry said they are still a long way from signing off on a finished product for all crops—right now they are concentrating on tomato and wheat plants. But with the help of their university partners, they are slowly creating new nano-enabled agrochemicals for more efficient and environmentally friendly agriculture.

For more information, you can find the NanoFARM website here.

Machine learning software and quantum computers that think

A Sept. 14, 2017 news item on phys.org sets the stage for quantum machine learning by explaining a few basics first,

Language acquisition in young children is apparently connected with their ability to detect patterns. In their learning process, they search for patterns in the data set that help them identify and optimize grammar structures in order to properly acquire the language. Likewise, online translators use algorithms through machine learning techniques to optimize their translation engines to produce well-rounded and understandable outcomes. Even though many translations did not make much sense at all at the beginning, in these past years we have been able to see major improvements thanks to machine learning.

Machine learning techniques use mathematical algorithms and tools to search for patterns in data. These techniques have become powerful tools for many different applications, which can range from biomedical uses such as in cancer reconnaissance, in genetics and genomics, in autism monitoring and diagnosis and even plastic surgery, to pure applied physics, for studying the nature of materials, matter or even complex quantum systems.
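As a deliberately tiny illustration of "searching for patterns in data," here is a nearest-centroid classifier in Python; the data points and labels are invented for the example.

```python
# Nearest-centroid classifier: "learning" here just means averaging the
# training points of each class; prediction assigns the closest centroid.
# Toy 2-D data, purely illustrative.

def train(points, labels):
    groups = {}
    for point, label in zip(points, labels):
        groups.setdefault(label, []).append(point)
    return {label: (sum(x for x, _ in pts) / len(pts),
                    sum(y for _, y in pts) / len(pts))
            for label, pts in groups.items()}

def predict(centroids, point):
    return min(centroids,
               key=lambda label: (centroids[label][0] - point[0]) ** 2
                               + (centroids[label][1] - point[1]) ** 2)

model = train([(0, 0), (1, 0), (9, 9), (10, 10)], ["a", "a", "b", "b"])
guess = predict(model, (8, 9))  # lands nearest the "b" centroid
```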

Capable of adapting and changing when exposed to a new set of data, machine learning can identify patterns, often outperforming humans in accuracy. Although machine learning is a powerful tool, certain application domains remain out of reach due to complexity or other aspects that rule out the use of the predictions that learning algorithms provide.

Thus, in recent years, quantum machine learning has become a matter of interest because of its vast potential as a possible solution to these unresolvable challenges, and quantum computers appear to be the right tool for the task.

A Sept. 14, 2017 Institute of Photonic Sciences ([Catalan] Institut de Ciències Fotòniques; ICFO) press release, which originated the news item, goes on to detail a recently published overview of the state of quantum machine learning,

In a recent study, published in Nature, an international team of researchers comprising Jacob Biamonte from Skoltech/IQC, Peter Wittek from ICFO, Nicola Pancotti from MPQ, Patrick Rebentrost from MIT, Nathan Wiebe from Microsoft Research, and Seth Lloyd from MIT have reviewed the actual status of classical machine learning and quantum machine learning. In their review, they have thoroughly addressed different scenarios dealing with classical and quantum machine learning. In their study, they have considered different possible combinations: the conventional method of using classical machine learning to analyse classical data, using quantum machine learning to analyse both classical and quantum data, and finally, using classical machine learning to analyse quantum data.

Firstly, they set out to give an in-depth view of the status of current supervised and unsupervised learning protocols in classical machine learning by stating all applied methods. They introduce quantum machine learning and provide an extensive approach on how this technique could be used to analyse both classical and quantum data, emphasizing that quantum machines could accelerate processing timescales thanks to the use of quantum annealers and universal quantum computers. Quantum annealing technology has better scalability, but more limited use cases. For instance, the latest iteration of D-Wave’s [emphasis mine] superconducting chip integrates two thousand qubits, and it is used for solving certain hard optimization problems and for efficient sampling. On the other hand, universal (also called gate-based) quantum computers are harder to scale up, but they are able to perform arbitrary unitary operations on qubits by sequences of quantum logic gates. This resembles how digital computers can perform arbitrary logical operations on classical bits.
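For intuition about the kind of optimization an annealer performs, here is classical simulated annealing on a toy Ising-style energy function. Note the stand-in: this uses thermal fluctuations where a quantum annealer uses quantum ones, and the couplings are invented for the example; it is not a model of D-Wave hardware.

```python
import math
import random

# Classical simulated annealing on a small Ising-style energy function,
# as an intuition aid for the optimization problems annealers target.

def energy(spins, couplings):
    # Ferromagnetic bonds (J > 0) are satisfied when coupled spins align.
    return -sum(J * spins[i] * spins[j] for (i, j), J in couplings.items())

def anneal(n, couplings, steps=20000, t_start=5.0, t_end=0.01, seed=1):
    random.seed(seed)
    spins = [random.choice([-1, 1]) for _ in range(n)]
    e = energy(spins, couplings)
    for step in range(steps):
        t = t_start * (t_end / t_start) ** (step / steps)  # geometric cooling
        i = random.randrange(n)
        spins[i] *= -1                    # propose flipping one spin
        e_new = energy(spins, couplings)
        if e_new <= e or random.random() < math.exp((e - e_new) / t):
            e = e_new                     # accept: downhill always, uphill sometimes
        else:
            spins[i] *= -1                # reject: undo the flip
    return spins, e

# A ferromagnetic ring of 6 spins; the ground state (all aligned) has E = -6.
ring = {(i, (i + 1) % 6): 1.0 for i in range(6)}
spins, e_final = anneal(6, ring)
```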

However, they address the fact that controlling a quantum system is very complex and analyzing classical data with quantum resources is not as straightforward as one may think, mainly due to the challenge of building quantum interface devices that allow classical information to be encoded into a quantum mechanical form. Difficulties, such as the “input” or “output” problems appear to be the major technical challenge that needs to be overcome.

The ultimate goal is to find the most optimized method that is able to read, comprehend and obtain the best outcomes of a data set, be it classical or quantum. Quantum machine learning is definitely aimed at revolutionizing the field of computer sciences, not only because it will be able to control quantum computers and speed up information processing rates far beyond current classical velocities, but also because it is capable of carrying out innovative functions, such as quantum deep learning, that could not only recognize counter-intuitive patterns in data, invisible to both classical machine learning and to the human eye, but also reproduce them.

As Peter Wittek [emphasis mine] finally states, “Writing this paper was quite a challenge: we had a committee of six co-authors with different ideas about what the field is, where it is now, and where it is going. We rewrote the paper from scratch three times. The final version could not have been completed without the dedication of our editor, to whom we are indebted.”

It was a bit of a surprise to see local (Vancouver, Canada) company D-Wave Systems mentioned, but I notice that one of the paper’s authors (Peter Wittek) appears in a May 22, 2017 D-Wave news release announcing a new partnership to foster quantum machine learning,

Today [May 22, 2017] D-Wave Systems Inc., the leader in quantum computing systems and software, announced a new initiative with the Creative Destruction Lab (CDL) at the University of Toronto’s Rotman School of Management. D-Wave will work with CDL, as a CDL Partner, to create a new track to foster startups focused on quantum machine learning. The new track will complement CDL’s successful existing track in machine learning. Applicants selected for the intensive one-year program will go through an introductory boot camp led by Dr. Peter Wittek [emphasis mine], author of Quantum Machine Learning: What Quantum Computing means to Data Mining, with instruction and technical support from D-Wave experts, access to a D-Wave 2000Q™ quantum computer, and the opportunity to use a D-Wave sampling service to enable machine learning computations and applications. D-Wave staff will be a part of the committee selecting up to 40 individuals for the program, which begins in September 2017.

For anyone interested in the paper, here’s a link to and a citation,

Quantum machine learning by Jacob Biamonte, Peter Wittek, Nicola Pancotti, Patrick Rebentrost, Nathan Wiebe, & Seth Lloyd. Nature 549, 195–202 (14 September 2017) doi:10.1038/nature23474 Published online 13 September 2017

This paper is behind a paywall.

Nanopatch more effective with poliovirus

No more needles or syringes: that’s the Nanopatch promise, and it’s one I’ve been writing about since 2009. It seems 2017 marks another step closer to seeing this idea become a product. From an Oct. 5, 2017 news item on ScienceDaily,

Efforts to rid the world of polio have taken another significant step, thanks to research led by University of Queensland [UQ] bioscience experts and funding from the World Health Organisation (WHO).

A fresh study of the Nanopatch — a microscopic vaccine delivery platform first developed by UQ researchers — has shown the device more effectively combats poliovirus than needles and syringes.

Here’s a prototype,

Caption: This is an image of the intended commercial product. Credit: Courtesy Vaxxas Pty Ltd

An Oct. 5, 2017 University of Queensland press release (also on EurekAlert), which originated the news item, provides more detail (Note: Links have been removed),

Head of UQ’s School of Chemistry and Molecular Biosciences Professor Paul Young said the breakthrough provided the next step in consigning polio to history.

“Polio was one of the most dreaded childhood diseases of the 20th century, resulting in limb disfigurement and irreversible paralysis in tens of millions of cases,” Professor Young said.

“This most recent study showed the Nanopatch enhanced responses to all three types of inactivated poliovirus vaccines (IPV) – a necessary advancement from using the current live oral vaccine.

“We are extremely grateful to the WHO for providing funding to Vaxxas Pty Ltd, the biotechnology company commercialising the Nanopatch.

“The support specifically assists pre-clinical studies and good manufacturing practices.”

Patch inventor Professor Mark Kendall said the study exhibited a key advantage of the Nanopatch.

“It targets the abundant immune cell populations in the skin’s outer layers, rather than muscle, resulting in a more efficient vaccine delivery system,” Professor Kendall said.

“The ease of administration, coupled with dose reduction observed in this study suggests that the Nanopatch could facilitate inexpensive vaccination of inactivated poliovirus vaccines.”

UQ Australian Institute for Biotechnology and Nanotechnology researcher Dr David Muller said effectively translating the dose could dramatically reduce the cost.

“A simple, easy-to-administer polio Nanopatch vaccine could increase the availability of the IPV vaccine and facilitate its administration in door-to-door and mass vaccination campaigns,” said Dr Muller.

“As recently as 1988, more than 350,000 cases occurred every year in more than 125 endemic countries.

“Concerted efforts to eradicate the disease have reduced incidence by more than 99 per cent.”

“Efforts are being intensified to eradicate the remaining strains of transmission once and for all.”

Data from the study encourages efforts by Vaxxas – established by UQ’s commercialisation company UniQuest – to bring the technology to use for human vaccinations.

“The research we are undertaking in conjunction with UQ and WHO can improve the reach of life-saving vaccines to children everywhere,” Vaxxas chief executive officer David Hoey said.

Here’s a link to and a citation for the paper,

High-density microprojection array delivery to rat skin of low doses of trivalent inactivated poliovirus vaccine elicits potent neutralising antibody responses by David A. Muller, Germain J. P. Fernando, Nick S. Owens, Christiana Agyei-Yeboah, Jonathan C. J. Wei, Alexandra C. I. Depelsenaire, Angus Forster, Paul Fahey, William C. Weldon, M. Steven Oberste, Paul R. Young, & Mark A. F. Kendall. Scientific Reports 7, Article number: 12644 (2017) doi:10.1038/s41598-017-13011-0 Published online: 03 October 2017

This paper is open access.

Should you be interested in seeing previous posts, just use ‘Nanopatch’ as your search term in the blog search engine.

Humans can distinguish molecular differences by touch

Yesterday, in my December 18, 2017 post about medieval textiles, I posed the question, “How did medieval artisans create nanoscale and microscale gilding when they couldn’t see it?” I realized afterwards that an answer to that question might be in this December 13, 2017 news item on ScienceDaily,

How sensitive is the human sense of touch? Sensitive enough to feel the difference between surfaces that differ by just a single layer of molecules, a team of researchers at the University of California San Diego has shown.

“This is the greatest tactile sensitivity that has ever been shown in humans,” said Darren Lipomi, a professor of nanoengineering and member of the Center for Wearable Sensors at the UC San Diego Jacobs School of Engineering, who led the interdisciplinary project with V. S. Ramachandran, director of the Center for Brain and Cognition and distinguished professor in the Department of Psychology at UC San Diego.

So perhaps those medieval artisans were able to feel the difference before it could be seen in the textiles they were producing?

Getting back to the matter at hand, a December 13, 2017 University of California at San Diego (UCSD) news release (also on EurekAlert) by Liezel Labios offers more detail about the work,

Humans can easily feel the difference between many everyday surfaces such as glass, metal, wood and plastic. That’s because these surfaces have different textures or draw heat away from the finger at different rates. But UC San Diego researchers wondered, if they kept all these large-scale effects equal and changed only the topmost layer of molecules, could humans still detect the difference using their sense of touch? And if so, how?

Researchers say this fundamental knowledge will be useful for developing electronic skin, prosthetics that can feel, advanced haptic technology for virtual and augmented reality and more.

Unsophisticated haptic technologies exist in the form of rumble packs in video game controllers or smartphones that shake, Lipomi added. “But reproducing realistic tactile sensations is difficult because we don’t yet fully understand the basic ways in which materials interact with the sense of touch.”

“Today’s technologies allow us to see and hear what’s happening, but we can’t feel it,” said Cody Carpenter, a nanoengineering Ph.D. student at UC San Diego and co-first author of the study. “We have state-of-the-art speakers, phones and high-resolution screens that are visually and aurally engaging, but what’s missing is the sense of touch. Adding that ingredient is a driving force behind this work.”

This study is the first to combine materials science and psychophysics to understand how humans perceive touch. “Receptors processing sensations from our skin are phylogenetically the most ancient, but far from being primitive they have had time to evolve extraordinarily subtle strategies for discerning surfaces—whether a lover’s caress or a tickle or the raw tactile feel of metal, wood, paper, etc. This study is one of the first to demonstrate the range of sophistication and exquisite sensitivity of tactile sensations. It paves the way, perhaps, for a whole new approach to tactile psychophysics,” Ramachandran said.

Super-Sensitive Touch

In a paper published in Materials Horizons, UC San Diego researchers tested whether human subjects could distinguish—by dragging or tapping a finger across the surface—between smooth silicon wafers that differed only in their single topmost layer of molecules. One surface was a single oxidized layer made mostly of oxygen atoms. The other was a single Teflon-like layer made of fluorine and carbon atoms. Both surfaces looked identical and felt similar enough that some subjects could not differentiate between them at all.

According to the researchers, human subjects can feel these differences because of a phenomenon known as stick-slip friction, which is the jerking motion that occurs when two objects at rest start to slide against each other. This phenomenon is responsible for the musical notes played by running a wet finger along the rim of a wine glass, the sound of a squeaky door hinge or the noise of a stopping train. In this case, each surface has a different stick-slip frequency due to the identity of the molecules in the topmost layer.
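The stick-slip mechanism can be illustrated with a toy model: a block dragged through a spring across a surface sticks until the spring force exceeds static friction, then slips under lower kinetic friction. This is a generic physics sketch with made-up parameters, not the paper's model; the point is only that changing the friction levels (as a different topmost molecular layer might) changes how often slip events occur.

```python
# Toy stick-slip model: a spring-dragged block alternates between sticking
# (static friction holds it) and slipping (kinetic friction is lower).
# Generic illustration with invented parameters, not the paper's model.

def count_slips(mu_s, mu_k, k=1.0, m=1.0, v_drive=0.1, dt=0.001, steps=200000):
    x_block = v_block = x_drive = 0.0
    slipping, events = False, 0
    for _ in range(steps):
        x_drive += v_drive * dt
        force = k * (x_drive - x_block)       # spring force on the block
        if not slipping and force > mu_s:
            slipping, events = True, events + 1
        if slipping:
            v_block += (force - mu_k) / m * dt
            x_block += v_block * dt
            if v_block <= 0:                  # block comes to rest: re-stick
                v_block, slipping = 0.0, False
    return events

low_friction = count_slips(mu_s=1.0, mu_k=0.4)   # slips more often
high_friction = count_slips(mu_s=2.0, mu_k=0.4)  # longer stick phases, fewer slips
```

Raising the static friction threshold lengthens each stick phase, so the same drag produces fewer, more widely spaced slip events, i.e., a different stick-slip frequency.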

In one test, 15 subjects were tasked with feeling three surfaces and identifying the one surface that differed from the other two. Subjects correctly identified the differences 71 percent of the time.

In another test, subjects were given three different strips of silicon wafer, each strip containing a different sequence of 8 patches of oxidized and Teflon-like surfaces. Each sequence represented an 8-digit string of 0s and 1s, which encoded for a particular letter in the ASCII alphabet. Subjects were asked to “read” these sequences by dragging a finger from one end of the strip to the other and noting which patches in the sequence were the oxidized surfaces and which were the Teflon-like surfaces. In this experiment, 10 out of 11 subjects decoded the bits needed to spell the word “Lab” (with the correct upper and lowercase letters) more than 50 percent of the time. Subjects spent an average of 4.5 minutes to decode each letter.
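The decoding step described above maps directly onto ASCII, and it is easy to sketch. Which surface type (oxidized vs. Teflon-like) stands for 0 and which for 1 is my assumption; the word "Lab" is from the study.

```python
# Each strip of 8 patches encodes one ASCII character, one patch per bit.

def decode_strip(bits):
    """Convert an 8-character string of 0s and 1s to its ASCII character."""
    return chr(int(bits, 2))

strips = ["01001100", "01100001", "01100010"]   # patches felt out one by one
word = "".join(decode_strip(s) for s in strips)  # spells "Lab"
```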

“A human may be slower than a nanobit per second in terms of reading digital information, but this experiment shows a potentially neat way to do chemical communications using our sense of touch instead of sight,” Lipomi said.

Basic Model of Touch

The researchers also found that these surfaces can be differentiated depending on how fast the finger drags and how much force it applies across the surface. The researchers modeled the touch experiments using a “mock finger,” a finger-like device made of an organic polymer that’s connected by a spring to a force sensor. The mock finger was dragged across the different surfaces using multiple combinations of force and swiping velocity. The researchers plotted the data and found that the surfaces could be distinguished given certain combinations of velocity and force. Meanwhile, other combinations made the surfaces indistinguishable from each other.

“Our results reveal a remarkable human ability to quickly home in on the right combinations of forces and swiping velocities required to feel the difference between these surfaces. They don’t need to reconstruct an entire matrix of data points one by one as we did in our experiments,” Lipomi said.

“It’s also interesting that the mock finger device, which doesn’t have anything resembling the hundreds of nerves in our skin, has just one force sensor and is still able to get the information needed to feel the difference in these surfaces. This tells us it’s not just the mechanoreceptors in the skin, but receptors in the ligaments, knuckles, wrist, elbow and shoulder that could be enabling humans to sense minute differences using touch,” he added.

This work was supported by member companies of the Center for Wearable Sensors at UC San Diego: Samsung, Dexcom, Sabic, Cubic, Qualcomm and Honda.

For those who prefer their news by video,

Here’s a link to and a citation for the paper,

Human ability to discriminate surface chemistry by touch by Cody W. Carpenter, Charles Dhong, Nicholas B. Root, Daniel Rodriquez, Emily E. Abdo, Kyle Skelil, Mohammad A. Alkhadra, Julian Ramírez, Vilayanur S. Ramachandran and Darren J. Lipomi. Mater. Horiz., 2018, Advance Article DOI: 10.1039/C7MH00800G

This paper is open access, but you do need to open a free account on the website.

Gold at the nanoscale in medieval textiles

It takes a while (i.e., you have to read the abstract for the paper) to get to the nanoscale part of the story. In the meantime, here are the broad brushstrokes (as it were) from a group of researchers in Hungary, from an Oct. 11, 2017 American Chemical Society (ACS) news release (also on EurekAlert),

Gold has long been valued for its luxurious glitter and hue, and threads of the gleaming metal have graced clothing and tapestries for centuries. Determining how artisans accomplished these adornments in the distant past can help scientists restore, preserve and date artifacts, but solutions to these puzzles have been elusive. Now scientists, reporting in ACS’ journal Analytical Chemistry, have revealed that medieval artisans used a gilding technology that has endured for centuries.

Researchers can learn a lot about vanished cultures from objects left behind. But one detail that has escaped understanding has been the manufacturing method of gold-coated silver threads found in textiles from the Middle Ages. Four decades of intensive research yielded some clues, but the findings have been very limited. Study of the materials has been hindered by their extremely small size: A single metal thread is sometimes only as thick as a human hair, and the thickness of its gold coating is a hundredth of that. Tamás G. Weiszburg, Katalin Gherdán and colleagues set out to fill this gap.

Using a suite of lab techniques, the researchers examined medieval gilded silver threads, and silver and gold strips produced during and after the Middle Ages. The items come from European cultures spanning the 13th to 17th centuries. The researchers characterized the chemistry of the silver thread, its gold coating, the interactions between the two and the shape of metal strips’ edges. To characterize the threads and strips, the researchers combined high-resolution scanning electron microscopy, electron back-scattered diffraction with energy-dispersive electron probe microanalysis and other analytical methods. Though previous studies indicated that these tiny objects were manufactured by a mercury-based method in fashion at that time, the new results suggest that the threads were gilded exclusively by using an ancient method that survived for a millennium. The goldsmiths simply heated and hammered the silver sheets and the gold foil together, and then cut them into strips. It was also possible to determine whether scissor- or knife-like tools were used for cutting. The results also show that this process was used widely in the region well into the 17th century.

The authors acknowledge funding from the European Social Fund.

Here’s an image of medieval bling,

Caption: A new study unravels how medieval artisans embellished textiles with gold. Credit: The American Chemical Society

Finally, here’s the abstract with the information about the nanoscale elements (link to paper follows abstract),

Although gilt silver threads were widely used for decorating historical textiles, their manufacturing techniques have been elusive for centuries. Contemporary written sources give only limited, sometimes ambiguous information, and detailed cross-sectional study of the microscale soft noble metal objects has been hindered by sample preparation. In this work, to give a thorough characterization of historical gilt silver threads, nano- and microscale textural, chemical, and structural data on cross sections, prepared by focused ion beam milling, were collected, using various electron-optical methods (high-resolution scanning electron microscopy (SEM), wavelength-dispersive electron probe microanalysis (EPMA), electron backscattered diffraction (EBSD) combined with energy-dispersive electron probe microanalysis (EDX), and transmission electron microscopy (TEM) combined with EDX), and micro-Raman spectroscopy. The thickness of the gold coating varied between 70–400 nm [emphasis mine]. Data reveal nano- and microscale metallurgy-related, gilding-related and corrosion-related inhomogeneities in the silver base. These inhomogeneities account for the limitations of surface analysis when tracking gilding methods of historical metal threads, and explain why chemical information has to be connected to 3D texture on submicrometre scale. The geometry and chemical composition (lack of mercury, copper) of the gold/silver interface prove that the ancient gilding technology was diffusion bonding. The observed differences in the copper content of the silver base of the different thread types suggest intentional technological choice. Among the examined textiles of different ages (13th–17th centuries) and provenances narrow technological variation has been found.

Here’s a link to and a citation for the paper,

Medieval Gilding Technology of Historical Metal Threads Revealed by Electron Optical and Micro-Raman Spectroscopic Study of Focused Ion Beam-Milled Cross Sections by Tamás G. Weiszburg, Katalin Gherdán, Kitti Ratter, Norbert Zajzon, Zsolt Bendő, György Radnóczi, Ágnes Takács, Tamás Váczi, Gábor Varga and György Szakmány. Anal. Chem., Article ASAP DOI: 10.1021/acs.analchem.7b01917 Publication Date (Web): September 19, 2017

Copyright © 2017 American Chemical Society

This paper is behind a paywall.

One final comment: if you read the abstract, you’ll see how many technologies the researchers needed to use to examine the textiles. How did medieval artisans create nanoscale and microscale gilding when they couldn’t see it? I realize there are now some optical microscopes that can provide a view of the nanoscale but presumably those artisans of the Middle Ages did not have access to that kind of equipment. So, how did they create those textiles with the technology of the day?
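For a rough sense of the scales involved, here’s a quick back-of-the-envelope check of the “hundredth of a hair” comparison. It assumes a typical human hair diameter of roughly 20–100 micrometres, a figure that is not given in the article:

```python
# Back-of-the-envelope check of the scales described above.
# Assumption (not from the article): a human hair is roughly
# 20-100 micrometres in diameter.
for hair_um in (20, 100):
    thread_nm = hair_um * 1000    # a thread about as thick as a hair, in nm
    coating_nm = thread_nm / 100  # gold coating ~ a hundredth of the thread
    print(f"hair {hair_um} um -> thread {thread_nm} nm, coating ~{coating_nm:.0f} nm")
```

A 20–100 µm hair implies a coating of roughly 200–1,000 nm, the same order of magnitude as the 70–400 nm range the study measured.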

Tracks of my tears could power a smartphone?

So far the researchers aren’t trying to power anything with tears, but they have discovered that tears could be used to generate electricity (from an Oct. 2, 2017 news item on phys.org),

A team of Irish scientists has discovered that applying pressure to a protein found in egg whites and tears can generate electricity. The researchers from the Bernal Institute, University of Limerick (UL), Ireland, observed that crystals of lysozyme, a model protein that is abundant in egg whites of birds as well as in the tears, saliva and milk of mammals, can generate electricity when pressed. Their report is published today (October 2) in the journal Applied Physics Letters.

An Oct. 2, 2017 University of Limerick press release (also on EurekAlert), which originated the news item, offers additional detail,

The ability to generate electricity by applying pressure, known as direct piezoelectricity, is a property of materials such as quartz that can convert mechanical energy into electrical energy and vice versa. Such materials are used in a variety of applications ranging from resonators and vibrators in mobile phones to deep ocean sonars and ultrasound imaging. Bone, tendon and wood have long been known to possess piezoelectricity.

“While piezoelectricity is used all around us, the capacity to generate electricity from this particular protein had not been explored. The extent of the piezoelectricity in lysozyme crystals is significant. It is of the same order of magnitude found in quartz. However, because it is a biological material, it is non-toxic so it could have many innovative applications such as electroactive anti-microbial coatings for medical implants,” explained Aimee Stapleton, the lead author and an Irish Research Council EMBARK Postgraduate Fellow in the Department of Physics and Bernal Institute of UL.
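To get a feel for the numbers, the direct piezoelectric effect is linear at this scale: the charge generated equals the piezoelectric charge coefficient times the applied force, Q = d × F. Here’s a minimal sketch that assumes a quartz-like coefficient of about 2 pC/N, matching the order of magnitude the researchers report for lysozyme (the exact figure is in the paper):

```python
# Minimal illustration of the direct piezoelectric effect: Q = d * F.
# The coefficient is an assumption, chosen to be quartz-like (~2 pC/N),
# the order of magnitude reported for lysozyme crystals.
d_coeff = 2e-12   # piezoelectric charge coefficient, in coulombs per newton
force_n = 10.0    # applied force in newtons (roughly a firm finger press)

charge_c = d_coeff * force_n  # generated charge in coulombs
print(f"{force_n} N -> {charge_c * 1e12:.0f} pC")  # prints "10.0 N -> 20 pC"
```

Even a firm press yields only tens of picocoulombs, which hints at why the applications mentioned below are low-power ones, such as energy harvesting and drug-release control, rather than directly charging a phone.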

Crystals of lysozyme are easy to make from natural sources. “The high precision structure of lysozyme crystals has been known since 1965,” said structural biologist at UL and co-author Professor Tewfik Soulimane.
“In fact, it is the second protein structure and the first enzyme structure that was ever solved,” he added, “but we are the first to use these crystals to show the evidence of piezoelectricity”.

According to team leader Professor Tofail Syed of UL’s Department of Physics, “Crystals are the gold-standard for measuring piezoelectricity in non-biological materials. Our team has shown that the same approach can be taken in understanding this effect in biology. This is a new approach as scientists so far have tried to understand piezoelectricity in biology using complex hierarchical structures such as tissues, cells or polypeptides rather than investigating simpler fundamental building blocks”.

The discovery may have wide reaching applications and could lead to further research in the area of energy harvesting and flexible electronics for biomedical devices. Future applications of the discovery may include controlling the release of drugs in the body by using lysozyme as a physiologically mediated pump that scavenges energy from its surroundings. Being naturally biocompatible and piezoelectric, lysozyme may present an alternative to conventional piezoelectric energy harvesters, many of which contain toxic elements such as lead.

Professor Luuk van der Wielen, Director of the Bernal Institute and Bernal Professor of Biosystems Engineering and Design, expressed his delight at this breakthrough by UL scientists.

“The €109-million Bernal Institute has the ambition to impact the world on the basis of top science in an increasingly international context. The impact of this discovery in the field of biological piezoelectricity will be huge and Bernal scientists are leading from the front the progress in this field,” he said.

Here’s a link to and a citation for the paper,

The direct piezoelectric effect in the globular protein lysozyme by A. Stapleton, M. R. Noor, J. Sweeney, V. Casey, A. L. Kholkin, C. Silien, A. A. Gandhi, T. Soulimane, and S. A. M. Tofail. Appl. Phys. Lett. 111, 142902 (2017); doi: http://dx.doi.org/10.1063/1.4997446

This paper is open access.

As for Tracks of My Tears,