
Get better protection from a sunscreen with a ‘flamenco dancing’ molecule?

Caption: Illustrative image for the University of Warwick research on how a ‘flamenco dancing’ molecule could lead to better-protecting sunscreen. Credit: Created by Dr. Michael Horbury

There are high hopes (more about why later) for a plant-based ‘flamenco dancing molecule’ and its inclusion in sunscreens as described in an October 18, 2019 University of Warwick press release (also on EurekAlert),

A molecule that protects plants from overexposure to harmful sunlight thanks to its flamenco-style twist could form the basis for a new longer-lasting sunscreen, chemists at the University of Warwick have found, in collaboration with colleagues in France and Spain. Research on the green molecule by the scientists has revealed that it absorbs ultraviolet light and then disperses it in a ‘flamenco-style’ dance, making it ideal for use as a UV filter in sunscreens.

The team of scientists report today, Friday 18th October 2019, in the journal Nature Communications that, as well as being plant-inspired, this molecule is also among a small number of suitable substances that are effective in absorbing light in the Ultraviolet A (UVA) region of wavelengths. It opens up the possibility of developing a naturally-derived and eco-friendly sunscreen that protects against the full range of harmful wavelengths of light from the sun.

The UV filters in a sunscreen are the ingredients that predominantly provide the protection from the sun’s rays. In addition to UV filters, sunscreens will typically also include:

Emollients, used for moisturising and lubricating the skin
Thickening agents
Emulsifiers to bind all the ingredients
Water
Other components that improve aesthetics, water resistance, etc.

The researchers tested a molecule called diethyl sinapate, a close mimic to a molecule that is commonly found in the leaves of plants, which is responsible for protecting them from overexposure to UV light while they absorb visible light for photosynthesis.

They first exposed the molecule to a number of different solvents to determine whether that had any impact on its (principally) light absorbing behaviour. They then deposited a sample of the molecule on an industry standard human skin mimic (VITRO-CORNEUM®) where it was irradiated with different wavelengths of UV light. They used the state-of-the-art laser facilities within the Warwick Centre for Ultrafast Spectroscopy to take images of the molecule at extremely high speeds, to observe what happens to the light’s energy when it’s absorbed in the molecule in the very early stages (millionths of millionths of a second). Other techniques were also used to establish longer term (many hours) properties of diethyl sinapate, such as endocrine disruption activity and antioxidant potential.

Professor Vasilios Stavros from the University of Warwick, Department of Chemistry, who was part of the research team, explains: “A really good sunscreen absorbs light and converts it to harmless heat. A bad sunscreen is one that absorbs light and then, for example, breaks down potentially inducing other chemistry that you don’t want. Diethyl sinapate generates lots of heat, and that’s really crucial.”

When irradiated the molecule absorbs light and goes into an excited state but that energy then has to be disposed of somehow. The team of researchers observed that it does a kind of molecular ‘dance’ a mere 10 picoseconds (ten millionths of a millionth of a second) long: a twist in a similar fashion to the filigranas and floreos hand movements of flamenco dancers. That causes it to come back to its original ground state and convert that energy into vibrational energy, or heat.

It is this ‘flamenco dance’ that gives the molecule its long-lasting qualities. When the scientists bombarded the molecule with UVA light they found that it degraded only 3% over two hours, compared to the industry requirement of 30%.
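
To put that figure in rate terms, here is a rough back-of-envelope estimate; the assumption of simple first-order photodegradation kinetics is mine, not the researchers’. Losing 3% of the molecule over two hours of UVA exposure corresponds to an effective decay constant of

\[
k \approx -\frac{\ln(1 - 0.03)}{2\ \text{h}} \approx 0.015\ \text{h}^{-1},
\]

roughly an order of magnitude slower than a filter that just meets the 30% threshold (k ≈ 0.18 h⁻¹).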

Dr Michael Horbury, who was a Postgraduate Research Fellow at the University of Warwick when he undertook this research (and is now at the University of Leeds), adds: “We have shown that by studying the molecular dance on such a short time-scale, the information that you gain can have tremendous repercussions on how you design future sunscreens.”
Emily Holt, a PhD student in the Department of Chemistry at the University of Warwick who was part of the research team, said: “The next step would be to test it on human skin, then to mix it with other ingredients that you find in a sunscreen to see how those affect its characteristics.”

Professor Florent Allais and Dr Louis Mouterde, URD Agro-Biotechnologies Industrielles at AgroParisTech (Pomacle, France) commented: “What we have developed together is a molecule based upon a UV photoprotective molecule found in the surface of leaves on a plant and refunctionalised it using greener synthetic procedures. Indeed, this molecule has excellent long-term properties while exhibiting low endocrine disruption and valuable antioxidant properties.”

Professor Laurent Blasco, Global Technical Manager (Skin Essentials) at Lubrizol and Honorary Professor at the University of Warwick commented: “In sunscreen formulations at the moment there is a lack of broad-spectrum protection from a single UV filter. Our collaboration has gone some way towards developing a next generation broad-spectrum UV filter inspired by nature. Our collaboration has also highlighted the importance of academia and industry working together towards a common goal.”

Professor Vasilios Stavros added, “Amidst escalating concerns about their impact on human toxicity (e.g. endocrine disruption) and ecotoxicity (e.g. coral bleaching), developing new UV filters is essential. We have demonstrated that a highly attractive avenue is ‘nature-inspired’ UV filters, which provide a front-line defence against skin cancer and premature skin aging.”

Here’s a link to and a citation for the paper,

Towards symmetry driven and nature inspired UV filter design by Michael D. Horbury, Emily L. Holt, Louis M. M. Mouterde, Patrick Balaguer, Juan Cebrián, Laurent Blasco, Florent Allais & Vasilios G. Stavros. Nature Communications volume 10, Article number: 4748 (2019) DOI: https://doi.org/10.1038/s41467-019-12719-z

This paper is open access.

Why the high hopes?

Briefly (the long story stretches over 10 years), the most recommended sunscreens today (2020) are ‘mineral-based’. This is painfully amusing because civil society groups (activists) such as Friends of the Earth (in particular the Australian chapter under Georgia Miller’s leadership) and Canada’s own ETC Group had campaigned against these same sunscreens when they were billed as being based on metal oxide nanoparticles such as zinc oxide and/or titanium dioxide. The ETC Group under Pat Roy Mooney’s leadership didn’t press the campaign after an initial push. As for Australia and Friends of the Earth, their anti-metal-oxide-nanoparticle sunscreen campaign didn’t work out well, as I noted in a February 9, 2012 posting and in a follow-up October 31, 2012 posting.

The only civil society group to give approval (very reluctantly) was the Environmental Working Group (EWG), as I noted in a July 9, 2009 posting. They had concerns about the fact that these ingredients are metallic, but after a thorough review of the then-available research, EWG gave the sunscreens a passing grade and noted, in their report, that they had more concerns about the use of oxybenzone in sunscreens. That latter concern has since been flagged by others (e.g., the state of Hawai’i), as noted in my July 6, 2018 posting.

So, rebranding metallic oxides as minerals has allowed the various civil society groups to support the very same sunscreens many of them were advocating against.

In the meantime, scientists continue work on developing plant-based sunscreens as an improvement to the ‘mineral-based’ sunscreens used now.

Connecting biological and artificial neurons (in UK, Switzerland, & Italy) over the web

Caption: The virtual lab connecting Southampton, Zurich and Padova. Credit: University of Southampton

A February 26, 2020 University of Southampton press release (also on EurekAlert) describes this work,

Research on novel nanoelectronics devices led by the University of Southampton enabled brain neurons and artificial neurons to communicate with each other. This study has for the first time shown how three key emerging technologies can work together: brain-computer interfaces, artificial neural networks and advanced memory technologies (also known as memristors). The discovery opens the door to further significant developments in neural and artificial intelligence research.

Brain functions are made possible by circuits of spiking neurons, connected together by microscopic, but highly complex links called ‘synapses’. In this new study, published in the scientific journal Nature Scientific Reports, the scientists created a hybrid neural network where biological and artificial neurons in different parts of the world were able to communicate with each other over the internet through a hub of artificial synapses made using cutting-edge nanotechnology. This is the first time the three components have come together in a unified network.

During the study, researchers based at the University of Padova in Italy cultivated rat neurons in their laboratory, whilst partners from the University of Zurich and ETH Zurich created artificial neurons on Silicon microchips. The virtual laboratory was brought together via an elaborate setup controlling nanoelectronic synapses developed at the University of Southampton. These synaptic devices are known as memristors.

The Southampton based researchers captured spiking events being sent over the internet from the biological neurons in Italy and then distributed them to the memristive synapses. Responses were then sent onward to the artificial neurons in Zurich also in the form of spiking activity. The process simultaneously works in reverse too; from Zurich to Padova. Thus, artificial and biological neurons were able to communicate bidirectionally and in real time.

Themis Prodromakis, Professor of Nanotechnology and Director of the Centre for Electronics Frontiers at the University of Southampton said “One of the biggest challenges in conducting research of this kind and at this level has been integrating such distinct cutting edge technologies and specialist expertise that are not typically found under one roof. By creating a virtual lab we have been able to achieve this.”

The researchers now anticipate that their approach will ignite interest from a range of scientific disciplines and accelerate the pace of innovation and scientific advancement in the field of neural interfaces research. In particular, the ability to seamlessly connect disparate technologies across the globe is a step towards the democratisation of these technologies, removing a significant barrier to collaboration.

Professor Prodromakis added “We are very excited with this new development. On one side it sets the basis for a novel scenario that was never encountered during natural evolution, where biological and artificial neurons are linked together and communicate across global networks; laying the foundations for the Internet of Neuro-electronics. On the other hand, it brings new prospects to neuroprosthetic technologies, paving the way towards research into replacing dysfunctional parts of the brain with AI [artificial intelligence] chips.”

I’m fascinated by this work and, after taking a look at the paper, I have to say it’s surprisingly accessible. In other words, I think I get the general picture. For example (from the Introduction to the paper; citation and link follow further down),

… To emulate plasticity, the memristor MR1 is operated as a two-terminal device through a control system that receives pre- and post-synaptic depolarisations from one silicon neuron (ANpre) and one biological neuron (BN), respectively. …

If I understand this properly, they’ve integrated a biological neuron and an artificial neuron in a single system across three countries.
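
To make the idea concrete for myself, here is a toy Python sketch of the general scheme: spike events from a ‘biological’ side are forwarded through a memristor-like synapse whose conductance is nudged by the relative timing of pre- and post-synaptic spikes. Every name, number and update rule here is my own illustrative assumption; this is not the Southampton control system or the paper’s hardware, just a minimal software caricature of a plastic synapse bridging two neurons.

```python
import random

class MemristiveSynapse:
    """Toy two-terminal 'memristor': a conductance nudged by spike timing."""

    def __init__(self, conductance=0.5, learn_rate=0.05):
        self.g = conductance      # normalized conductance, between 0 and 1
        self.learn_rate = learn_rate
        self.last_pre = None      # time of last pre-synaptic (biological) spike
        self.last_post = None     # time of last post-synaptic (artificial) spike

    def relay(self, t_pre):
        """Receive a pre-synaptic spike; forward it with probability ~ conductance."""
        self.last_pre = t_pre
        self._update()
        return random.random() < self.g   # True => spike passed on to the artificial neuron

    def feedback(self, t_post):
        """Record a post-synaptic spike coming back from the artificial neuron."""
        self.last_post = t_post
        self._update()

    def _update(self):
        """STDP-like rule: pre-before-post strengthens, post-before-pre weakens."""
        if self.last_pre is None or self.last_post is None:
            return
        if self.last_post > self.last_pre:
            self.g = min(1.0, self.g + self.learn_rate)
        elif self.last_post < self.last_pre:
            self.g = max(0.0, self.g - self.learn_rate)

# Minimal 'virtual lab' loop: biological spikes arrive, get relayed,
# and the artificial neuron's responses feed back to shape the synapse.
syn = MemristiveSynapse()
for t in range(0, 100, 10):           # pretend spike times, in milliseconds
    if syn.relay(t_pre=t):
        syn.feedback(t_post=t + 2)    # artificial neuron fires shortly afterwards
print(f"final conductance: {syn.g:.2f}")
```

In the real experiment, of course, the spikes were genuine electrophysiological and silicon-neuron events streamed over the internet, and the plasticity lived in physical memristor devices rather than in a floating-point variable.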

For those who care to venture forth, here’s a link and a citation for the paper,

Memristive synapses connect brain and silicon spiking neurons by Alexantrou Serb, Andrea Corna, Richard George, Ali Khiat, Federico Rocchi, Marco Reato, Marta Maschietto, Christian Mayr, Giacomo Indiveri, Stefano Vassanelli & Themistoklis Prodromakis. Scientific Reports volume 10, Article number: 2590 (2020) DOI: https://doi.org/10.1038/s41598-020-58831-9 Published 25 February 2020

The paper is open access.

Uncanny Valley: Being Human in the Age of AI (artificial intelligence) at the de Young museum (San Francisco, US) February 22 – October 25, 2020

So we’re still stuck in 20th century concepts about artificial intelligence (AI), eh? Sean Captain’s February 21, 2020 article (for Fast Company) about the new AI exhibit in San Francisco suggests that artists can help us revise our ideas (Note: Links have been removed),

Though we’re well into the age of machine learning, popular culture is stuck with a 20th century notion of artificial intelligence. While algorithms are shaping our lives in real ways—playing on our desires, insecurities, and suspicions in social media, for instance—Hollywood is still feeding us clichéd images of sexy, deadly robots in shows like Westworld and Star Trek Picard.

The old-school humanlike sentient robot “is an important trope that has defined the visual vocabulary around this human-machine relationship for a very long period of time,” says Claudia Schmuckli, curator of contemporary art and programming at the Fine Arts Museums of San Francisco. It’s also a naïve and outdated metaphor, one she is challenging with a new exhibition at San Francisco’s de Young Museum, called Uncanny Valley, that opens on February 22 [2020].

The show’s name [Uncanny Valley: Being Human in the Age of AI] is a kind of double entendre referencing both the dated and emerging conceptions of AI. Coined in the 1970s, the term “uncanny valley” describes the rise and then sudden drop off of empathy we feel toward a machine as its resemblance to a human increases. Putting a set of cartoony eyes on a robot may make it endearing. But fitting it with anatomically accurate eyes, lips, and facial gestures gets creepy. As the gap between the synthetic and organic narrows, the inability to completely close that gap becomes all the more unsettling.

But the artists in this exhibit are also looking to another valley—Silicon Valley, and the uncanny nature of the real AI the region is building. “One of the positions of this exhibition is that it may be time to rethink the coordinates of the Uncanny Valley and propose a different visual vocabulary,” says Schmuckli.

Artist Stephanie Dinkins faces off with robot Bina48, a bot on display at the de Young Museum’s Uncanny Valley show. [Photo: courtesy of the artist; courtesy of the Fine Arts Museums of San Francisco]

From Captain’s February 21, 2020 article,

… the resemblance to humans is only synthetic-skin deep. Bina48 can string together a long series of sentences in response to provocative questions from Dinkins, such as, “Do you know racism?” But the answers are sometimes barely intelligible, or at least lack the depth and nuance of a conversation with a real human. The robot’s jerky attempts at humanlike motion also stand in stark contrast to Dinkins’s calm bearing and fluid movement. Advanced as she is by today’s standards, Bina48 is tragically far from the sci-fi concept of artificial life. Her glaring shortcomings hammer home why the humanoid metaphor is not the right framework for understanding at least today’s level of artificial intelligence.

For anybody who has more curiosity about the ‘uncanny valley’, there’s this Wikipedia entry.

For more details about the ‘Uncanny Valley: Being Human in the Age of AI’ exhibition, there’s this September 26, 2019 de Young museum news release,

What are the invisible mechanisms of current forms of artificial intelligence (AI)? How is AI impacting our personal lives and socioeconomic spheres? How do we define intelligence? How do we envision the future of humanity?

SAN FRANCISCO (September 26, 2019) — As technological innovation continues to shape our identities and societies, the question of what it means to be, or remain human has become the subject of fervent debate. Taking advantage of the de Young museum’s proximity to Silicon Valley, Uncanny Valley: Being Human in the Age of AI arrives as the first major exhibition in the US to explore the relationship between humans and intelligent machines through an artistic lens. Organized by the Fine Arts Museums of San Francisco, with San Francisco as its sole venue, Uncanny Valley: Being Human in the Age of AI will be on view from February 22 to October 25, 2020.

“Technology is changing our world, with artificial intelligence both a new frontier of possibility but also a development fraught with anxiety,” says Thomas P. Campbell, Director and CEO of the Fine Arts Museums of San Francisco. “Uncanny Valley: Being Human in the Age of AI brings artistic exploration of this tension to the ground zero of emerging technology, raising challenging questions about the future interface of human and machine.”

The exhibition, which extends through the first floor of the de Young and into the museum’s sculpture garden, explores the current juncture through philosophical, political, and poetic questions and problems raised by AI. New and recent works by an intergenerational, international group of artists and activist collectives—including Zach Blas, Ian Cheng, Simon Denny, Stephanie Dinkins, Forensic Architecture, Lynn Hershman Leeson, Pierre Huyghe, Christopher Kulendran Thomas in collaboration with Annika Kuhlmann, Agnieszka Kurant, Lawrence Lek, Trevor Paglen, Hito Steyerl, Martine Syms, and the Zairja Collective—will be presented.

The Uncanny Valley

In 1970 Japanese engineer Masahiro Mori introduced the concept of the “uncanny valley” as a terrain of existential uncertainty that humans experience when confronted with autonomous machines that mimic their physical and mental properties. An enduring metaphor for the uneasy relationship between human beings and lifelike robots or thinking machines, the uncanny valley and its edges have captured the popular imagination ever since. Over time, the rapid growth and affordability of computers, cloud infrastructure, online search engines, and data sets have fueled developments in machine learning that fundamentally alter our modes of existence, giving rise to a newly expanded uncanny valley.

“As our lives are increasingly organized and shaped by algorithms that track, collect, evaluate, and monetize our data, the uncanny valley has grown to encompass the invisible mechanisms of behavioral engineering and automation,” says Claudia Schmuckli, Curator in Charge of Contemporary Art and Programming at the Fine Arts Museums of San Francisco. “By paying close attention to the imminent and nuanced realities of AI’s possibilities and pitfalls, the artists in the exhibition seek to thicken the discourse around AI. Although fables like HBO’s sci-fi drama Westworld, or Spike Jonze’s feature film Her still populate the collective imagination with dystopian visions of a mechanized future, the artists in this exhibition treat such fictions as relics of a humanist tradition that has little relevance today.”

In Detail

Ian Cheng’s digitally simulated AI creature BOB (Bag of Beliefs) reflects on the interdependency of carbon and silicon forms of intelligence. An algorithmic Tamagotchi, it is capable of evolution, but its growth, behavior, and personality are molded by online interaction with visitors who assume collective responsibility for its wellbeing.

In A.A.I. (artificial artificial intelligence), an installation of multiple termite mounds of colored sand, gold, glitter and crystals, Agnieszka Kurant offers a vibrant critique of new AI economies, with their online crowdsourcing marketplace platforms employing invisible armies of human labor at sub-minimum wages.

Simon Denny’s Amazon worker cage patent drawing as virtual King Island Brown Thornbill cage (US 9,280,157 B2: “System and method for transporting personnel within an active workspace”, 2016) (2019) also examines the intersection of labor, resources, and automation. He presents 3-D prints and a cage-like sculpture based on an unrealized machine patent filed by Amazon to contain human workers. Inside the cage an augmented reality application triggers the appearance of a King Island Brown Thornbill, a bird on the verge of extinction, casting human labor as the proverbial canary in the mine. The humanitarian and ecological costs of today’s data economy also inform a group of works by the Zairja Collective that reflect on the extractive dynamics of algorithmic data mining.

Hito Steyerl addresses the political risks of introducing machine learning into the social sphere. Her installation The City of Broken Windows presents a collision between commercial applications of AI in urban planning along with communal and artistic acts of resistance against neighborhood tipping: one of its short films depicts a group of technicians purposefully smashing windows to teach an algorithm how to recognize the sound of breaking glass, and another follows a group of activists through a Camden, NJ neighborhood as they work to keep decay at bay by replacing broken windows in abandoned homes with paintings. 

Addressing the perpetuation of societal biases and discrimination within AI, Trevor Paglen’s They Took the Faces from the Accused and the Dead…(SD18) presents a large gridded installation of more than three thousand mugshots from the archives of the American National Standards Institute. The institute’s collections of such images were used to train early facial-recognition technologies — without the consent of those pictured. Lynn Hershman Leeson’s new installation Shadow Stalker critiques the problematic reliance on algorithmic systems, such as the military forecasting tool Predpol now widely used for policing, that categorize individuals into preexisting and often false “embodied metrics.”

Stephanie Dinkins extends the inquiry into how value systems are built into AI and the construction of identity in Conversations with Bina48, examining the social robot’s (and by extension our society’s) coding of technology, race, gender and social equity. In the same territory, Martine Syms posits AI as a “shamespace” for misrepresentation. For Mythiccbeing she has created an avatar of herself that viewers can interact with through text messaging. But unlike service agents such as Siri and Alexa, who readily respond to questions and demands, Syms’s Teeny is a contrarious interlocutor, turning each interaction into an opportunity to voice personal observations and frustrations about racial inequality and social injustice.

Countering the abusive potential of machine learning, Forensic Architecture pioneers an application to the pursuit of social justice. Their proposition of a Model Zoo marks the beginnings of a new research tool for civil society built of military vehicles, missile fragments, and bomb clouds—evidence of human-rights violations by states and militaries around the world. Christopher Kulendran Thomas’s video Being Human, created in collaboration with Annika Kuhlmann, poses the philosophical question of what it means to be human when machines are able to synthesize human understanding ever more convincingly. Set  in Sri Lanka, it employs AI-generated characters of singer Taylor Swift and artist Oscar Murillo to reflect on issues of individual authenticity, collective sovereignty, and the future of human rights.

Lawrence Lek’s sci-fi-inflected film Aidol, which explores the relationship between algorithmic automation and human creativity, projects this question into the future. It transports the viewer into the computer-generated “sinofuturist” world of the 2065 eSports Olympics: when the popular singer Diva enlists the super-intelligent Geomancer to help her stage her artistic comeback during the game’s halftime show, she unleashes an existential and philosophical battle that explodes the divide between humans and machines.

The Doors, a newly commissioned installation by Zach Blas, by contrast shines the spotlight back onto the present and on the culture and ethos of Silicon Valley — the ground zero for the development of AI. Inspired by the ubiquity of enclosed gardens on tech campuses, he has created an artificial garden framed by a six-channel video projected on glass panes that convey a sense of algorithmic psychedelia aiming to open new “doors of perception.” While luring visitors into AI’s promises, it also asks what might become possible when such glass doors begin to crack. 

Unveiled in late spring, Pierre Huyghe‘s Exomind (Deep Water), a sculpture of a crouched female nude with a live beehive as its head, will be nestled within the museum’s garden. With its buzzing colony pollinating the surrounding flora, it offers a poignant metaphor for the modeling of neural networks on the biological brain and an understanding of intelligence as grounded in natural forms and processes.

The Uncanny Valley: Being Human in the Age of AI event page features a link to something unexpected (scroll down about 40% of the way), a Statement on Eyal Weizman of Forensic Architecture,

On Thursday, February 13 [2020], Eyal Weizman of Forensic Architecture had his travel authorization to the United States revoked due to an “algorithm” that identified him as a security threat.

He was meant to be in the United States promoting multiple exhibitions including Uncanny Valley: Being Human in the Age of AI, opening on February 22 [2020] at the de Young museum in San Francisco.

Since 2018, Forensic Architecture has used machine learning / AI to aid in humanitarian work, using synthetic images—photorealistic digital renderings based around 3-D models—to train algorithmic classifiers to identify tear gas munitions and chemical bombs deployed against protesters worldwide, including in Hong Kong, Chile, the US, Venezuela, and Sudan.

Their project, Model Zoo, on view in Uncanny Valley represents a growing collection of munitions and weapons used in conflict today and the algorithmic models developed to identify them. It shows a collection of models being used to track and hold accountable human rights violators around the world. The piece joins work by 14 contemporary artists reflecting on the philosophical and political consequences of the application of AI into the social sphere.

We are deeply saddened that Weizman will not be allowed to travel to celebrate the opening of the exhibition. We stand with him and Forensic Architecture’s partner communities who continue to resist violent states and corporate practices, and who are increasingly exposed to the regime of “security algorithms.”

—Claudia Schmuckli, Curator-in-Charge, Contemporary Art & Programming, & Thomas P. Campbell, Director and CEO, Fine Arts Museums of San Francisco

There is a February 20, 2020 article (for Fast Company) by Eyal Weizman chronicling his experience with being denied entry by an algorithm. Do read it in its entirety (the Fast Company piece is itself an excerpt from Weizman’s essay) if you have the time; if not, here’s his description of how he tried to gain entry after being denied the first time,

The following day I went to the U.S. Embassy in London to apply for a visa. In my interview, the officer informed me that my authorization to travel had been revoked because the “algorithm” had identified a security threat. He said he did not know what had triggered the algorithm but suggested that it could be something I was involved in, people I am or was in contact with, places to which I had traveled (had I recently been in Syria, Iran, Iraq, Yemen, or Somalia or met their nationals?), hotels at which I stayed, or a certain pattern of relations among these things. I was asked to supply the Embassy with additional information, including 15 years of travel history, in particular where I had gone and who had paid for it. The officer said that Homeland Security’s investigators could assess my case more promptly if I supplied the names of anyone in my network whom I believed might have triggered the algorithm. I declined to provide this information.

I hope the exhibition is successful; it has certainly experienced a thought-provoking start.

Finally, I have often featured postings that discuss the ‘uncanny valley’. To find those postings, just use that phrase in the blog search engine. You might also want to search ‘Hiroshi Ishiguro’, a Japanese scientist and roboticist who specializes in humanoid robots.

A Café Scientifique Vancouver (Canada) February 25, 2020 talk, ‘Invasive Species of the Lower Mainland 101’

From a February 22, 2020 Café Scientifique announcement (received via email),

Our next café will happen on Tuesday, February 25th, 2020 at 7:30pm in the back room at Yagger’s Downtown (433 W Pender). Our speaker for the evening will be marine biologist Dr. Nick Wong who is associated with the conservation of invasive species [sic].

TITLE OF PRESENTATION: Invasive Species of the Lower Mainland 101

BRIEF ABSTRACT OF WORK: The Invasive Species Council of BC (ISCBC) is a collaborative-based organization committed to reducing the spread and impacts of non-native species within BC.

My role focuses on educating and informing a diverse range of audiences on current and “watchlist” invasive species in British Columbia.

Nick will give details about the key invasive species in the Lower Mainland, describe some of the ISCBC programs and share things you can do to preserve BC’s amazing biodiversity.

BIO: Nick is the Research and Projects Coordinator with the Invasive Species Council of BC. He received his BSc from Western University [Ontario] and an MSc and PhD in Marine Ecology from the University of Auckland. Nick is passionate about teaching and creating engaging opportunities for people to learn and understand the role they can play in the prevention and mitigation of invasive species.

If the annual reports page is to be believed, the ISCBC has been around since 2006. Nope: I just looked at the 2006 report and the introduction states they were just starting their fourth year of existence at that time, which would put the council’s beginnings around 2003. Here’s the ISCBC website.

One final comment, it seems like there might have been a lost opportunity. The ISCBC would have been an interesting addition as a sponsor or partner to the Invasive Systems Festival organized by the Curiosity Collider folks. The festival was mentioned in my October 14, 2019 posting (scroll down about 60% of the way).

Nano 2020: a US education initiative

The US Department of Agriculture has a very interesting funding opportunity, Higher Education Challenge (HEC) Grants Program, as evidenced by the Nano 2020 virtual reality (VR) classroom initiative. Before launching into the specifics of the Nano 2020 project, here’s a description of the funding program,

Projects supported by the Higher Education Challenge Grants Program will: (1) address a state, regional, national, or international educational need; (2) involve a creative or non-traditional approach toward addressing that need that can serve as a model to others; (3) encourage and facilitate better working relationships in the university science and education community, as well as between universities and the private sector, to enhance program quality and supplement available resources; and (4) result in benefits that will likely transcend the project duration and USDA support.

A February 3, 2020 University of Arizona news release by Stacy Pigott (also on EurekAlert but published February 7, 2020) announced a VR classroom where students will be able to interact with nanoscale data gained from agricultural sciences and the life sciences,

Sometimes the smallest of things lead to the biggest ideas. Case in point: Nano 2020, a University of Arizona-led initiative to develop curriculum and technology focused on educating students in the rapidly expanding field of nanotechnology.

The five-year, multi-university project recently met its goal of creating globally relevant and implementable curricula and instructional technologies, to include a virtual reality classroom, that enhance the capacity of educators to teach students about innovative nanotechnology applications in agriculture and the life sciences.

Here’s a video from the University of Arizona’s project proponents which illustrates their classroom,

For those who prefer text or like to have it as a backup, here’s the rest of the news release explaining the project,

Visualizing What is Too Small to be Seen

Nanotechnology involves particles and devices developed and used at the scale of 100 nanometers or less – to put that in perspective, the average diameter of a human hair is 80,000 nanometers. The extremely small scale can make comprehension challenging when it comes to learning about things that cannot be seen with the naked eye.

That’s where the Nano 2020 virtual reality classroom comes in. In a custom-developed VR classroom complete with a laboratory, nanoscale objects come to life for students thanks to the power of science data visualization.

Within the VR environment, students can interact with objects of nanoscale proportions – pick them up, turn them around and examine every nuance of things that would otherwise be too small to see. Students can also interact with their instructor or their peers. The Nano 2020 classroom allows for multi-player functionality, giving educators and students the opportunity to connect in a VR laboratory in real time, no matter where they are in the world.

“The virtual reality technology brings to life this complex content in a way that is oddly simple,” said Matt Mars, associate professor of agricultural leadership and innovation education in the College of Agriculture and Life Sciences and co-director of the Nano 2020 grant. “Imagine if you can take a student and they see a nanometer from a distance, and then they’re able to approach it and see how small it is by actually being in it. It’s mind-blowing, but in a way that students will be like, ‘Oh wow, that is really cool!'”

The technology was developed by Tech Core, a group of student programmers and developers led by director Ash Black in the Eller College of Management.

“The thing that I was the most fascinated with from the beginning was playing with a sense of scale,” said Black, a lifelong technologist and mentor-in-residence at the McGuire Center for Entrepreneurship. “What really intrigued me about virtual reality is that it is a tool where scale is elastic – you can dial it up and dial it down. Obviously, with nanotechnology, you’re dealing with very, very small things that nobody has seen yet, so it seemed like a perfect use of virtual reality.”

Black and Tech Core students including Robert Johnson, Hazza Alkaabi, Matthew Romero, Devon Oberdan, Brandon Erickson and Tim Lukau turned science data into an object, the object into an image, and the image into a 3D rendering that is functional in the VR environment they built.

“I think that being able to interact with objects of nanoscale data in this environment will result in a lot of light bulbs going off in the students’ minds. I think they’ll get it,” Black said. “To be able to experience something that is abstract – like, what does a carbon atom look like – well, if you can actually look at it, that’s suddenly a whole lot of context.”

The VR classroom complements the Nano 2020 curriculum, which globally expands the opportunities for nanotechnology education within the fields of agriculture and the life sciences.

Teaching the Workforce of the Future

“There have been great advances to the use of nanotechnology in the health sciences, but many more opportunities for innovation in this area still exist in the agriculture fields. The idea is to be able to advance these opportunities for innovation by providing some educational tools,” said Randy Burd, who was a nutritional sciences professor at the University of Arizona when he started the Nano 2020 project with funding from a National Institute of Food and Agriculture Higher Education Challenge grant through the United States Department of Agriculture. “It not only will give students the basics of the understanding of the applications, but will give them the innovative thought processes to think of new creations. That’s the real key.”


The goal of the Nano 2020 team, which includes faculty from the University of Arizona, Northern Arizona University and Johns Hopkins University, was to create an online suite of undergraduate courses that was not university-specific, but could be accessed and added to by educators to reach students around the world.

To that end, the team built modular courses in nanotechnology subjects such as glycobiology, optical microscopy and histology, nanomicroscopy techniques, nutritional genomics, applications of magnetic nanotechnology, and design, innovation, and entrepreneurship, to name a few. An online library will be created to facilitate the ongoing expansion of the open-source curricula, which will be disseminated through novel technologies such as the virtual reality classroom.

“It isn’t practical to think that other universities and colleges are just going to be able to launch new courses, because they still need people to teach those courses,” Mars said. “So we created a robust and flexible set of module-based course packages that include exercises, lectures, videos, power points, tools. Instructors will be able to pull out components and integrate them into what already exists to continue to move toward a more comprehensive offering in nanotechnology education.”

According to Mars, the highly adaptable nature of the curriculum and the ability to deliver it in various ways were key components of the Nano 2020 project.

“We approach the project with a strong entrepreneurial mindset and heavy emphasis on innovation. We wanted it to be broadly defined and flexible in structure, so that other institutions access and model the curricula, see its foundation, and adapt that to what their needs were to begin to disseminate the notion of nanotechnology as an underdeveloped but really important field within the larger landscape of agriculture and life sciences,” Mars said. “We wanted to also provide an overlay to the scientific and technological components that would be about adoption in human application, and we approached that through an innovation and entrepreneurial leadership lens.”

Portions of the Nano 2020 curriculum are currently being offered as electives in a certificate program through the Department of Agriculture Education, Technology and Innovation at the University of Arizona. As it becomes more widely disseminated through the higher education community at large, researchers expect the curriculum and VR classroom technology to transcend the boundaries of discipline, institution and geography.

“An online open platform will exist where people can download components and courses, and all of it is framed by the technology, so that these experiences and research can be shared over this virtual reality component,” Burd said. “It’s technologically distinct from what exists now.”

“The idea is that it’s not just curriculum, but it’s the delivery of that curriculum, and the delivery of that curriculum in various ways,” Mars said. “There’s a relatability that comes with the virtual reality that I think is really cool. It allows students to relate to something as abstract as a nanometer, and that is what is really exciting.”

As best I can determine, this VR Nano 2020 classroom is not yet ready for a wide release and, for now, is being offered exclusively at the University of Arizona.

Improving batteries with cellulosic nanomaterials

This is a cellulose nanocrystal (CNC) story and, in this case, the CNC is derived from trees as opposed to banana skins or carrots or … A February 19, 2020 news item on Nanowerk announces CNC research from Northeastern University (Massachusetts, US),

Nature isn’t always generous with its secrets. That’s why some researchers look into unusual places for solutions to our toughest challenges, from powerful antibiotics hiding in the guts of tiny worms, to swift robots inspired by bats.

Now, Northeastern researchers have taken to the trees to look for ways to make new sustainable materials from abundant natural resources—specifically, within the chemical structure of microfibers that make up wood.

A team led by Hongli (Julie) Zhu, an assistant professor of mechanical and industrial engineering at Northeastern, is using unique nanomaterials derived from cellulose to improve the large and expensive kind of batteries needed to store renewable energy harnessed from sources such as sunlight and the wind.

A February 18, 2020 Northeastern University news release by Roberto Molar Candanosa, which originated the news item, provides more detail (Note: Links have been removed),

Cellulose, the most abundant natural polymer on Earth, is also the most important structural component of plants. It contains important molecular structures to improve batteries, reduce plastic pollution, and power the sort of electrical grids that could support entire communities with renewable energy, Zhu says.  

“We try to use polymers from wood, from bark, from seeds, from flowers, bacteria, green tea—from these kinds of plants to replace plastic,”  Zhu says.

One of the main challenges in storing energy from the sun, wind, and other types of renewables is that variation in factors such as the weather leads to inconsistent sources of power. 

That’s where batteries with large capacity come in. But storing the large amounts of energy that sunlight and the wind are able to provide requires a special kind of device.

The most advanced batteries to do that are called flow batteries, and are made with vanadium ions dissolved in acid in two separate tanks—one with a substance of negatively charged ions, and one with positive ones. The two solutions are continuously pumped from the tank into a cell, which functions like an engine for the battery. 

These substances are always separated by a special membrane that ensures that they exchange positive hydrogen ions without flowing into each other. That selective exchange of ions is the basis for the ability of the battery to charge and discharge energy. 

Flow batteries are ideal devices in which to store solar and wind energy because they can be tweaked to increase the amount of energy stored without compromising the amount of energy that can be generated. The bigger the tanks, the more energy the battery can store from non-polluting and practically inexhaustible resources.
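
As a rough back-of-envelope illustration (the numbers are mine, chosen as typical values, not Northeastern’s): for a single-electron vanadium couple the charge stored in a tank scales with electrolyte concentration and volume, so at a fixed cell voltage the stored energy scales directly with tank size,

\[
\frac{E}{V_{\text{tank}}} \approx F \, c \, U_{\text{cell}} \approx \frac{96{,}485 \times 1.6 \times 1.26}{3600} \approx 54\ \text{Wh per litre of electrolyte},
\]

using c ≈ 1.6 mol/L and U ≈ 1.26 V. Practical systems come in lower once both tanks and incomplete charge swings are counted, but the scaling is the point: double the tanks and you double the stored energy without touching the power-producing cell stack.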

But manufacturing them requires several moving pieces of hardware. As the membrane separating the two flowing substances decays, it can cause the vanadium ions from the solution to mix. That crossover reduces the stability of a battery, along with its capacity to store energy.

Zhu says the membrane’s limited efficiency and high cost are the main factors keeping flow batteries from being widely used in large-scale grids.

In a recent paper, Zhu reported that a new membrane made with cellulose nanocrystals demonstrates superior efficiency compared to other membranes used commonly in the market. The team tested different membranes made from cellulose nanocrystals to make flow batteries cheaper.

“The cost of our membrane per square meter is 147.68 US dollars,” Zhu says, adding that her calculations do not include costs associated with marketing. “The price quote for the commercialized Nafion membrane is $1,321 per square meter.”

Their tests also showed that the membranes, made with support from the Rogers Corporation and its Innovation Center at Northeastern’s Kostas Research Institute, can offer substantially longer battery lifetimes than other membranes. 

Zhu’s naturally derived membrane is especially efficient because its cellular structure contains thousands of hydroxyl groups, which involve bonds of hydrogen and oxygen that make it easy for water to be transported in plants and trees. 

In flow batteries, that molecular makeup speeds the transport of protons as they flow through the membrane.

The membrane also consists of another polymer known as poly(vinylidene fluoride-hexafluoropropylene), which prevents the negatively and positively charged acids from mixing with each other. 

“For these materials, one of the challenges is that it is difficult to find a polymer that is proton conductive and that is also a material that is very stable in the flowing acid,” Zhu says. 

Because these materials are practically everywhere, membranes made with them can easily be put together at the large scales needed for complex power grids. 

Unlike other expensive artificial materials that need to be concocted in a lab, cellulose can be extracted from natural sources including algae, solid waste, and bacteria. 

“A lot of material in nature is a composite, and if we disintegrate its components, we can use it to extract cellulose,” Zhu says. “Like waste from our yard, and a lot of solid waste that we don’t always know what to do with.” 

Here’s a link to and a citation for the paper mentioned in the news release,

Stable and Highly Ion-Selective Membrane Made from Cellulose Nanocrystals for Aqueous Redox Flow Batteries by Alolika Mukhopadhyay, Zheng Cheng, Avi Natan, Yi Ma, Yang Yang, Daxian Cao, Wei Wan, Hongli Zhu. Nano Lett. 2019, 19, 12, 8979-8989 DOI: https://doi.org/10.1021/acs.nanolett.9b03964 Publication Date: November 8, 2019 Copyright © 2019 American Chemical Society

This paper is behind a paywall.

Quantum processor woven from light

Weaving a quantum processor from light is a jaw-dropping event (as far as I’m concerned). An October 17, 2019 news item on phys.org makes the announcement,

An international team of scientists from Australia, Japan and the United States has produced a prototype of a large-scale quantum processor made of laser light.

Based on a design ten years in the making, the processor has built-in scalability that allows the number of quantum components—made out of light—to scale to extreme numbers. The research was published in Science today [October 18, 2019; Note: I cannot explain the discrepancy between the dates].

Quantum computers promise fast solutions to hard problems, but to do this they require a large number of quantum components and must be relatively error free. Current quantum processors are still small and prone to errors. This new design provides an alternative solution, using light, to reach the scale required to eventually outperform classical computers on important problems.

Caption: The entanglement structure of a large-scale quantum processor made of light. Credit: Shota Yokoyama 2019

An October 18, 2019 RMIT University (Australia) press release (also on EurekAlert but published October 17, 2019), which originated the news item, expands on the theme,

“While today’s quantum processors are impressive, it isn’t clear if the current designs can be scaled up to extremely large sizes,” notes Dr Nicolas Menicucci, Chief Investigator at the Centre for Quantum Computation and Communication Technology (CQC2T) at RMIT University in Melbourne, Australia.

“Our approach starts with extreme scalability – built in from the very beginning – because the processor, called a cluster state, is made out of light.”

Using light as a quantum processor

A cluster state is a large collection of entangled quantum components that performs quantum computations when measured in a particular way.

“To be useful for real-world problems, a cluster state must be both large enough and have the right entanglement structure. In the two decades since they were proposed, all previous demonstrations of cluster states have failed on one or both of these counts,” says Dr Menicucci. “Ours is the first ever to succeed at both.”

To make the cluster state, specially designed crystals convert ordinary laser light into a type of quantum light called squeezed light, which is then weaved into a cluster state by a network of mirrors, beamsplitters and optical fibres.
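
Readers who want to poke at that elementary ‘weaving’ step can simulate it with Xanadu’s open-source Strawberry Fields package. The sketch below is only a two-mode toy and assumes nothing about the authors’ time-multiplexed setup: it squeezes two modes along orthogonal quadratures and mixes them on a 50:50 beamsplitter, which is the basic move used over and over to knit squeezed beams into a cluster state.

```python
# pip install strawberryfields  (open-source simulator; not the experimental apparatus)
import numpy as np
import strawberryfields as sf
from strawberryfields.ops import Sgate, BSgate

prog = sf.Program(2)
with prog.context as q:
    Sgate(1.0, 0) | q[0]                 # squeeze mode 0 along one quadrature
    Sgate(1.0, np.pi) | q[1]             # squeeze mode 1 along the orthogonal quadrature
    BSgate(np.pi / 4, 0) | (q[0], q[1])  # 50:50 beamsplitter entangles the two modes

eng = sf.Engine("gaussian")              # Gaussian backend is exact for squeezers + beamsplitters
state = eng.run(prog).state

# The off-diagonal entries of the covariance matrix show the quadrature correlations
# (the 'entanglement structure') created by the beamsplitter.
print(np.round(state.cov(), 3))
```

In the actual experiment those squeezers and beamsplitters are physical optics, and the ‘program’ runs in the time domain so that many thousands of optical modes can share the same hardware.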

The team’s design allows for a relatively small experiment to generate an immense two-dimensional cluster state with scalability built in. Although the levels of squeezing – a measure of quality – are currently too low for solving practical problems, the design is compatible with approaches to achieve state-of-the-art squeezing levels.

The team says their achievement opens up new possibilities for quantum computing with light.

“In this work, for the first time in any system, we have made a large-scale cluster state whose structure enables universal quantum computation,” says Dr Hidehiro Yonezawa, Chief Investigator, CQC2T at UNSW Canberra. “Our experiment demonstrates that this design is feasible – and scalable.”


The experiment was an international effort, with the design developed through collaboration by Dr Menicucci at RMIT, Dr Rafael Alexander from the University of New Mexico and UNSW Canberra researchers Dr Hidehiro Yonezawa and Dr Shota Yokoyama. A team of experimentalists at the University of Tokyo, led by Professor Akira Furusawa, performed the ground-breaking experiment.

Here’s a link to and a citation for the paper,

Generation of time-domain-multiplexed two-dimensional cluster state by Warit Asavanant, Yu Shiozawa, Shota Yokoyama, Baramee Charoensombutamon, Hiroki Emura, Rafael N. Alexander, Shuntaro Takeda, Jun-ichi Yoshikawa, Nicolas C. Menicucci, Hidehiro Yonezawa, Akira Furusawa. Science 18 Oct 2019: Vol. 366, Issue 6463, pp. 373-376 DOI: 10.1126/science.aay2645

This paper is behind a paywall.

Graphene fatigue

Graphene fatigue operates under the same principle as metal fatigue. Subject graphene to stress over and over and, at some point, it (just like metal) will fail. Scientists at the University of Toronto (Ontario, Canada) and Rice University (Texas, US) have determined just how much stress graphene can withstand before breaking, according to a January 28, 2020 University of Toronto news release by Tyler Irving (also on EurekAlert but published on January 29, 2020),

Graphene is a paradox. It is the thinnest material known to science, yet also one of the strongest. Now, research from University of Toronto Engineering shows that graphene is also highly resistant to fatigue — able to withstand more than a billion cycles of high stress before it breaks.

Graphene resembles a sheet of interlocking hexagonal rings, similar to the pattern you might see in bathroom flooring tiles. At each corner is a single carbon atom bonded to its three nearest neighbours. While the sheet could extend laterally over any area, it is only one atom thick.

The intrinsic strength of graphene has been measured at more than 100 gigapascals, among the highest values recorded for any material. But materials don’t always fail because the load exceeds their maximum strength. Stresses that are small but repetitive can weaken materials by causing microscopic dislocations and fractures that slowly accumulate over time, a process known as fatigue.

“To understand fatigue, imagine bending a metal spoon,” says Professor Tobin Filleter, one of the senior authors of the study, which was recently published in Nature Materials. “The first time you bend it, it just deforms. But if you keep working it back and forth, eventually it’s going to break in two.”

The research team — consisting of Filleter, fellow University of Toronto Engineering professors Chandra Veer Singh and Yu Sun, their students, and collaborators at Rice University — wanted to know how graphene would stand up to repeated stresses. Their approach included both physical experiments and computer simulations.

“In our atomistic simulations, we found that cyclic loading can lead to irreversible bond reconfigurations in the graphene lattice, causing catastrophic failure on subsequent loading,” says Singh, who along with postdoctoral fellow Sankha Mukherjee led the modelling portion of the study. “This is unusual behaviour in that while the bonds change, there are no obvious cracks or dislocations, which would usually form in metals, until the moment of failure.”

PhD candidate Teng Cui, who is co-supervised by Filleter and Sun, used the Toronto Nanofabrication Centre to build a physical device for the experiments. The design consisted of a silicon chip etched with half a million tiny holes only a few micrometres in diameter. The graphene sheet was stretched over these holes, like the head of a tiny drum.

Using an atomic force microscope, Cui then lowered a diamond-tipped probe into the hole to push on the graphene sheet, applying anywhere from 20 to 85 per cent of the force that he knew would break the material.

“We ran the cycles at a rate of 100,000 times per second,” says Cui. “Even at 70 per cent of the maximum stress, the graphene didn’t break for more than three hours, which works out to over a billion cycles. At lower stress levels, some of our trials ran for more than 17 hours.”
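
The cycle counts check out with simple arithmetic:

\[
10^{5}\ \text{cycles/s} \times 3\ \text{h} \times 3600\ \text{s/h} \approx 1.1 \times 10^{9}\ \text{cycles},
\]

and the 17-hour runs work out to roughly \(6 \times 10^{9}\) cycles.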

As with the simulations, the graphene didn’t accumulate cracks or other tell-tale signs of stress — it either broke or it didn’t.

“Unlike metals, there is no progressive damage during fatigue loading of graphene,” says Sun. “Its failure is global and catastrophic, confirming simulation results.”

The team also tested a related material, graphene oxide, which has small groups of atoms such as oxygen and hydrogen bonded to both the top and bottom of the sheet. Its fatigue behaviour was more like traditional materials, in that the failure was more progressive and localized. This suggests that the simple, regular structure of graphene is a major contributor to its unique properties.

“There are no other materials that have been studied under fatigue conditions that behave the way graphene does,” says Filleter. “We’re still working on some new theories to try and understand this.”

In terms of commercial applications, Filleter says that graphene-containing composites — mixtures of conventional plastic and graphene — are already being produced and used in sports equipment such as tennis rackets and skis.

In the future, such materials may begin to be used in cars or in aircraft, where the emphasis on light and strong materials is driven by the need to reduce weight, improve fuel efficiency and enhance environmental performance.

“There have been some studies to suggest that graphene-containing composites offer improved resistance to fatigue, but until now, nobody had measured the fatigue behaviour of the underlying material,” he says. “Our goal in doing this was to get at that fundamental understanding so that in the future, we’ll be able to design composites that work even better.”

Here’s a link to and a citation for the paper,

Fatigue of graphene by Teng Cui, Sankha Mukherjee, Parambath M. Sudeep, Guillaume Colas, Farzin Najafi, Jason Tam, Pulickel M. Ajayan, Chandra Veer Singh, Yu Sun & Tobin Filleter. Nature Materials (2020) DOI: https://doi.org/10.1038/s41563-019-0586-y Published: 20 January 2020

This paper is behind a paywall.