Tag Archives: Australia

Get better protection from a sunscreen with a ‘flamenco dancing’ molecule?

Caption: Illustrative image for the University of Warwick research on how a ‘flamenco dancing’ molecule could lead to better-protecting sunscreen. Credit: Dr. Michael Horbury

There are high hopes (more about why later) for a plant-based ‘flamenco dancing molecule’ and its inclusion in sunscreens as described in an October 18, 2019 University of Warwick press release (also on EurekAlert),

A molecule that protects plants from overexposure to harmful sunlight thanks to its flamenco-style twist could form the basis for a new longer-lasting sunscreen, chemists at the University of Warwick have found, in collaboration with colleagues in France and Spain. Research on the green molecule by the scientists has revealed that it absorbs ultraviolet light and then disperses it in a ‘flamenco-style’ dance, making it ideal for use as a UV filter in sunscreens.

The team of scientists report today, Friday 18th October 2019, in the journal Nature Communications that, as well as being plant-inspired, this molecule is also among a small number of suitable substances that are effective in absorbing light in the Ultraviolet A (UVA) region of wavelengths. It opens up the possibility of developing a naturally-derived and eco-friendly sunscreen that protects against the full range of harmful wavelengths of light from the sun.

The UV filters in a sunscreen are the ingredients that predominantly provide the protection from the sun’s rays. In addition to UV filters, sunscreens will typically also include:

Emollients, used for moisturising and lubricating the skin
Thickening agents
Emulsifiers to bind all the ingredients
Water
Other components that improve aesthetics, water resistance, etc.

The researchers tested a molecule called diethyl sinapate, a close mimic of a molecule that is commonly found in the leaves of plants, which is responsible for protecting them from overexposure to UV light while they absorb visible light for photosynthesis.

They first exposed the molecule to a number of different solvents to determine whether that had any impact on its (principally) light absorbing behaviour. They then deposited a sample of the molecule on an industry standard human skin mimic (VITRO-CORNEUM®) where it was irradiated with different wavelengths of UV light. They used the state-of-the-art laser facilities within the Warwick Centre for Ultrafast Spectroscopy to take images of the molecule at extremely high speeds, to observe what happens to the light’s energy when it’s absorbed in the molecule in the very early stages (millionths of millionths of a second). Other techniques were also used to establish longer term (many hours) properties of diethyl sinapate, such as endocrine disruption activity and antioxidant potential.

Professor Vasilios Stavros from the University of Warwick, Department of Chemistry, who was part of the research team, explains: “A really good sunscreen absorbs light and converts it to harmless heat. A bad sunscreen is one that absorbs light and then, for example, breaks down potentially inducing other chemistry that you don’t want. Diethyl sinapate generates lots of heat, and that’s really crucial.”

When irradiated the molecule absorbs light and goes into an excited state but that energy then has to be disposed of somehow. The team of researchers observed that it does a kind of molecular ‘dance’ a mere 10 picoseconds (ten millionths of a millionth of a second) long: a twist in a similar fashion to the filigranas and floreos hand movements of flamenco dancers. That causes it to come back to its original ground state and convert that energy into vibrational energy, or heat.

It is this ‘flamenco dance’ that gives the molecule its long-lasting qualities. When the scientists bombarded the molecule with UVA light they found that it degraded only 3% over two hours, compared to the industry requirement of 30%.

Dr Michael Horbury, who was a Postgraduate Research Fellow at the University of Warwick when he undertook this research (and is now at the University of Leeds) adds: “We have shown that by studying the molecular dance on such a short time-scale, the information that you gain can have tremendous repercussions on how you design future sunscreens.”
Emily Holt, a PhD student in the Department of Chemistry at the University of Warwick who was part of the research team, said: “The next step would be to test it on human skin, then to mix it with other ingredients that you find in a sunscreen to see how those affect its characteristics.”

Professor Florent Allais and Dr Louis Mouterde, URD Agro-Biotechnologies Industrielles at AgroParisTech (Pomacle, France) commented: “What we have developed together is a molecule based upon a UV-photoprotective molecule found on the surface of plant leaves, refunctionalised using greener synthetic procedures. Indeed, this molecule has excellent long-term properties while exhibiting low endocrine disruption and valuable antioxidant properties.”

Professor Laurent Blasco, Global Technical Manager (Skin Essentials) at Lubrizol and Honorary Professor at the University of Warwick commented: “In sunscreen formulations at the moment there is a lack of broad-spectrum protection from a single UV filter. Our collaboration has gone some way towards developing a next generation broad-spectrum UV filter inspired by nature. Our collaboration has also highlighted the importance of academia and industry working together towards a common goal.”

Professor Vasilios Stavros added, “Amidst escalating concerns about their impact on human toxicity (e.g. endocrine disruption) and ecotoxicity (e.g. coral bleaching), developing new UV filters is essential. We have demonstrated that a highly attractive avenue is ‘nature-inspired’ UV filters, which provide a front-line defence against skin cancer and premature skin aging.”

Here’s a link to and a citation for the paper,

Towards symmetry driven and nature inspired UV filter design by Michael D. Horbury, Emily L. Holt, Louis M. M. Mouterde, Patrick Balaguer, Juan Cebrián, Laurent Blasco, Florent Allais & Vasilios G. Stavros. Nature Communications volume 10, Article number: 4748 (2019) DOI: https://doi.org/10.1038/s41467-019-12719-z

This paper is open access.

Why the high hopes?

Briefly (the long story stretches over 10 years), the most recommended sunscreens today (2020) are ‘mineral-based’. This is painfully amusing because civil society groups (activists) such as Friends of the Earth (in particular the Australian chapter under Georgia Miller’s leadership) and Canada’s own ETC Group had campaigned against these same sunscreens when they were billed as being based on metal oxide nanoparticles such as zinc oxide and/or titanium dioxide. The ETC Group under Pat Roy Mooney’s leadership didn’t press the campaign after an initial push. As for Australia and Friends of the Earth, their anti-metallic oxide nanoparticle sunscreen campaign didn’t work out well, as I noted in a February 9, 2012 posting and in a follow-up October 31, 2012 posting.

The only civil society group to give approval (very reluctantly) was the Environmental Working Group (EWG), as I noted in a July 9, 2009 posting. They had concerns about the fact that these ingredients are metallic but, after a thorough review of then-available research, EWG gave the sunscreens a passing grade and noted, in their report, that they had more concerns about the use of oxybenzone in sunscreens. That latter concern has since been flagged by others (e.g., the state of Hawai’i), as noted in my July 6, 2018 posting.

So, rebranding metallic oxides as minerals has allowed the various civil society groups to support the very same sunscreens many of them once campaigned against.

In the meantime, scientists continue work on developing plant-based sunscreens as an improvement to the ‘mineral-based’ sunscreens used now.

Quantum processor woven from light

Weaving a quantum processor from light is a jaw-dropping event (as far as I’m concerned). An October 17, 2019 news item on phys.org makes the announcement,

An international team of scientists from Australia, Japan and the United States has produced a prototype of a large-scale quantum processor made of laser light.

Based on a design ten years in the making, the processor has built-in scalability that allows the number of quantum components—made out of light—to scale to extreme numbers. The research was published in Science today [October 18, 2019; Note: I cannot explain the discrepancy between the dates].

Quantum computers promise fast solutions to hard problems, but to do this they require a large number of quantum components and must be relatively error free. Current quantum processors are still small and prone to errors. This new design provides an alternative solution, using light, to reach the scale required to eventually outperform classical computers on important problems.

Caption: The entanglement structure of a large-scale quantum processor made of light. Credit: Shota Yokoyama 2019

An October 18, 2019 RMIT University (Australia) press release (also on EurekAlert but published October 17, 2019), which originated the news item, expands on the theme,

“While today’s quantum processors are impressive, it isn’t clear if the current designs can be scaled up to extremely large sizes,” notes Dr Nicolas Menicucci, Chief Investigator at the Centre for Quantum Computation and Communication Technology (CQC2T) at RMIT University in Melbourne, Australia.

“Our approach starts with extreme scalability – built in from the very beginning – because the processor, called a cluster state, is made out of light.”

Using light as a quantum processor

A cluster state is a large collection of entangled quantum components that performs quantum computations when measured in a particular way.

“To be useful for real-world problems, a cluster state must be both large enough and have the right entanglement structure. In the two decades since they were proposed, all previous demonstrations of cluster states have failed on one or both of these counts,” says Dr Menicucci. “Ours is the first ever to succeed at both.”

To make the cluster state, specially designed crystals convert ordinary laser light into a type of quantum light called squeezed light, which is then weaved into a cluster state by a network of mirrors, beamsplitters and optical fibres.

The team’s design allows for a relatively small experiment to generate an immense two-dimensional cluster state with scalability built in. Although the levels of squeezing – a measure of quality – are currently too low for solving practical problems, the design is compatible with approaches to achieve state-of-the-art squeezing levels.
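
For those who like to picture it, the entanglement structure of a cluster state can be represented abstractly as a lattice graph, with each node a mode of squeezed light and each edge an entangling link. This is only a sketch of the structure, not the team's time-domain-multiplexed optical setup, and the lattice size below is arbitrary.

```python
# Abstract sketch only: represent the entanglement structure of a 2D cluster
# state as a lattice graph. Nodes stand for modes of squeezed light, edges for
# entangling links; the 5 x 4 size is arbitrary and unrelated to the experiment.
import networkx as nx

cluster = nx.grid_2d_graph(5, 4)
print(cluster.number_of_nodes(), "modes,", cluster.number_of_edges(), "entangling links")
# In measurement-based (cluster-state) quantum computing, measuring the modes
# one by one in chosen bases is what carries out the computation.
```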

The team says their achievement opens up new possibilities for quantum computing with light.

“In this work, for the first time in any system, we have made a large-scale cluster state whose structure enables universal quantum computation,” says Dr Hidehiro Yonezawa, Chief Investigator, CQC2T at UNSW Canberra. “Our experiment demonstrates that this design is feasible – and scalable.”

###

The experiment was an international effort, with the design developed through collaboration by Dr Menicucci at RMIT, Dr Rafael Alexander from the University of New Mexico and UNSW Canberra researchers Dr Hidehiro Yonezawa and Dr Shota Yokoyama. A team of experimentalists at the University of Tokyo, led by Professor Akira Furusawa, performed the ground-breaking experiment.

Here’s a link to and a citation for the paper,

Generation of time-domain-multiplexed two-dimensional cluster state by Warit Asavanant, Yu Shiozawa, Shota Yokoyama, Baramee Charoensombutamon, Hiroki Emura, Rafael N. Alexander, Shuntaro Takeda, Jun-ichi Yoshikawa, Nicolas C. Menicucci, Hidehiro Yonezawa, Akira Furusawa. Science 18 Oct 2019: Vol. 366, Issue 6463, pp. 373-376 DOI: 10.1126/science.aay2645

This paper is behind a paywall.

The latest math stars: honeybees!

Understanding the concept of zero—I still remember climbing that mountain, so to speak. It took the teacher quite a while to convince me that representing ‘nothing’ as a zero was worthwhile. In fact, it took the combined efforts of both my parents and the teacher to convince me to use zeroes as I was prepared to go without. The battle is long since over and I have learned to embrace zero.

I don’t think bees have to be convinced but they too may have a concept of zero. More about that later; here’s the latest about bees and math from an October 10, 2019 news item on phys.org,

Start thinking about numbers and they can become large very quickly. The diameter of the universe is about 8.8×10^23 km and the largest known number—googolplex, 10^(10^100)—outranks it enormously. Although that colossal concept was dreamt up by brilliant mathematicians, we’re still pretty limited when it comes to assessing quantities at a glance. ‘Humans have a threshold limit for instantly processing one to four elements accurately’, says Adrian Dyer from RMIT University, Australia; and it seems that we are not alone. Scarlett Howard from RMIT and the Université de Toulouse, France, explains that guppies, angelfish and even honeybees are capable of distinguishing between quantities of three and four, although the trusty insects come unstuck at finer differences; they fail to differentiate between four and five, which made her wonder. According to Howard, honeybees are quite accomplished mathematicians. ‘Recently, honeybees were shown to learn the rules of “less than” and “greater than” and apply these rules to evaluate numbers from zero to six’, she says. Maybe numeracy wasn’t the bees’ problem; was it how the question was posed? The duo publishes their discovery that bees can discriminate between four and five if the training procedure is correct in the Journal of Experimental Biology.

An October 10, 2019 The Company of Biologists’ press release on EurekAlert, which originated the news item, refines the information with more detail,

Dyer explains that when animals are trained to distinguish between colours and objects, some training procedures simply reward the animals when they make the correct decision. In the case of the honeybees that could distinguish three from four, they received a sip of super-sweet sugar water when they made the correct selection but just a taste of plain water when they got it wrong. However, Dyer, Howard and colleagues Aurore Avarguès-Weber, Jair Garcia and Andrew Greentree knew there was an alternative strategy. This time, the bees would be given a bitter-tasting sip of quinine-flavoured water when they got the answer wrong. Would the unpleasant flavour help the honeybees to focus better and improve their maths?

‘[The] honeybees were very cooperative, especially when I was providing sugar rewards’, says Howard, who moved to France each April to take advantage of the northern summer during the Australian winter, when bees are dormant. Training the bees to enter a Y-shaped maze, Howard presented the insects with a choice: a card featuring four shapes in one arm and a card featuring a different number of shapes (ranging from one to 10) in the other. During the first series of training sessions, Howard rewarded the bees with a sugary sip when they alighted correctly before the card with four shapes, in contrast to a sip of water when they selected the wrong card. However, when Howard trained a second set of bees she reproved them with a bitter-tasting sip of quinine when they chose incorrectly, rewarding the insects with sugar when they selected the card with four shapes. Once the bees had learned to pick out the card with four shapes, Howard tested whether they could distinguish the card with four shapes when offered a choice between it and cards with eight, seven, six or – the most challenging comparison – five shapes.

Not surprisingly, the bees that had only been rewarded during training struggled; they couldn’t even differentiate between four and eight shapes. However, when Howard tested the honeybees that had been trained more rigorously – receiving a quinine reprimand – their performance was considerably better, consistently picking the card with four shapes when offered a choice between it and cards with seven or eight shapes. Even more impressively, the bees succeeded when offered the more subtle choice between four and five shapes.

So, it seems that honeybees are better mathematicians than had been credited. Unlocking their ability was simply a matter of asking the question in the right way and Howard is now keen to find out just how far counting bees can go.

I’ll get to the link to and citation for the paper in a minute but first, I found more about bees and math (including zero) in this February 7, 2019 article by Jason Daley for The Smithsonian (Note: Links have been removed),

Bees are impressive creatures, powering entire ecosystems via pollination and making sweet honey at the same time, one of the most incredible substances in nature. But it turns out the little striped insects are also quite clever. A new study suggests that, despite having tiny brains, bees understand the mathematical concepts of addition and subtraction.

To test the numeracy of the arthropods, researchers set up unique Y-shaped math mazes for the bees to navigate, according to Nicola Davis at The Guardian. Because the insects can’t read, and schooling them to recognize abstract symbols like plus and minus signs would be incredibly difficult, the researchers used color to indicate addition or subtraction. …

Fourteen bees spent between four and seven hours completing 100 trips through the mazes during training exercises with the shapes and numbers chosen at random. All of the bees appeared to learn the concept. Then, the bees were tested 10 times each using two addition and two subtraction scenarios that had not been part of the training runs. The little buzzers got the correct answer between 64 and 72 percent of the time, better than would be expected by chance.
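
As a quick illustration of why a 64 to 72 percent success rate beats chance, here is a one-sided binomial test. The trial counts are my assumptions based on the numbers in the article (14 bees, 10 tests each); the paper's actual test structure may differ.

```python
# Illustrative only: check whether a 64-72% success rate beats the 50% expected
# by chance. The trial counts are assumptions for this sketch (14 bees tested
# 10 times each), not figures verified against the paper.
from scipy.stats import binomtest

n_bees = 14
trials_per_bee = 10
n_trials = n_bees * trials_per_bee

for success_rate in (0.64, 0.72):
    successes = round(success_rate * n_trials)
    result = binomtest(successes, n_trials, p=0.5, alternative="greater")
    print(f"{successes}/{n_trials} correct: one-sided p = {result.pvalue:.4f}")
```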

Last year, the same team of researchers published a paper suggesting that bees could understand the concept of zero, which puts them in an elite club of mathematically-minded animals that, at a minimum, have the ability to perceive higher and lower numbers in different groups. Animals with this ability include frogs, lions, spiders, crows, chicken chicks, some fish and other species. And these are not the only higher-level skills that bees appear to possess. A 2010 study that Dyer [Adrian Dyer of RMIT University in Australia] also participated in suggests that bees can remember human faces using the same mechanisms as people. Bees also use a complex type of movement called the waggle dance to communicate geographical information to one another, another sophisticated ability packed into a brain the size of a sesame seed.

If researchers could figure out how bees perform so many complicated tasks with such a limited number of neurons, the research could have implications for both biology and technology, such as machine learning. …

Then again, maybe the honey makers are getting more credit than they deserve. Clint Perry, who studies invertebrate intelligence at the Bee Sensory and Behavioral Ecology Lab at Queen Mary University of London, tells George Dvorsky at Gizmodo that he’s not convinced by the research, and he had similar qualms about the study that suggested bees can understand the concept of zero. He says the bees may not be adding and subtracting, but rather are simply looking for an image that most closely matches the initial one they see, associating it with the sugar reward. …

If you have the time and the interest, definitely check out Daley’s article.

Here’s a link to and a citation for the latest paper about honeybees and math,

Surpassing the subitizing threshold: appetitive–aversive conditioning improves discrimination of numerosities in honeybees by Scarlett R. Howard, Aurore Avarguès-Weber, Jair E. Garcia, Andrew D. Greentree, Adrian G. Dyer. Journal of Experimental Biology 2019 222: jeb205658 doi: 10.1242/jeb.205658 Published 10 October 2019

This paper is behind a paywall.

Finding killer bacteria with quantum dots and a smartphone

An August 5, 2019 news item on Nanowerk announces a new technology for detecting killer bacteria (Note: A link has been removed),

A combination of off-the-shelf quantum dots and a smartphone camera soon could allow doctors to identify antibiotic-resistant bacteria in just 40 minutes, potentially saving patient lives.

Staphylococcus aureus (golden staph) is a common form of bacterium that causes serious and sometimes fatal conditions such as pneumonia and heart valve infections. Of particular concern is a strain that does not respond to methicillin, the antibiotic of first resort, and is known as methicillin-resistant S. aureus, or MRSA.

Recent reports estimate that 700 000 deaths globally could be attributed to antimicrobial resistance, such as methicillin-resistance. Rapid identification of MRSA is essential for effective treatment, but current methods make it a challenging process, even within well-equipped hospitals.

Soon, however, that may change, using nothing except existing technology.

Researchers from Macquarie University and the University of New South Wales, both in Australia, have demonstrated a proof-of-concept device that uses bacterial DNA to identify the presence of Staphylococcus aureus positively in a patient sample – and to determine if it will respond to frontline antibiotics.

An August 12, 2019 Macquarie University press release (also on EurekAlert but published August 4, 2019), which originated the news item, delves into the work,

In a paper published in the international peer-reviewed journal Sensors and Actuators B: Chemical the Macquarie University team of Dr Vinoth Kumar Rajendran, Professor Peter Bergquist and Associate Professor Anwar Sunna with Dr Padmavathy Bakthavathsalam (UNSW) reveal a new way to confirm the presence of the bacterium, using a mobile phone and some ultra-tiny semiconductor particles known as quantum dots.

“Our team is using Synthetic Biology and NanoBiotechnology to address biomedical challenges. Rapid and simple ways of identifying the cause of infections and starting appropriate treatments are critical for treating patients effectively,” says Associate Professor Anwar Sunna, head of the Sunna Lab at Macquarie University.

“This is true in routine clinical situations, but also in the emerging field of personalised medicine.”

The researchers’ approach identifies the specific strain of golden staph by using a method called convective polymerase chain reaction (or cPCR). This is a derivative of a widely employed technique in which a small segment of DNA is copied thousands of times, creating multiple samples suitable for testing.

Vinoth Kumar and colleagues then subject the DNA copies to a process known as lateral flow immunoassay – a paper-based diagnostic tool used to confirm the presence or absence of a target biomarker. The researchers use probes fitted with quantum dots to detect two unique genes that confirm the presence of methicillin resistance in golden staph.

A chemical added at the PCR stage to the DNA tested makes the sample fluoresce when the genes are detected by the quantum dots – a reaction that can be captured easily using the camera on a mobile phone.
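
The read-out step (photographing the strip and deciding whether the fluorescent signal is present) lends itself to very simple image analysis. The sketch below is my own illustration of that idea, not the authors' pipeline; the file name, region coordinates and threshold are hypothetical placeholders.

```python
# Minimal sketch of a smartphone read-out: compare the brightness of the
# test-line region against a nearby background region in a photo of the strip.
# File name, region coordinates and threshold are hypothetical placeholders.
import numpy as np
from PIL import Image

def region_brightness(image, box):
    """Mean greyscale brightness inside box = (left, upper, right, lower)."""
    return float(np.asarray(image.crop(box).convert("L"), dtype=float).mean())

photo = Image.open("strip_photo.jpg")                      # hypothetical photo
test_line = region_brightness(photo, (100, 50, 140, 90))   # placeholder region
background = region_brightness(photo, (100, 150, 140, 190))

THRESHOLD = 1.5  # arbitrary brightness ratio, for illustration only
if test_line > THRESHOLD * background:
    print("Fluorescent signal detected (target gene present)")
else:
    print("No signal detected")
```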

The result is a simple and rapid method of detecting the presence of the bacterium, while simultaneously ruling first-line treatment in or out.

Although currently at the proof-of-concept stage, the researchers say their system, which is powered by a simple battery, is suitable for rapid detection in different settings.

“We can see this being used easily not only in hospitals, but also in GP clinics and at patient bedsides,” says lead author, Macquarie’s Vinoth Kumar Rajendran.

Here’s a link to and a citation for the paper,

Smartphone detection of antibiotic resistance using convective PCR and a lateral flow assay by Vinoth Kumar Rajendran, Padmavathy Bakthavathsalam, Peter L. Bergquist, Anwar Sunna. Sensors and Actuators B: Chemical Volume 298, 1 November 2019, 126849 DOI: https://doi.org/10.1016/j.snb.2019.126849 Available online 23 July 2019

This paper is behind a paywall.

Dial-a-frog?

Frog and phone – Credit: Marta Yebra Alvarez

There is a ‘frogphone’ but you won’t be talking or communicating directly with frogs, instead you will get data about them, according to a December 6, 2019 British Ecological Society press release (also on EurekAlert),

Researchers have developed the ‘FrogPhone’, a novel device which allows scientists to call up a frog survey site and monitor them in the wild. The FrogPhone is the world’s first solar-powered remote survey device that relays environmental data to the observer via text messages, whilst conducting real-time remote acoustic surveys over the phone. These findings are presented in the British Ecological Society Journal Methods in Ecology and Evolution today [December 6, 2019].

The FrogPhone introduces a new concept that allows researchers to “call” a frog habitat, any time, from anywhere, once the device has been installed. The device has been developed at the University of New South Wales (UNSW) Canberra and the University of Canberra in collaboration with the Australian Capital Territory (ACT) and Region Frogwatch Program and the Australian National University.

The FrogPhone utilises 3G/4G cellular mobile data coverage and capitalises on the characteristic wideband audio of mobile phones, which acts as a carrier for frog calls. Real time frog calls can be transmitted across the 3G/4G network infrastructure, directly to the user’s phone. This supports clear sound quality and minimal background noise, allowing users to identify the calls of different frog species.

“We estimate that the device with its current microphone can detect calling frogs from a 100-150m radius,” said lead author Dr. Adrian Garrido Sanchis, Associate Lecturer at UNSW Canberra. “The device allows us to monitor the local frog population with more frequency and ease, which is significant as frog species are widely recognised as indicators of environmental health,” said the ACT and Region Frogwatch coordinator and co-author, Anke Maria Hoefer.

The FrogPhone unifies both passive acoustic and active monitoring methods, all in a waterproof casing. The system has a large battery capacity coupled to a powerful solar panel. It also contains digital thermal sensors to automatically collect environmental data such as water and air temperature in real-time. The FrogPhone uses an open-source platform which allows any researcher to adapt it to project-specific needs.

The system simulates the main features of a mobile phone device. The FrogPhone accepts incoming calls independently after three seconds. These three seconds allow time to activate the temperature sensors and measure the battery storage levels. All readings then get automatically texted to the caller’s phone.

Acoustic monitoring of animals generally involves either site visits by a researcher or using battery-powered passive acoustic devices, which record calls and store them locally on the device for later analysis. These often require night-time observation, when frogs are most active. Now, when researchers dial a device remotely, the call to the FrogPhone can be recorded indirectly and analysed later.
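
Based on the behaviour described in the release (answer after three seconds, read the sensors, text the readings, then let the caller listen in), the call-handling sequence might look roughly like the sketch below. The modem and sensor interfaces are invented for illustration; this is not the FrogPhone's actual firmware.

```python
# Sketch of the call-handling sequence described above: wait three seconds,
# read the sensors, text the readings to the caller, then answer the call so
# the caller hears the site live. The `modem` and `sensors` objects are
# hypothetical interfaces, not the FrogPhone's actual firmware API.
import time

def handle_incoming_call(modem, sensors, caller_number):
    time.sleep(3)  # the 3-second delay used to wake the temperature sensors

    readings = {
        "water_temp_C": sensors.read_water_temperature(),
        "air_temp_C": sensors.read_air_temperature(),
        "battery_pct": sensors.read_battery_level(),
    }
    summary = ", ".join(f"{name}={value}" for name, value in readings.items())
    modem.send_sms(caller_number, summary)  # readings texted back automatically

    modem.answer_call()  # live audio of the pond now streams over 3G/4G
```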

Ms. Hoefer remarked that “The FrogPhone will help to drastically reduce the costs and risks involved in remote or high intensity surveys. Its use will also minimize potential negative impacts of human presence at survey sites. These benefits are magnified with increasing distance to and inaccessibility of a field site.”

A successful field trial of the device was performed in Canberra from August 2017 to March 2018. Researchers used spectrograms, graphs which allow the visual comparison of the spectrum of frequencies of frog signals over time, to test the recording capabilities of the FrogPhone.

Ms. Hoefer commented that “The spectrogram comparison between the FrogPhone and the standard direct mobile phone methodology in the lab, for the calls of 9 different frog species, and the field tests have proven that the FrogPhone can be successfully used as a new alternative to conduct frog call surveys.”
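
For readers unfamiliar with spectrograms, producing one from a recording takes only a few lines with standard signal-processing tools. This is a generic sketch, assuming a WAV file and default parameters rather than anything used in the study.

```python
# Minimal sketch of a spectrogram: compute the time-frequency picture of one
# recording so it can be compared visually against a reference. The file name
# and parameters are placeholders, not those used in the FrogPhone study.
import numpy as np
import matplotlib.pyplot as plt
from scipy.io import wavfile
from scipy.signal import spectrogram

rate, samples = wavfile.read("frog_call.wav")   # hypothetical recording
if samples.ndim > 1:                            # keep one channel if stereo
    samples = samples[:, 0]
samples = samples.astype(float)

freqs, times, power = spectrogram(samples, fs=rate, nperseg=1024)

plt.pcolormesh(times, freqs, 10 * np.log10(power + 1e-12), shading="auto")
plt.xlabel("Time (s)")
plt.ylabel("Frequency (Hz)")
plt.show()
```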

The use of the current FrogPhone is limited to areas with adequate 3G/4G phone coverage. In addition, covering a large area would require several devices, and the system relies on exposure to sunlight to keep its battery charged.

Future additions to the FrogPhone could include a satellite communications module for poor signal areas, or the use of multidirectional microphones for large areas. Lead author Garrido Sanchis emphasized that “In densely vegetated areas the waterproof case of the FrogPhone allows the device to be installed as a floating device in the middle of a pond, to maximise solar access to recharge the batteries”.

Dr. Garrido Sanchis said “While initially tested in frogs, the technology used for the FrogPhone could easily be extended to capture other animal vocalisation (e.g. insects and mammals), expanding the applicability to a wide range of biodiversity conservation studies”.

Here’s what the FrogPhone looks like onsite,

The FrogPhone installed at the field site. Credit: Kumudu Munasinghe

Here’s a link to and a citation for the paper,

The FrogPhone: A novel device for real‐time frog call monitoring by Adrian Garrido Sanchis, Lorenzo Bertolelli, Anke Maria Hoefer, Marta Yebra Alvarez, Kumudu Munasinghe. Methods in Ecology and Evolution https://doi.org/10.1111/2041-210X.13332 First published [online]: 04 December 2019

This paper is open access.

Using light to manipulate neurons

There are three (or more?) possible applications, including neuromorphic computing, for this new optoelectronic technology, which is based on black phosphorus. A July 16, 2019 news item on Nanowerk announces the research,

Researchers from RMIT University [Australia] drew inspiration from an emerging tool in biotechnology – optogenetics – to develop a device that replicates the way the brain stores and loses information.

Optogenetics allows scientists to delve into the body’s electrical system with incredible precision, using light to manipulate neurons so that they can be turned on or off.

The new chip is based on an ultra-thin material that changes electrical resistance in response to different wavelengths of light, enabling it to mimic the way that neurons work to store and delete information in the brain.

Caption: The new chip is based on an ultra-thin material that changes electrical resistance in response to different wavelengths of light. Credit: RMIT University

A July 17, 2019 RMIT University press release (also on EurekAlert but published on July 16, 2019), which originated the news item, expands on the theme,

Research team leader Dr Sumeet Walia said the technology moves us closer towards artificial intelligence (AI) that can harness the brain’s full sophisticated functionality.

“Our optogenetically-inspired chip imitates the fundamental biology of nature’s best computer – the human brain,” Walia said.

“Being able to store, delete and process information is critical for computing, and the brain does this extremely efficiently.

“We’re able to simulate the brain’s neural approach simply by shining different colours onto our chip.

“This technology takes us further on the path towards fast, efficient and secure light-based computing.

“It also brings us an important step closer to the realisation of a bionic brain – a brain-on-a-chip that can learn from its environment just like humans do.”

Dr Taimur Ahmed, lead author of the study published in Advanced Functional Materials, said being able to replicate neural behavior on an artificial chip offered exciting avenues for research across sectors.

“This technology creates tremendous opportunities for researchers to better understand the brain and how it’s affected by disorders that disrupt neural connections, like Alzheimer’s disease and dementia,” Ahmed said.

The researchers, from the Functional Materials and Microsystems Research Group at RMIT, have also demonstrated the chip can perform logic operations – information processing – ticking another box for brain-like functionality.

Developed at RMIT’s MicroNano Research Facility, the technology is compatible with existing electronics and has also been demonstrated on a flexible platform, for integration into wearable electronics.

How the chip works:

Neural connections happen in the brain through electrical impulses. When tiny energy spikes reach a certain threshold of voltage, the neurons bind together – and you’ve started creating a memory.

On the chip, light is used to generate a photocurrent. Switching between colors causes the current to reverse direction from positive to negative.

This direction switch, or polarity shift, is equivalent to the binding and breaking of neural connections, a mechanism that enables neurons to connect (and induce learning) or inhibit (and induce forgetting).

This is akin to optogenetics, where light-induced modification of neurons causes them to either turn on or off, enabling or inhibiting connections to the next neuron in the chain.
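
As a rough mental model only (not the device physics), the wavelength-dependent polarity switch can be pictured as the sign of a weight update. The toy sketch below illustrates that idea; the wavelength bands and step size are invented, not values from the paper.

```python
# Toy model only: one colour band produces a positive photocurrent (strengthen,
# i.e. "learn"), the other reverses the polarity (weaken, i.e. "forget").
# Wavelength bands and step size are invented for illustration.
def update_weight(weight, wavelength_nm, step=0.1):
    polarity = +1 if wavelength_nm < 500 else -1   # assumed colour bands
    return min(1.0, max(0.0, weight + polarity * step))

w = 0.5
for wavelength in (450, 450, 650):   # two "potentiating" pulses, one "depressing"
    w = update_weight(w, wavelength)
    print(f"{wavelength} nm -> weight {w:.1f}")
```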

To develop the technology, the researchers used a material called black phosphorus (BP) that can be inherently defective in nature.

This is usually a problem for optoelectronics, but with precision engineering the researchers were able to harness the defects to create new functionality.

“Defects are usually looked on as something to be avoided, but here we’re using them to create something novel and useful,” Ahmed said.

“It’s a creative approach to finding solutions for the technical challenges we face.”

Here’s a link and a citation for the paper,

Multifunctional Optoelectronics via Harnessing Defects in Layered Black Phosphorus by Taimur Ahmed, Sruthi Kuriakose, Sherif Abbas, Michelle J. S. Spencer, Md. Ataur Rahman, Muhammad Tahir, Yuerui Lu, Prashant Sonar, Vipul Bansal, Madhu Bhaskaran, Sharath Sriram, Sumeet Walia. Advanced Functional Materials DOI: https://doi.org/10.1002/adfm.201901991 First published (online): 17 July 2019

This paper is behind a paywall.

Graphene from gum trees

Caption: Eucalyptus bark extract has never been used to synthesise graphene sheets before. Courtesy: RMIT University

It’s been quite educational reading a June 24, 2019 news item on Nanowerk about deriving graphene from Eucalyptus bark (Note: Links have been removed),

Graphene is the thinnest and strongest material known to humans. It’s also flexible, transparent and conducts heat and electricity 10 times better than copper, making it ideal for anything from flexible nanoelectronics to better fuel cells.

The new approach by researchers from RMIT University (Australia) and the National Institute of Technology, Warangal (India), uses Eucalyptus bark extract and is cheaper and more sustainable than current synthesis methods (ACS Sustainable Chemistry & Engineering, “Novel and Highly Efficient Strategy for the Green Synthesis of Soluble Graphene by Aqueous Polyphenol Extracts of Eucalyptus Bark and Its Applications in High-Performance Supercapacitors”).

A June 24, 2019 RMIT University news release (also on EurekAlert), which originated the news item, provides a little more detail,

RMIT lead researcher, Distinguished Professor Suresh Bhargava, said the new method could reduce the cost of production from $USD100 per gram to a staggering $USD0.5 per gram.

“Eucalyptus bark extract has never been used to synthesise graphene sheets before and we are thrilled to find that it not only works, it’s in fact a superior method, both in terms of safety and overall cost,” said Bhargava.

“Our approach could bring down the cost of making graphene from around $USD100 per gram to just 50 cents, increasing its availability to industries globally and enabling the development of an array of vital new technologies.”

Graphene’s distinctive features make it a transformative material that could be used in the development of flexible electronics, more powerful computer chips and better solar panels, water filters and bio-sensors.

Professor Vishnu Shanker from the National Institute of Technology, Warangal, said the ‘green’ chemistry avoided the use of toxic reagents, potentially opening the door to the application of graphene not only for electronic devices but also biocompatible materials.

“Working collaboratively with RMIT’s Centre for Advanced Materials and Industrial Chemistry we’re harnessing the power of collective intelligence to make these discoveries,” he said.

A novel approach to graphene synthesis:

Chemical reduction is the most common method for synthesising graphene oxide as it allows for the production of graphene at a low cost in bulk quantities.

This method however relies on reducing agents that are dangerous to both people and the environment.

When tested in the application of a supercapacitor, the ‘green’ graphene produced using this method matched the quality and performance characteristics of traditionally-produced graphene without the toxic reagents.

Bhargava said the abundance of eucalyptus trees in Australia made it a cheap and accessible resource for producing graphene locally.

“Graphene is a remarkable material with great potential in many applications due to its chemical and physical properties and there’s a growing demand for economical and environmentally friendly large-scale production,” he said.

Here’s a link to and a citation for the paper,

Novel and Highly Efficient Strategy for the Green Synthesis of Soluble Graphene by Aqueous Polyphenol Extracts of Eucalyptus Bark and Its Applications in High-Performance Supercapacitors by Saikumar Manchala, V. S. R. K. Tandava, Deshetti Jampaiah, Suresh K. Bhargava, Vishnu Shanker. ACS Sustainable Chem. Eng. 2019, XXXX, XXX-XXX DOI: https://doi.org/10.1021/acssuschemeng.9b01506 Publication Date: June 13, 2019

Copyright © 2019 American Chemical Society

This paper is behind a paywall.

Monitoring forest soundscapes for conservation and more about whale songs

I don’t understand why anyone would publicize science work featuring soundscapes without including an audio file. However, no one from Princeton University (US) phoned and asked for my advice :).

On the plus side, my whale story does have a sample audio file. However, I’m not sure if I can figure out how to embed it here.

Princeton and monitoring forests

In addition to a professor from Princeton University, there’s the founder of an environmental news organization and someone who’s both a professor at the University of Queensland (Australia) and affiliated with the Nature Conservancy, making this one of the more unusual collaborations I’ve seen.

Moving on to the news, a January 4, 2019 Princeton University news release (also on EurekAlert but published on Jan. 3, 2019) by B. Rose Kelly announces research into monitoring forests,

Recordings of the sounds in tropical forests could unlock secrets about biodiversity and aid conservation efforts around the world, according to a perspective paper published in Science.

Compared to on-the-ground fieldwork, bioacoustics – recording entire soundscapes, including animal and human activity – is relatively inexpensive and produces powerful conservation insights. The result is troves of ecological data in a short amount of time.

Because these enormous datasets require robust computational power, the researchers argue that a global organization should be created to host an acoustic platform that produces on-the-fly analysis. Not only could the data be used for academic research, but it could also monitor conservation policies and strategies employed by companies around the world.

“Nongovernmental organizations and the conservation community need to be able to truly evaluate the effectiveness of conservation interventions. It’s in the interest of certification bodies to harness the developments in bioacoustics for better enforcement and effective measurements,” said Zuzana Burivalova, a postdoctoral research fellow in Professor David Wilcove’s lab at Princeton University’s Woodrow Wilson School of Public and International Affairs.

“Beyond measuring the effectiveness of conservation projects and monitoring compliance with forest protection commitments, networked bioacoustic monitoring systems could also generate a wealth of data for the scientific community,” said co-author Rhett Butler of the environmental news outlet Mongabay.

Burivalova and Butler co-authored the paper with Edward Game, who is based at the Nature Conservancy and the University of Queensland.

The researchers explain that while satellite imagery can be used to measure deforestation, it often fails to detect other subtle ecological degradations like overhunting, fires, or invasion by exotic species. Another common measure of biodiversity is field surveys, but those are often expensive, time consuming and cover limited ground.

Depending on the vegetation of the area and the animals living there, bioacoustics can record animal sounds and songs from several hundred meters away. Devices can be programmed to record at specific times or continuously if there is solar power or a cellular network signal. They can also record a range of taxonomic groups including birds, mammals, insects, and amphibians. To date, several multiyear recordings have already been completed.

Bioacoustics can help effectively enforce policy efforts as well. Many companies are engaged in zero-deforestation efforts, which means they are legally obligated to produce goods without clearing large forests. Bioacoustics can quickly and cheaply determine how much forest has been left standing.

“Companies are adopting zero deforestation commitments, but these policies do not always translate to protecting biodiversity due to hunting, habitat degradation, and sub-canopy fires. Bioacoustic monitoring could be used to augment satellites and other systems to monitor compliance with these commitments, support real-time action against prohibited activities like illegal logging and poaching, and potentially document habitat and species recovery,” Butler said.

Further, these recordings can be used to measure climate change effects. While the sounds might not be able to assess slow, gradual changes, they could help determine the influence of abrupt, quick differences to land caused by manufacturing or hunting, for example.

Burivalova and Game have worked together previously as you can see in a July 24, 2017 article by Justine E. Hausheer for a nature.org blog ‘Cool Green Science’ (Note: Links have been removed),

Morning in Musiamunat village. Across the river and up a steep mountainside, birds-of-paradise call raucously through the rainforest canopy, adding their calls to the nearly deafening insect chorus. Less than a kilometer away, small birds flit through a grove of banana trees, taro and pumpkin vines winding across the rough clearing. Here too, the cicadas howl.

To the ear, both garden and forest are awash with noise. But hidden within this dawn chorus are clues to the forest’s health.

New acoustic research from Nature Conservancy scientists indicates that forest fragmentation drives distinct changes in the dawn and dusk choruses of forests in Papua New Guinea. And this innovative method can help evaluate the conservation benefits of land-use planning efforts with local communities, reducing the cost of biodiversity monitoring in the rugged tropics.

“It’s one thing for a community to say that they cut fewer trees, or restricted hunting, or set aside a protected area, but it’s very difficult for small groups to demonstrate the effectiveness of those efforts,” says Eddie Game, The Nature Conservancy’s lead scientist for the Asia-Pacific region.

Aside from the ever-present logging and oil palm, another threat to PNG’s forests is subsistence agriculture, which feeds a majority of the population. In the late 1990s, The Nature Conservancy worked with 11 communities in the Adelbert Mountains to create land-use plans, dividing each community’s lands into different zones for hunting, gardening, extracting forest products, village development, and conservation. The goal was to limit degradation to specific areas of the forest, while keeping the rest intact.

But both communities and conservationists needed a way to evaluate their efforts, before the national government considered expanding the program beyond Madang province. So in July 2015, Game and two other scientists, Zuzana Burivalova and Timothy Boucher, spent two weeks gathering data in the Adelbert Mountains, a rugged lowland mountain range in Papua New Guinea’s Madang province.

Working with conservation rangers from Musiamunat, Yavera, and Iwarame communities, the research team tested an innovative method — acoustic sampling — to measure biodiversity across the community forests. Game and his team used small acoustic recorders placed throughout the forest to record 24-hours of sound from locations in each of the different land zones.

Soundscapes from healthy, biodiverse forests are more complex, so the scientists hoped that these recordings would show if parts of the community forests, like the conservation zones, were more biodiverse than others. “Acoustic recordings won’t pick up every species, but we don’t need that level of detail to know if a forest is healthy,” explains Boucher, a conservation geographer with the Conservancy.

Here’s a link to and a citation for the latest work from Burivalova and Game,

The sound of a tropical forest by Zuzana Burivalova, Edward T. Game, Rhett A. Butler. Science 04 Jan 2019: Vol. 363, Issue 6422, pp. 28-29 DOI: 10.1126/science.aav1902

This paper is behind a paywall. You can find out more about Mongabay and Rhett Butler in its Wikipedia entry.

***ETA July 18, 2019: Cara Cannon Byington, Associate Director, Science Communications for the Nature Conservancy emailed to say that a January 3, 2019 posting on the conservancy’s Cool Green Science Blog features audio files from the research published in ‘The sound of a tropical forest’. Scroll down about 75% of the way for the audio.***

Whale songs

Whales share songs when they meet and a January 8, 2019 Wildlife Conservation Society news release (also on EurekAlert) describes how that sharing takes place,

Singing humpback whales from different ocean basins seem to be picking up musical ideas from afar, and incorporating these new phrases and themes into the latest song, according to a newly published study in Royal Society Open Science that’s helping scientists better understand how whales learn and change their musical compositions.

The new research shows that two humpback whale populations in different ocean basins (the South Atlantic and Indian Oceans) in the Southern Hemisphere sing similar song types, but the amount of similarity differs across years. This suggests that males from these two populations come into contact at some point in the year to hear and learn songs from each other.

The study titled “Culturally transmitted song exchange between humpback whales (Megaptera novaeangliae) in the southeast Atlantic and southwest Indian Ocean basins” appears in the latest edition of the Royal Society Open Science journal. The authors are: Melinda L. Rekdahl, Carissa D. King, Tim Collins, and Howard Rosenbaum of WCS (Wildlife Conservation Society); Ellen C. Garland of the University of St. Andrews; Gabriella A. Carvajal of WCS and Stony Brook University; and Yvette Razafindrakoto of COSAP [Committee for the Management of the Protected Area of Bezà Mahafaly] and Madagascar National Parks.

“Song sharing between populations tends to happen more in the Northern Hemisphere where there are fewer physical barriers to movement of individuals between populations on the breeding grounds, where they do the majority of their singing. In some populations in the Southern Hemisphere song sharing appears to be more complex, with little song similarity within years but entire songs can spread to neighboring populations leading to song similarity across years,” said Dr. Melinda Rekdahl, marine conservation scientist for WCS’s Ocean Giants Program and lead author of the study. “Our study shows that this is not always the case in Southern Hemisphere populations, with similarities between both ocean basin songs occurring within years to different degrees over a 5-year period.”

The study authors examined humpback whale song recordings from both sides of the African continent–from animals off the coasts of Gabon and Madagascar respectively–and transcribed more than 1,500 individual sounds that were recorded between 2001-2005. Song similarity was quantified using statistical methods.

Male humpback whales are one of the animal kingdom’s most noteworthy singers, and individual animals sing complex compositions consisting of moans, cries, and other vocalizations called “song units.” Song units are composed into larger phrases, which are repeated to form “themes.” Different themes are produced in a sequence to form a song cycle that are then repeated for hours, or even days. For the most part, all males within the same population sing the same song type, and this population-wide song similarity is maintained despite continual evolution or change to the song leading to seasonal “hit songs.” Some song learning can occur between populations that are in close proximity and may be able to hear the other population’s song.

Over time, the researchers detected shared phrases and themes in both populations, with some years exhibiting more similarities than others. In the beginning of the study, whale populations in both locations shared five “themes.” One of the shared themes, however, had differences. Gabon’s version of Theme 1, the researchers found, consisted of a descending “cry-woop”, whereas the Madagascar singers split Theme 1 into two parts: a descending cry followed by a separate woop or “trumpet.”
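
The unit, phrase, theme and song hierarchy described above, and the split of Theme 1 between the two populations, map naturally onto a small nested data structure. A minimal sketch follows; the class names and example unit labels are mine, invented for illustration rather than taken from the paper.

```python
# Minimal sketch of the song hierarchy described above: units form phrases,
# repeated phrases form themes, and an ordered sequence of themes forms a song
# cycle. Class names and example unit labels are illustrative, not the paper's.
from dataclasses import dataclass
from typing import List

@dataclass
class Phrase:
    units: List[str]          # e.g. ["cry", "woop"]

@dataclass
class Theme:
    phrases: List[Phrase]     # the same phrase repeated several times

@dataclass
class Song:
    themes: List[Theme]       # sung in sequence, then the whole cycle repeats

# Gabon's Theme 1 as one descending "cry-woop"; Madagascar's split into two units.
gabon_theme1 = Theme([Phrase(["descending cry-woop"])] * 3)
madagascar_theme1 = Theme([Phrase(["descending cry", "woop (trumpet)"])] * 3)
```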

Other differences soon emerged over time. By 2003, the song sung by whales in Gabon became more elaborate than their counterparts in Madagascar. In 2004, both population song types shared the same themes, with the whales in Gabon’s waters singing three additional themes. Interestingly, both whale groups had dropped the same two themes from the previous year’s song types. By 2005, songs being sung on both sides of Africa were largely similar, with individuals in both locations singing songs with the same themes and order. However, there were exceptions, including one whale that revived two discontinued themes from the previous year.

The study’s results stand in contrast to other research in which a song in one part of an ocean basin replaces or “revolutionizes” another population’s song preference. In this instance, the changes and degrees of similarity shared by humpbacks on both sides of Africa were more gradual and subtle.

“Studies such as this one are an important means of understanding connectivity between different whale populations and how they move between different seascapes,” said Dr. Howard Rosenbaum, Director of WCS’s Ocean Giants Program and one of the co-authors of the new paper. “Insights on how different populations interact with one another and the factors that drive the movements of these animals can lead to more effective plans for conservation.”

The humpback whale is one of the world’s best-studied marine mammal species, well known for its boisterous surface behavior and migrations stretching thousands of miles. The animal grows up to 50 feet in length and has been globally protected from commercial whaling since the 1960s. WCS has studied humpback whales since that time and–as the New York Zoological Society–played a key role in the discovery that humpback whales sing songs. The organization continues to study humpback whale populations around the world and right here in the waters of New York; research efforts on humpback and other whales in New York Bight are currently coordinated through the New York Aquarium’s New York Seascape program.

I’m not able to embed the audio file here but, for the curious, there is a portion of a humpback whale song from Gabon here at EurekAlert.

Here’s a link to and a citation for the research paper,

Culturally transmitted song exchange between humpback whales (Megaptera novaeangliae) in the southeast Atlantic and southwest Indian Ocean basins by Melinda L. Rekdahl, Ellen C. Garland, Gabriella A. Carvajal, Carissa D. King, Tim Collins, Yvette Razafindrakoto and Howard Rosenbaum. Royal Society Open Science 21 November 2018 Volume 5 Issue 11 https://doi.org/10.1098/rsos.172305 Published: 28 November 2018

This is an open access paper.

Making nanoscale transistor chips out of thin air—sort of

Caption: The nano-gap transistors operating in air. As gaps become smaller than the mean-free path of electrons in air, there is ballistic electron transport. Credit: RMIT University

A November 19, 2018 news item on Nanowerk describes the ‘airy’ work (Note: A link has been removed),

Researchers at RMIT University [Australia] have engineered a new type of transistor, the building block for all electronics. Instead of sending electrical currents through silicon, these transistors send electrons through narrow air gaps, where they can travel unimpeded as if in space.

The device unveiled in material sciences journal Nano Letters (“Metal–Air Transistors: Semiconductor-free field-emission air-channel nanoelectronics”), eliminates the use of any semiconductor at all, making it faster and less prone to heating up.

A November 19, 2018 RMIT University news release on EurekAlert, which originated the news item, describes the work and possibilities in more detail,

Lead author and PhD candidate in RMIT’s Functional Materials and Microsystems Research Group, Ms Shruti Nirantar, said this promising proof-of-concept design for nanochips as a combination of metal and air gaps could revolutionise electronics.

“Every computer and phone has millions to billions of electronic transistors made from silicon, but this technology is reaching its physical limits where the silicon atoms get in the way of the current flow, limiting speed and causing heat,” Nirantar said.

“Our air channel transistor technology has the current flowing through air, so there are no collisions to slow it down and no resistance in the material to produce heat.”

The power of computer chips – or number of transistors squeezed onto a silicon chip – has increased on a predictable path for decades, roughly doubling every two years. But this rate of progress, known as Moore’s Law, has slowed in recent years as engineers struggle to make transistor parts, which are already smaller than the tiniest viruses, smaller still.

Nirantar says their research is a promising way forward for nano electronics in response to the limitation of silicon-based electronics.

“This technology simply takes a different pathway to the miniaturisation of a transistor in an effort to uphold Moore’s Law for several more decades,” Shruti said.

Research team leader Associate Professor Sharath Sriram said the design solved a major flaw in traditional solid channel transistors – they are packed with atoms – which meant electrons passing through them collided, slowed down and wasted energy as heat.

“Imagine walking on a densely crowded street in an effort to get from point A to B. The crowd slows your progress and drains your energy,” Sriram said.

“Travelling in a vacuum on the other hand is like an empty highway where you can drive faster with higher energy efficiency.”

But while this concept is obvious, vacuum packaging solutions around transistors to make them faster would also make them much bigger, so they are not viable.

“We address this by creating a nanoscale gap between two metal points. The gap is only a few tens of nanometers, or 50,000 times smaller than the width of a human hair, but it’s enough to fool electrons into thinking that they are travelling through a vacuum and re-create a virtual outer-space for electrons within the nanoscale air gap,” he said.
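
For a sense of scale, a standard kinetic-theory estimate puts the mean free path of air molecules at roughly 70 nm at room temperature and atmospheric pressure, and the mean free path of electrons in air is generally longer still, so a gap of a few tens of nanometres sits comfortably in the ballistic regime. A quick back-of-the-envelope calculation, using approximate constants:

```python
# Order-of-magnitude check of why a gap of a few tens of nanometres behaves
# like a vacuum: the kinetic-theory mean free path of air molecules at room
# temperature and atmospheric pressure. (The mean free path of *electrons* in
# air is longer still, so this is a conservative reference scale.)
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # temperature, K
p = 101325.0         # pressure, Pa
d = 3.7e-10          # effective diameter of an air molecule, m (approximate)

mean_free_path = k_B * T / (math.sqrt(2) * math.pi * d**2 * p)
print(f"~{mean_free_path * 1e9:.0f} nm")   # roughly 70 nm
```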

The nanoscale device is designed to be compatible with modern industry fabrication and development processes. It also has applications in space – both as electronics resistant to radiation and to use electron emission for steering and positioning ‘nano-satellites’.

“This is a step towards an exciting technology which aims to create something out of nothing to significantly increase speed of electronics and maintain pace of rapid technological progress,” Sriram said.

Here’s a link to and a citation for the paper,

Metal–Air Transistors: Semiconductor-free field-emission air-channel nanoelectronics by Shruti Nirantar, Taimur Ahmed, Guanghui Ren, Philipp Gutruf, Chenglong Xu, Madhu Bhaskaran, Sumeet Walia, and Sharath Sriram. Nano Lett., DOI: 10.1021/acs.nanolett.8b02849. Publication Date (Web): November 16, 2018

Copyright © 2018 American Chemical Society

This paper is behind a paywall.

Real-time tracking of UV (ultraviolet light) exposure for all skin types (light to dark)

It’s nice to find this research after my August 21, 2018 posting where I highlighted (scroll down to ‘Final comments’) the issues around skin cancer databases, whose data are usually derived from fair-skinned people while people with darker hues tend not to be included. This is partly because fair-skinned people are at higher risk and partly because of myths that more melanin in your skin somehow protects you from skin cancer.

This October 4, 2018 news item on ScienceDaily announces research into a way to track UV exposure for all skin types,

Researchers from the University of Granada [Spain] and RMIT University in Melbourne [Australia] have developed personalised and low-cost wearable ultraviolet (UV) sensors that warn users when their exposure to the sun has become dangerous.

The paper-based sensor, which can be worn as a wristband, features happy and sad emoticon faces — drawn in an invisible UV-sensitive ink — that successively light up as you reach 25%, 50%, 75% and finally 100% of your daily recommended UV exposure.

The research team have also created six versions of the colour-changing wristbands, each of which is personalised for a specific skin tone [emphasis mine] – an important characteristic given that people with darker skin need more sun exposure to produce vitamin D, which is essential for healthy bones, teeth and muscles.
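
The wristband itself is purely chemical (more about the ink below), but the dosimetry logic it embodies is simple enough to sketch. The short Python snippet below is my own illustration rather than anything from the research team, and the daily-limit numbers in it are placeholders, not medical guidance,

# Illustrative sketch of the wristband's dosimetry logic (not the researchers' code).
# The daily dose limits are placeholder values, not medical guidance.

DAILY_LIMIT_J_PER_M2 = {   # hypothetical erythemal dose limits per Fitzpatrick skin type
    "I": 200, "II": 250, "III": 300, "IV": 450, "V": 600, "VI": 1000,
}

THRESHOLDS = (0.25, 0.50, 0.75, 1.00)   # the four emoticon faces on the wristband

def faces_lit(cumulative_dose_j_per_m2: float, skin_type: str) -> int:
    """Return how many of the four faces would have 'lit up' so far."""
    fraction = cumulative_dose_j_per_m2 / DAILY_LIMIT_J_PER_M2[skin_type]
    return sum(1 for threshold in THRESHOLDS if fraction >= threshold)

# Example: skin type II after accumulating 140 J/m2 of erythemally weighted dose
print(faces_lit(140, "II"))   # -> 2 (the 25% and 50% faces have appeared)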

An October 2, 2018 University of Granada press release (also on EurekAlert) delves further,

Caption: Four of the wristbands, each of which indicates a different stage of exposure to UV radiation (25%, 50%, 75% and 100%)

Caption: The emoticon faces on the wristband successively “light up” as exposure to UV radiation increases

Skin cancer, one of the most common types of cancer throughout the world, is primarily caused by overexposure to ultraviolet radiation (UVR). In Spain, over 74,000 people are diagnosed with non-melanoma skin cancer every year, while a further 4,000 are diagnosed with melanoma skin cancer. In regions such as Australia, where the ozone layer has been substantially depleted, it is estimated that approximately 2 in 3 people will be diagnosed with skin cancer by the time they reach the age of 70.

“UVB and UVC radiation is retained by the ozone layer. This sensor is especially important in the current context, given that the hole in the ozone layer is exposing us to such dangerous radiation”, explains José Manuel Domínguez Vera, a researcher at the University of Granada’s Department of Inorganic Chemistry and the main author of the paper.

Domínguez Vera also highlights that other sensors currently available on the market only measure overall UV radiation, without distinguishing between UVA, UVB and UVC, each of which has a significantly different impact on human health.  In contrast, the new paper-based sensor can differentiate between UVA, UVB and UVC radiation. Prolonged exposure to UVA radiation is associated with skin ageing and wrinkling, while excessive exposure to UVB causes sunburn and increases the likelihood of skin cancer and eye damage.

Drawbacks of the traditional UV index

Ultraviolet radiation levels depend on factors such as location, time of day, pollution levels, astronomical factors and weather conditions such as cloud cover, and they can be heightened by reflective surfaces like bodies of water, sand and snow. But UV rays are not visible to the human eye (UV radiation can be high even on a cloudy day), and until now the only way of monitoring UV intensity has been the UV index, which is typically given in weather reports and indicates five levels of radiation: low, moderate, high, very high or extreme.

Despite its usefulness, the UV index is a relatively limited tool. For instance, it does not clearly indicate what time of the day or for how long you should be outside to get your essential vitamin D dose, or when to cover up to avoid sunburn and a heightened risk of skin cancer.

Moreover, the UV index is normally based on calculations for fair skin, making it unsuitable for ethnically diverse populations.  While individuals with fairer skin are more susceptible to UV damage, those with darker skin require much longer periods in the sun in order to absorb healthy amounts of vitamin D. In this regard, the UV index is not an accurate tool for gauging and monitoring an individual’s recommended daily exposure.
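
To put a number on that last point (this is my arithmetic, not the researchers’): the UV index is defined as the erythemally weighted irradiance in watts per square metre multiplied by 40, so the time needed to accumulate a dose D (in joules per square metre) at a steady index value UVI is roughly t = 40·D/UVI seconds. Taking an illustrative threshold dose of 250 J/m² for fair skin, a UV index of 8 gives 40 × 250 / 8 ≈ 1,250 seconds, or about 20 minutes before sunburn begins; someone with darker skin, whose threshold dose is several times higher, could stay out several times longer. The index on its own tells you none of this.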

UV-sensitive ink

The research team set out to tackle the drawbacks of the traditional UV index by developing an inexpensive, disposable and personalised sensor that allows the wearer to track their UV exposure in real time. The sensor paper they created features a special ink, containing phosphomolybdic acid (PMA), which turns from colourless to blue when exposed to UV radiation. They can use the initially invisible ink to draw faces—or any other design—on paper and other surfaces. Depending on the type and intensity of the UV radiation to which the ink is exposed, the paper begins to turn blue: the greater the exposure, the faster the colour develops.

Additionally, by tweaking the ink composition and the sensor design, the team were able to make the ink change colour faster or slower, allowing them to produce different sensors that are tailored to the six different types of skin colour. [emphasis mine]
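
One simple way to picture that tuning (again my sketch, not the chemistry reported in the paper): if the blue colour develops with accumulated UV dose D roughly as a saturating process, A(D) ≈ A_max·(1 − e^(−k·D)), then reformulating the ink amounts to changing the rate constant k. A ‘fast’ ink (large k) reaches full colour at the low dose appropriate to very fair skin, while a ‘slow’ ink (small k) only saturates at the several-times-larger dose appropriate to deeply pigmented skin, so one chemistry can yield six skin-tone-specific sensors.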

Applications beyond health

This low-cost, paper-based sensor technology will not only help people of all skin colours to strike an optimum balance between absorbing enough vitamin D and avoiding sun damage — it also has significant applications for the agricultural and industrial sectors. UV rays affect the growth of crops and the shelf life of a range of consumer products. As the UV sensors can detect even the slightest doses of UV radiation, as well as the most extreme, this new technology could have vast potential for industries and companies seeking to evaluate the prolonged impact of UV exposure on products that are cultivated or kept outdoors.

The research project is the result of a fruitful collaboration between two members of the UGR BIONanoMet (FQM368) research group, Ana González and José Manuel Domínguez-Vera, and the research group led by Dr. Vipul Bansal at RMIT University in Melbourne (Australia).

Here’s a link to and a citation for the paper,

Skin color-specific and spectrally-selective naked-eye dosimetry of UVA, B and C radiations by Wenyue Zou, Ana González, Deshetti Jampaiah, Rajesh Ramanathan, Mohammad Taha, Sumeet Walia, Sharath Sriram, Madhu Bhaskaran, José M. Dominguez-Vera, & Vipul Bansal. Nature Communications, volume 9, Article number: 3743 (2018) DOI: https://doi.org/10.1038/s41467-018-06273-3 Published 25 September 2018

This paper is open access.