Tag Archives: Australia

Making nanoscale transistor chips out of thin air—sort of

Caption: The nano-gap transistors operating in air. As gaps become smaller than the mean free path of electrons in air, there is ballistic electron transport. Credit: RMIT University

A November 19, 2018 news item on Nanowerk describes the ‘airy’ work (Note: A link has been removed),

Researchers at RMIT University [Australia] have engineered a new type of transistor, the building block for all electronics. Instead of sending electrical currents through silicon, these transistors send electrons through narrow air gaps, where they can travel unimpeded as if in space.

The device, unveiled in the materials science journal Nano Letters (“Metal–Air Transistors: Semiconductor-free field-emission air-channel nanoelectronics”), eliminates the use of any semiconductor at all, making it faster and less prone to heating up.

A November 19, 2018 RMIT University news release on EurekAlert, which originated the news item, describes the work and possibilities in more detail,

Lead author and PhD candidate in RMIT’s Functional Materials and Microsystems Research Group, Ms Shruti Nirantar, said this promising proof-of-concept design for nanochips, combining metal and air gaps, could revolutionise electronics.

“Every computer and phone has millions to billions of electronic transistors made from silicon, but this technology is reaching its physical limits where the silicon atoms get in the way of the current flow, limiting speed and causing heat,” Nirantar said.

“Our air channel transistor technology has the current flowing through air, so there are no collisions to slow it down and no resistance in the material to produce heat.”

The power of computer chips – or number of transistors squeezed onto a silicon chip – has increased on a predictable path for decades, roughly doubling every two years. But this rate of progress, known as Moore’s Law, has slowed in recent years as engineers struggle to make transistor parts, which are already smaller than the tiniest viruses, smaller still.
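The doubling rule behind Moore’s Law is easy to put into numbers. A minimal sketch (the starting count and time span below are illustrative figures of my own, not from the article):

```python
def transistor_count(years_elapsed, initial_count, doubling_period=2.0):
    """Project a transistor count under Moore's-Law-style doubling."""
    return initial_count * 2 ** (years_elapsed / doubling_period)

# Illustrative: starting from 1 billion transistors, ten years of
# doubling every two years gives 2^5 = 32x growth.
projected = transistor_count(10, 1e9)
print(f"{projected:.0f}")
```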

Nirantar says their research is a promising way forward for nanoelectronics in response to the limitations of silicon-based electronics.

“This technology simply takes a different pathway to the miniaturisation of a transistor in an effort to uphold Moore’s Law for several more decades,” Shruti said.

Research team leader Associate Professor Sharath Sriram said the design solved a major flaw in traditional solid channel transistors – they are packed with atoms – which meant electrons passing through them collided, slowed down and wasted energy as heat.

“Imagine walking on a densely crowded street in an effort to get from point A to B. The crowd slows your progress and drains your energy,” Sriram said.

“Travelling in a vacuum on the other hand is like an empty highway where you can drive faster with higher energy efficiency.”

But while this concept is obvious, vacuum packaging solutions around transistors to make them faster would also make them much bigger, so they are not viable.

“We address this by creating a nanoscale gap between two metal points. The gap is only a few tens of nanometers, or 50,000 times smaller than the width of a human hair, but it’s enough to fool electrons into thinking that they are travelling through a vacuum and re-create a virtual outer-space for electrons within the nanoscale air gap,” he said.
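The mean free path mentioned in the image caption at the top of this post can be estimated with a back-of-the-envelope calculation. A rough sketch, using the textbook hard-sphere formula λ = kT/(√2 π d² p); the formula and the effective molecular diameter for air are standard approximations I am supplying, not values from the paper:

```python
import math

def mean_free_path(temp_k=300.0, pressure_pa=101325.0, diameter_m=0.37e-9):
    """Hard-sphere mean free path of gas molecules: kT / (sqrt(2) * pi * d^2 * p)."""
    k_boltzmann = 1.380649e-23  # J/K
    return k_boltzmann * temp_k / (math.sqrt(2) * math.pi * diameter_m**2 * pressure_pa)

lam = mean_free_path()
gap = 35e-9  # a few tens of nanometres, as in the article
print(f"mean free path ~ {lam * 1e9:.0f} nm; "
      f"a {gap * 1e9:.0f} nm gap is {'smaller' if gap < lam else 'larger'}")
```

At room conditions this comes out to roughly 70 nm, so a gap of a few tens of nanometres is indeed below it, which is consistent with the caption’s claim of ballistic transport.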

The nanoscale device is designed to be compatible with modern industry fabrication and development processes. It also has applications in space – both as radiation-resistant electronics and as a means of using electron emission for steering and positioning ‘nano-satellites’.

“This is a step towards an exciting technology which aims to create something out of nothing to significantly increase speed of electronics and maintain pace of rapid technological progress,” Sriram said.

Here’s a link to and a citation for the paper,

Metal–Air Transistors: Semiconductor-free field-emission air-channel nanoelectronics by Shruti Nirantar, Taimur Ahmed, Guanghui Ren, Philipp Gutruf, Chenglong Xu, Madhu Bhaskaran, Sumeet Walia, and Sharath Sriram. Nano Lett. DOI: 10.1021/acs.nanolett.8b02849 Publication Date (Web): November 16, 2018

Copyright © 2018 American Chemical Society

This paper is behind a paywall.

Real-time tracking of UV (ultraviolet light) exposure for all skin types (light to dark)

It’s nice to find this research after my August 21, 2018 posting where I highlighted (scroll down to ‘Final comments’) the issues around databases and skin cancer data which is usually derived from fair-skinned people while people with darker hues tend not to be included. This is partly due to the fact that fair-skinned people have a higher risk and also partly due to myths about how more melanin in your skin somehow protects you from skin cancer.

This October 4, 2018 news item on ScienceDaily announces research into a way to track UV exposure for all skin types,

Researchers from the University of Granada [Spain] and RMIT University in Melbourne [Australia] have developed personalised and low-cost wearable ultraviolet (UV) sensors that warn users when their exposure to the sun has become dangerous.

The paper-based sensor, which can be worn as a wristband, features happy and sad emoticon faces — drawn in an invisible UV-sensitive ink — that successively light up as you reach 25%, 50%, 75% and finally 100% of your daily recommended UV exposure.
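The wristband’s threshold behaviour is simple to model. A hypothetical sketch (the function name and the idea of representing each emoticon by its trigger fraction are mine, not the researchers’):

```python
def faces_lit(cumulative_dose, daily_limit, thresholds=(0.25, 0.50, 0.75, 1.00)):
    """Return how many of the wristband's four emoticon faces would have 'lit up'."""
    fraction = cumulative_dose / daily_limit
    return sum(1 for t in thresholds if fraction >= t)

# Illustrative: after absorbing 60% of the daily recommended dose,
# the 25% and 50% faces have appeared.
print(faces_lit(cumulative_dose=0.6, daily_limit=1.0))  # -> 2
```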

The research team have also created six versions of the colour-changing wristbands, each of which is personalised for a specific skin tone [emphasis mine] – an important characteristic given that darker-skinned people need more sun exposure to produce vitamin D, which is essential for healthy bones, teeth and muscles.

An October 2, 2018 University of Granada press release (also on EurekAlert) delves further,

Caption: Four of the wristbands, each of which indicates a different stage of exposure to UV radiation (25%, 50%, 75% and 100%).

Caption: The emoticon faces on the wristband successively “light up” as exposure to UV radiation increases.

Skin cancer, one of the most common types of cancer throughout the world, is primarily caused by overexposure to ultraviolet radiation (UVR). In Spain, over 74,000 people are diagnosed with non-melanoma skin cancer every year, while a further 4,000 are diagnosed with melanoma skin cancer. In regions such as Australia, where the ozone layer has been substantially depleted, it is estimated that approximately 2 in 3 people will be diagnosed with skin cancer by the time they reach the age of 70.

“UVB and UVC radiation is retained by the ozone layer. This sensor is especially important in the current context, given that the hole in the ozone layer is exposing us to such dangerous radiation”, explains José Manuel Domínguez Vera, a researcher at the University of Granada’s Department of Inorganic Chemistry and the main author of the paper.

Domínguez Vera also highlights that other sensors currently available on the market only measure overall UV radiation, without distinguishing between UVA, UVB and UVC, each of which has a significantly different impact on human health. In contrast, the new paper-based sensor can differentiate between UVA, UVB and UVC radiation. Prolonged exposure to UVA radiation is associated with skin ageing and wrinkling, while excessive exposure to UVB causes sunburn and increases the likelihood of skin cancer and eye damage.

Drawbacks of the traditional UV index

Ultraviolet radiation levels depend on factors such as location, time of day, pollution levels, astronomical factors and weather conditions such as cloud cover, and can be heightened by reflective surfaces like bodies of water, sand and snow. But UV rays are not visible to the human eye (UV radiation can be high even when it is cloudy), and until now the only way of monitoring UV intensity has been the UV index, which is standardly given in weather reports and indicates five degrees of radiation: low, moderate, high, very high or extreme.

Despite its usefulness, the UV index is a relatively limited tool. For instance, it does not clearly indicate what time of the day or for how long you should be outside to get your essential vitamin D dose, or when to cover up to avoid sunburn and a heightened risk of skin cancer.

Moreover, the UV index is normally based on calculations for fair skin, making it unsuitable for ethnically diverse populations. While individuals with fairer skin are more susceptible to UV damage, those with darker skin require much longer periods in the sun to absorb healthy amounts of vitamin D. In this regard, the UV index is not an accurate tool for gauging and monitoring an individual’s recommended daily exposure.

UV-sensitive ink

The research team set out to tackle the drawbacks of the traditional UV index by developing an inexpensive, disposable and personalised sensor that allows the wearer to track their UV exposure in real-time. The sensor paper they created features a special ink, containing phosphomolybdic acid (PMA), which turns from colourless to blue when exposed to UV radiation. They can use the initially-invisible ink to draw faces—or any other design—on paper and other surfaces. Depending on the type and intensity of the UV radiation to which the ink is exposed, the paper begins to turn blue; the greater the exposure to UV radiation, the faster the paper turns blue.

Additionally, by tweaking the ink composition and the sensor design, the team were able to make the ink change colour faster or slower, allowing them to produce different sensors that are tailored to the six different types of skin colour. [emphasis mine]
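One way to picture that skin-tone personalisation is as a scaling of the UV dose that counts as “100%” on each version of the sensor. The sketch below is purely illustrative; the dose factors are invented numbers for the six phototypes, not calibration data from the study:

```python
# Hypothetical calibration: each skin phototype (I-VI) gets a sensor whose ink
# responds more slowly, so "100%" corresponds to a larger UV dose.
# These relative dose factors are invented for illustration only.
DOSE_FACTOR = {1: 1.0, 2: 1.4, 3: 1.9, 4: 2.6, 5: 3.5, 6: 4.7}

def personal_daily_limit(base_limit, phototype):
    """Scale a baseline recommended UV dose by skin phototype."""
    return base_limit * DOSE_FACTOR[phototype]

# Illustrative: the darkest phototype's sensor tolerates 4.7x the baseline dose.
print(round(personal_daily_limit(100.0, 6), 1))
```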

Applications beyond health

This low-cost, paper-based sensor technology will not only help people of all colours to strike an optimum balance between absorbing enough vitamin D and avoiding sun damage — it also has significant applications for the agricultural and industrial sectors. UV rays affect the growth of crops and the shelf life of a range of consumer products. As the UV sensors can detect even the slightest doses of UV radiation, as well as the most extreme, this new technology could have vast potential for industries and companies seeking to evaluate the prolonged impact of UV exposure on products that are cultivated or kept outdoors.

The research project is the result of fruitful collaboration between two members of the UGR BIONanoMet (FQM368) research group, Ana González and José Manuel Domínguez-Vera, and the research group led by Dr. Vipul Bansal at RMIT University in Melbourne (Australia).

Here’s a link to and a citation for the paper,

Skin color-specific and spectrally-selective naked-eye dosimetry of UVA, B and C radiations by Wenyue Zou, Ana González, Deshetti Jampaiah, Rajesh Ramanathan, Mohammad Taha, Sumeet Walia, Sharath Sriram, Madhu Bhaskaran, José M. Dominguez-Vera, & Vipul Bansal. Nature Communications, volume 9, Article number: 3743 (2018) DOI: https://doi.org/10.1038/s41467-018-06273-3 Published 25 September 2018

This paper is open access.

Students! Need help with your memory? Try Sans Forgetica

Sans Forgetica is a new, scientifically and aesthetically designed font to help students remember what they read.

An October 4, 2018 news article by Mark Wycislik-Wilson for Beta News announces the new font,

Researchers from Australia’s RMIT University have created a font which they say could help you to retain more data.

Sans Forgetica is the result of work involving typographic design specialists and psychologists, and it has been designed specifically to make it easier to remember written information. The font has purposefully been made slightly difficult to read, using a reverse slant and gaps in letters to exploit “desirable difficulty” as a memory aid.

An October 3, 2018 RMIT University press release, which originated the news item, provides more details,

Sans Forgetica could help people remember more of what they read.

Researchers and academics from different disciplines came together to develop, design and test the font called Sans Forgetica.

The font is the world’s first typeface specifically designed to help people retain more information and remember more of their typed study notes, and it’s available for free.

It was developed in a collaboration between typographic design specialists and psychologists, combining psychological theory and design principles to improve retention of written information.

Stephen Banham, RMIT lecturer in typography and industry leader, said it was great working on a project that combined research from typography and psychology and the experts from RMIT’s Behavioural Business Lab.

“This cross-pollination of thinking has led to the creation of a new font that is fundamentally different from all other fonts. It is also a clear application of theory into practice, something we strive for at RMIT,” he said.

Chair of the RMIT Behavioural Business Lab and behavioural economist, Dr Jo Peryman, said it was a terrific tool for students studying for exams.

“We believe this is the first time that specific principles of design theory have been combined with specific principles of psychology theory in order to create a font.”

The font was developed using a learning principle called ‘desirable difficulty’, where an obstruction is added to the learning process that requires us to put in just enough effort, promoting deeper cognitive processing and better memory retention.

Senior Marketing Lecturer (Experimental Methods and Design Thinking) and founding member of the RMIT Behavioural Business Lab Dr Janneke Blijlevens said typical fonts were very familiar.

“Readers often glance over them and no memory trace is created,” Blijlevens said.

However, if a font is too different, the brain can’t process it and the information is not retained.

“Sans Forgetica lies at a sweet spot where just enough obstruction has been added to create that memory retention.”

Sans Forgetica has varying degrees of ‘distinctiveness’ built in that subvert many of the design principles normally associated with conventional typography.

These degrees of distinctiveness cause readers to dwell longer on each word, giving the brain more time to engage in deeper cognitive processing, to enhance information retention.

Roughly 400 Australian university students participated in a laboratory and an online experiment conducted by RMIT, where fonts with a range of obstructions were tested to determine which led to the best memory retention. Sans Forgetica broke just enough design principles without becoming too illegible and aided memory retention.

Dr Jo Peryman and Dr Janneke Blijlevens from the RMIT Behavioural Business Lab provided psychological theory and insights to help inform the development, design and testing of Sans Forgetica.

RMIT worked with strategy and creative agency Naked Communications to create the Sans Forgetica concept and font.

Sans Forgetica is available free to download as a font and Chrome browser extension at sansforgetica.rmit.

Thank you Australian typographic designers and psychologists!

Popcorn-powered robots

A soft robotic device powered by popcorn, constructed by researchers in Cornell’s Collective Embodied Intelligence Lab. Courtesy: Cornell University

What an intriguing idea, popcorn-powered robots, and one I have difficulty imagining even with the help of the image above. A July 26, 2018 Cornell University news release (an edited version is on EurekAlert) by Melanie Lefkowitz describes the concept,

Cornell researchers have discovered how to power simple robots with a novel substance that, when heated, can expand more than 10 times in size, change its viscosity by a factor of 10 and transition from regular to highly irregular granules with surprising force.

You can also eat it with a little butter and salt.

“Popcorn-Driven Robotic Actuators,” a recent paper co-authored by doctoral student Steven Ceron, mechanical engineering, and Kirstin H. Petersen, assistant professor of electrical and computer engineering, examines how popcorn’s unique qualities can power inexpensive robotic devices that grip, expand or change rigidity.

“The goal of our lab is to try to make very minimalistic robots which, when deployed in high numbers, can still accomplish great things,” said Petersen, who runs Cornell’s Collective Embodied Intelligence Lab. “Simple robots are cheap and less prone to failures and wear, so we can have many operating autonomously over a long time. So we are always looking for new and innovative ideas that will permit us to have more functionalities for less, and popcorn is one of those.”

The study is the first to consider powering robots with popcorn, which is inexpensive, readily available, biodegradable and of course, edible. Since kernels can expand rapidly, exerting force and motion when heated, they could potentially power miniature jumping robots. Edible devices could be ingested for medical procedures. The mix of hard, unpopped granules and lighter popped corn could replace fluids in soft robots without the need for air pumps or compressors.

“Pumps and compressors tend to be more expensive, and they add a lot of weight and expense to your robot,” said Ceron, the paper’s lead author. “With popcorn, in some of the demonstrations that we showed, you just need to apply voltage to get the kernels to pop, so it would take all the bulky and expensive parts out of the robots.”

Since kernels can’t shrink once they’ve popped, a popcorn-powered mechanism can generally be used only once, though multiple uses are conceivable because popped kernels can dissolve in water, Ceron said.

The researchers experimented with Amish Country Extra Small popcorn, which they chose because the brand did not use additives. The extra-small variety had the highest expansion ratio of those they tested.

After studying popcorn’s properties using different types of heating, the researchers constructed three simple robotic actuators – devices used to perform a function.

For a jamming actuator, 36 kernels of popcorn heated with nichrome wire were used to stiffen a flexible silicone beam. For an elastomer actuator, they constructed a three-fingered soft gripper, whose silicone fingers were stuffed with popcorn heated by nichrome wire. When the kernels popped, the expansion exerted pressure against the outer walls of the fingers, causing them to curl. For an origami actuator, they folded recycled Newman’s Own organic popcorn bags into origami bellows folds, filled them with kernels and microwaved them. The expansion of the kernels was strong enough to support the weight of a nine-pound kettlebell.
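For readers who prefer SI units, the kettlebell load in that last demonstration converts as follows (a simple unit conversion on my part, not a figure from the paper):

```python
LB_TO_KG = 0.45359237  # international avoirdupois pound
G = 9.81               # standard gravity, m/s^2

kettlebell_lb = 9
force_n = kettlebell_lb * LB_TO_KG * G  # weight the popped kernels supported
print(f"{force_n:.0f} N")  # roughly 40 N
```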

The paper was presented at the IEEE [Institute of Electrical and Electronics Engineers] International Conference on Robotics and Automation in May and co-authored with Aleena Kurumunda ’19, Eashan Garg ’20, Mira Kim ’20 and Tosin Yeku ’20. Petersen said she hopes it inspires researchers to explore the possibilities of other nontraditional materials.

“Robotics is really good at embracing new ideas, and we can be super creative about what we use to generate multifunctional properties,” she said. “In the end we come up with very simple solutions to fairly complex problems. We don’t always have to look for high-tech solutions. Sometimes the answer is right in front of us.”

The work was supported by the Cornell Engineering Learning Initiative, the Cornell Electrical and Computer Engineering Early Career Award and the Cornell Sloan Fellowship.

Here’s a link to and a citation for the paper,

Popcorn-Driven Robotic Actuators by Steven Ceron, Aleena Kurumunda, Eashan Garg, Mira Kim, Tosin Yeku, and Kirstin Petersen. Presented at the IEEE International Conference on Robotics and Automation, held May 21-25, 2018 in Brisbane, Australia.

The researchers have made this video demonstrating the technology,

Australian scientists say that sunscreens with zinc oxide nanoparticles aren’t toxic to you

The Australians have had quite the struggle over whether or not to use nanotechnology-enabled sunscreens (see my Feb. 9, 2012 posting about an Australian nanosunscreen debacle and I believe the reverberations continue even ’til today). This latest research will hopefully help calm the waters. From a Dec. 4, 2018 news item on ScienceDaily,

Zinc oxide (ZnO) has long been recognized as an effective sunscreen agent. However, there have been calls for sunscreens containing ZnO nanoparticles to be banned because of potential toxicity and the need for caution in the absence of safety data in humans. An important new study provides the first direct evidence that intact ZnO nanoparticles neither penetrate the human skin barrier nor cause cellular toxicity after repeated application to human volunteers under in-use conditions. This confirms that the known benefits of using ZnO nanoparticles in sunscreens clearly outweigh the perceived risks, reports the Journal of Investigative Dermatology.

A December 4, 2018 Elsevier (Publishing) press release (also on EurekAlert), which originated the news item, provides international context for the safety discussion while providing more details about this latest research,

The safety of nanoparticles used in sunscreens has been a highly controversial international issue in recent years, as previous animal exposure studies found much higher skin absorption of zinc from application of ZnO sunscreens to the skin than in human studies. Some public advocacy groups have voiced concern that penetration of the upper layer of the skin by sunscreens containing ZnO nanoparticles could gain access to the living cells in the viable epidermis with toxic consequences, including DNA damage. A potential danger, therefore, is that this concern may also result in an undesirable downturn in sunscreen use. A 2017 National Sun Protection Survey by the Cancer Council Australia found only 55 per cent of Australians believed it was safe to use sunscreen every day, down from 61 per cent in 2014.

Investigators in Australia studied the safety of repeated application of agglomerated ZnO nanoparticles applied to five human volunteers (aged 20 to 30 years) over five days. This mimics normal product use by consumers. They applied ZnO nanoparticles suspended in a commercial sunscreen base to the skin of volunteers hourly for six hours and daily for five days. Using multiphoton tomography with fluorescence lifetime imaging microscopy, they showed that the nanoparticles remained within the superficial layers of the stratum corneum and in the skin furrows. The fate of ZnO nanoparticles was also characterized in excised human skin in vitro. They did not penetrate the viable epidermis and no cellular toxicity was seen, even after repeated hourly or daily applications typically used for sunscreens.

“The terrible consequences of skin cancer and photoaging are much greater than any toxicity risk posed by approved sunscreens,” stated lead investigator Michael S. Roberts, PhD, of the Therapeutics Research Centre, The University of Queensland Diamantina Institute, Translational Research Institute, Brisbane, and School of Pharmacy and Medical Sciences, University of South Australia, Sansom Institute, Adelaide, QLD, Australia.

“This study has shown that sunscreens containing nano ZnO can be repeatedly applied to the skin with minimal risk of any toxicity. We hope that these findings will help improve consumer confidence in these products, and in turn lead to better sun protection and reduction in ultraviolet-induced skin aging and cancer cases,” he concluded.

“This study reinforces the important public health message that the known benefits of using ZnO nano sunscreens clearly outweigh the perceived risks of using nano sunscreens that are not supported by the scientific evidence,” commented Paul F.A. Wright, PhD, School of Health and Biomedical Sciences, RMIT University, Bundoora, VIC, Australia, in an accompanying editorial. “Of great significance is the investigators’ finding that the slight increase in zinc ion concentrations in viable epidermis was not associated with cellular toxicity under conditions of realistic ZnO nano sunscreen use.”

A November 21, 2018 University of South Australia press release (also on EurekAlert) provides some additional insight into the Australian situation, Note: Links have been removed,

It’s safe to slap on the sunscreen this summer – in repeated doses – despite what you have read about the potential toxicity of sunscreens.

A new study led by the University of Queensland (UQ) and University of South Australia (UniSA) provides the first direct evidence that zinc oxide nanoparticles used in sunscreen neither penetrate the skin nor cause cellular toxicity after repeated applications.

The research, published this week in the Journal of Investigative Dermatology, refutes widespread claims among some public advocacy groups – and a growing belief among consumers – about the safety of nanoparticulate-based sunscreens.

UQ and UniSA lead investigator, Professor Michael Roberts, says the myth about sunscreen toxicity took hold after previous animal studies found much higher skin absorption of zinc-containing sunscreens than in human studies.

“There were concerns that these zinc oxide nanoparticles could be absorbed into the epidermis, with toxic consequences, including DNA damage,” Professor Roberts says.

The toxicity link was picked up by consumers, sparking fears that Australians could reduce their sunscreen use, echoed by a Cancer Council 2017 National Sun Protection Survey showing a drop in the number of people who believed it was safe to use sunscreens every day.

Professor Roberts and his co-researchers in Brisbane, Adelaide, Perth and Germany studied the safety of repeated applications of zinc oxide nanoparticles applied to five volunteers aged 20-30 years.

Volunteers applied the ZnO nanoparticles every hour for six hours on five consecutive days.

“Using superior imaging methods, we established that the nanoparticles remained within the superficial layers of the skin and did not cause any cellular damage,” Professor Roberts says.

“We hope that these findings help improve consumer confidence in these products and in turn lead to better sun protection. The terrible consequences of skin cancer and skin damage caused by prolonged sun exposure are much greater than any toxicity posed by approved sunscreens.”

Here’s a link to and a citation for the paper,

Support for the Safe Use of Zinc Oxide Nanoparticle Sunscreens: Lack of Skin Penetration or Cellular Toxicity after Repeated Application in Volunteers by Yousuf H. Mohammed, Amy Holmes, Isha N. Haridass, Washington Y. Sanchez, Hauke Studier, Jeffrey E. Grice, Heather A.E. Benson, Michael S. Roberts. Journal of Investigative Dermatology. DOI: https://doi.org/10.1016/j.jid.2018.08.024 Article in Press Published online (Dec. 4, 2018?)

As of Dec. 11, 2018, this article is open access.

The roles mathematics and light play in cellular communication

These are two entirely different types of research but taken together they help build a picture about how the cells in our bodies function.

Cells and light

An April 30, 2018 news item on phys.org describes work on controlling biology with light,

Over the past five years, University of Chicago chemist Bozhi Tian has been figuring out how to control biology with light.

A long-term science goal is devices to serve as the interface between researcher and body—both as a way to understand how cells talk among each other and within themselves, and eventually, as a treatment for brain or nervous system disorders [emphasis mine] by stimulating nerves to fire or limbs to move. Silicon—a versatile, biocompatible material used in both solar panels and surgical implants—is a natural choice.

In a paper published April 30 in Nature Biomedical Engineering, Tian’s team laid out a system of design principles for working with silicon to control biology at three levels—from individual organelles inside cells to tissues to entire limbs. The group has demonstrated each in cell or mouse models, including the first time anyone has used light to control behavior without genetic modification.

“We want this to serve as a map, where you can decide which problem you would like to study and immediately find the right material and method to address it,” said Tian, an assistant professor in the Department of Chemistry.

Researchers built this thin layer of silicon lace to modulate neural signals when activated by light. Courtesy of Yuanwen Jiang and Bozhi Tian

An April 30, 2018 University of Chicago news release by Louise Lerner, which originated the news item, describes the work in greater detail,

The scientists’ map lays out best methods to craft silicon devices depending on both the intended task and the scale—ranging from inside a cell to a whole animal.

For example, to affect individual brain cells, silicon can be crafted to respond to light by emitting a tiny ionic current, which encourages neurons to fire. But in order to stimulate limbs, scientists need a system whose signals can travel farther and are stronger—such as a gold-coated silicon material in which light triggers a chemical reaction.

The mechanical properties of the implant are important, too. Say researchers would like to work with a larger piece of the brain, like the cortex, to control motor movement. The brain is a soft, squishy substance, so they’ll need a material that’s similarly soft and flexible, but can bind tightly against the surface. They’d want thin and lacy silicon, say the design principles.

The team favors this method because it doesn’t require genetic modification or a power supply wired in, since the silicon can be fashioned into what are essentially tiny solar panels. (Many other forms of monitoring or interacting with the brain need to have a power supply, and keeping a wire running into a patient is an infection risk.)

They tested the concept in mice and found they could stimulate limb movements by shining light on brain implants. Previous research tested the concept in neurons.

“We don’t have answers to a number of intrinsic questions about biology, such as whether individual mitochondria communicate remotely through bioelectric signals,” said Yuanwen Jiang, the first author on the paper, then a graduate student at UChicago and now a postdoctoral researcher at Stanford. “This set of tools could address such questions as well as pointing the way to potential solutions for nervous system disorders.”

Other UChicago authors were Assoc. Profs. Chin-Tu Chen and Chien-Min Kao, Asst. Prof. Xiaoyang Wu, postdoctoral researchers Jaeseok Yi, Yin Fang, Xiang Gao, Jiping Yue, Hsiu-Ming Tsai, Bing Liu and Yin Fang, graduate students Kelliann Koehler, Vishnu Nair, and Edward Sudzilovsky, and undergraduate student George Freyermuth.

Other researchers on the paper hailed from Northwestern University, the University of Illinois at Chicago and Hong Kong Polytechnic University.

The researchers have also made this video illustrating their work,

Caption: Tiny silicon nanowires (in blue), activated by light, trigger activity in neurons. (Courtesy Yuanwen Jiang and Bozhi Tian)

Here’s a link to and a citation for the paper,

Rational design of silicon structures for optically controlled multiscale biointerfaces by Yuanwen Jiang, Xiaojian Li, Bing Liu, Jaeseok Yi, Yin Fang, Fengyuan Shi, Xiang Gao, Edward Sudzilovsky, Ramya Parameswaran, Kelliann Koehler, Vishnu Nair, Jiping Yue, KuangHua Guo, Yin Fang, Hsiu-Ming Tsai, George Freyermuth, Raymond C. S. Wong, Chien-Min Kao, Chin-Tu Chen, Alan W. Nicholls, Xiaoyang Wu, Gordon M. G. Shepherd, & Bozhi Tian. Nature Biomedical Engineering (2018) doi:10.1038/s41551-018-0230-1 Published: 30 April 2018

This paper is behind a paywall.

Mathematics and how living cells ‘think’

This May 2, 2018 Queensland University of Technology (QUT; Australia) press release is also on EurekAlert,

How does the ‘brain’ of a living cell work, allowing an organism to function and thrive in changing and unfavourable environments?

Queensland University of Technology (QUT) researcher Dr Robyn Araujo has developed new mathematics to solve a longstanding mystery of how the incredibly complex biological networks within cells can adapt and reset themselves after exposure to a new stimulus.

Her findings, published in Nature Communications, provide a new level of understanding of cellular communication and cellular ‘cognition’, and have potential application in a variety of areas, including new targeted cancer therapies and drug resistance.

Dr Araujo, a lecturer in applied and computational mathematics in QUT’s Science and Engineering Faculty, said that while we know a great deal about gene sequences, we have had extremely limited insight into how the proteins encoded by these genes work together as an integrated network – until now.

“Proteins form unfathomably complex networks of chemical reactions that allow cells to communicate and to ‘think’ – essentially giving the cell a ‘cognitive’ ability, or a ‘brain’,” she said. “It has been a longstanding mystery in science how this cellular ‘brain’ works.

“We could never hope to measure the full complexity of cellular networks – the networks are simply too large and interconnected and their component proteins are too variable.

“But mathematics provides a tool that allows us to explore how these networks might be constructed in order to perform as they do.

“My research is giving us a new way to look at unravelling network complexity in nature.”

Dr Araujo’s work has focused on the widely observed function called perfect adaptation – the ability of a network to reset itself after it has been exposed to a new stimulus.

“An example of perfect adaptation is our sense of smell,” she said. “When exposed to an odour we will smell it initially but after a while it seems to us that the odour has disappeared, even though the chemical, the stimulus, is still present.

“Our sense of smell has exhibited perfect adaptation. This process allows it to remain sensitive to further changes in our environment so that we can detect both very faint and very strong odours.

“This kind of adaptation is essentially what takes place inside living cells all the time. Cells are exposed to signals – hormones, growth factors, and other chemicals – and their proteins will tend to react and respond initially, but then settle down to pre-stimulus levels of activity even though the stimulus is still there.

“I studied all the possible ways a network can be constructed and found that to be capable of this perfect adaptation in a robust way, a network has to satisfy an extremely rigid set of mathematical principles. There are a surprisingly limited number of ways a network could be constructed to perform perfect adaptation.

“Essentially we are now discovering the needles in the haystack in terms of the network constructions that can actually exist in nature.

“It is early days, but this opens the door to being able to modify cell networks with drugs and do it in a more robust and rigorous way. Cancer therapy is a potential area of application, and insights into how proteins work at a cellular level is key.”
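The paper works at a level of mathematical generality well beyond a blog post, but the behaviour Dr Araujo describes can be sketched numerically. The toy model below is not from the paper; it is a classic integral-feedback motif, one of the textbook architectures long known to produce perfect adaptation. It shows a node’s activity spiking when a stimulus arrives and then settling back to its pre-stimulus level even though the stimulus persists.

```python
# Toy "perfect adaptation" simulation: an integral-feedback motif.
# Illustrative only -- NOT the networks analysed by Araujo & Liotta.
# x is the responding node; y integrates x's deviation from its setpoint,
# which forces x back to the setpoint at steady state regardless of s.

def simulate(t_end=60.0, dt=0.01, t_step=5.0, stimulus=2.0, setpoint=1.0):
    x, y = setpoint, -setpoint  # pre-stimulus steady state (s = 0)
    peak = x
    t = 0.0
    while t < t_end:
        s = stimulus if t >= t_step else 0.0
        dx = s - y - x           # x responds to the stimulus and to feedback
        dy = x - setpoint        # y accumulates the deviation (integral control)
        x += dx * dt
        y += dy * dt
        peak = max(peak, x)
        t += dt
    return peak, x

peak, final = simulate()
print(f"transient peak: {peak:.2f}, level after adaptation: {final:.3f}")
# x overshoots well above its setpoint, then returns to ~1.0 while s stays on
```

This integral-feedback structure is one of the “needles in the haystack”: at steady state the integrator forces the deviation to zero, so the output resets exactly, whatever the stimulus strength.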

Dr Araujo said the published study was the result of more than “five years of relentless effort to solve this incredibly deep mathematical problem”. She began research in this field while at George Mason University in Virginia in the US.

Her mentor at the university’s College of Science and co-author of the Nature Communications paper, Professor Lance Liotta, said the “amazing and surprising” outcome of Dr Araujo’s study is applicable to any living organism or biochemical network of any size.

“The study is a wonderful example of how mathematics can have a profound impact on society and Dr Araujo’s results will provide a set of completely fresh approaches for scientists in a variety of fields,” he said.

“For example, in strategies to overcome cancer drug resistance – why do tumours frequently adapt and grow back after treatment?

“It could also help understanding of how our hormone system, our immune defences, perfectly adapt to frequent challenges and keep us well, and it has future implications for creating new hypotheses about drug addiction and brain neuron signalling adaptation.”

Here’s a link to and a citation for the paper,

The topological requirements for robust perfect adaptation in networks of any size by Robyn P. Araujo & Lance A. Liotta. Nature Communications, volume 9, Article number: 1757 (2018) doi:10.1038/s41467-018-04151-6 Published: 01 May 2018

This paper is open access.

Quantum entanglement in near-macroscopic objects

Researchers at Finland’s Aalto University seem excited in an April 25, 2018 news item on phys.org,

Perhaps the strangest prediction of quantum theory is entanglement, a phenomenon whereby two distant objects become intertwined in a manner that defies both classical physics and a common-sense understanding of reality. In 1935, Albert Einstein expressed his concern over this concept, referring to it as “spooky action at a distance.”

Today, entanglement is considered a cornerstone of quantum mechanics, and it is the key resource for a host of potentially transformative quantum technologies. Entanglement is, however, extremely fragile, and it has previously been observed only in microscopic systems such as light or atoms, and recently in superconducting electric circuits.

In work recently published in Nature, a team led by Prof. Mika Sillanpää at Aalto University in Finland has shown that entanglement of massive objects can be generated and detected.

The researchers managed to bring the motions of two individual vibrating drumheads—fabricated from metallic aluminium on a silicon chip—into an entangled quantum state. The macroscopic objects in the experiment are truly massive compared to the atomic scale—the circular drumheads have a diameter similar to the width of a thin human hair.

An April 20, 2018 Aalto University press release (also on EurekAlert), which originated the news item, provides more detail,

‘The vibrating bodies are made to interact via a superconducting microwave circuit. The electromagnetic fields in the circuit carry away any thermal disturbances, leaving behind only the quantum mechanical vibrations’, says Professor Sillanpää, describing the experimental setup.

Eliminating all forms of external noise is crucial for the experiments, which is why they have to be conducted at extremely low temperatures near absolute zero, at –273 °C. Remarkably, the experimental approach allows the unusual state of entanglement to persist for long periods of time, in this case up to half an hour. In comparison, measurements on elementary particles have witnessed entanglement to last only tiny fractions of a second.

‘These measurements are challenging but extremely fascinating. In the future, we will attempt to teleport the mechanical vibrations. In quantum teleportation, properties of physical bodies can be transmitted across arbitrary distances using the channel of “spooky action at a distance”. We are still pretty far from Star Trek, though,’ says Dr. Caspar Ockeloen-Korppi, the lead author on the work, who also performed the measurements.

The results demonstrate that it is now possible to have control over the most delicate properties of objects whose size approaches the scale of our daily lives. The achievement opens doors for new kinds of quantum technologies, where the entangled drumheads could be used as routers or sensors. The finding also enables new studies of fundamental physics in, for example, the poorly understood interplay of gravity and quantum mechanics.

The team also included scientists from the University of New South Wales in Australia, the University of Chicago in the USA, and the University of Jyväskylä in Finland, whose theoretical innovations paved the way for the laboratory experiment.

An illustration has been made available,

An illustration of the 15-micrometre-wide drumheads prepared on silicon chips used in the experiment. The drumheads vibrate at a high ultrasound frequency, and the peculiar quantum state predicted by Einstein was created from the vibrations. Image: Aalto University / Petja Hyttinen & Olli Hanhirova, ARKH Architects.

Here’s a link to and a citation for the paper,

Stabilized entanglement of massive mechanical oscillators by C. F. Ockeloen-Korppi, E. Damskägg, J.-M. Pirkkalainen, M. Asjad, A. A. Clerk, F. Massel, M. J. Woolley & M. A. Sillanpää. Nature, volume 556, pages 478–482 (2018) doi:10.1038/s41586-018-0038-x Published online: 25 April 2018

This paper is behind a paywall.

An artificial enzyme uses light to kill bacteria

An April 4, 2018 news item on ScienceDaily announces a light-based approach to killing bacteria,

Researchers from RMIT University [Australia] have developed a new artificial enzyme that uses light to kill bacteria.

The artificial enzymes could one day be used in the fight against infections, and to keep high-risk public spaces like hospitals free of bacteria like E. coli and Golden Staph.

E. coli can cause dysentery and gastroenteritis, while Golden Staph is the major cause of hospital-acquired secondary infections and chronic wound infections.

Made from tiny nanorods — 1000 times smaller than the thickness of a human hair — the “NanoZymes” use visible light to create highly reactive oxygen species that rapidly break down and kill bacteria.

Lead researcher Professor Vipul Bansal, who is an Australian Future Fellow and Director of RMIT’s Sir Ian Potter NanoBioSensing Facility, said the new NanoZymes offer a major edge over nature’s own ability to kill bacteria.

Dead bacteria made beautiful,

Caption: A 3-D rendering of dead bacteria after it has come into contact with the NanoZymes.
Credit: Dr. Chaitali Dekiwadia/ RMIT Microscopy and Microanalysis Facility

An April 5, 2018 RMIT University press release (also on EurekAlert but dated April 4, 2018), which originated the news item, expands on the theme,

“For a number of years we have been attempting to develop artificial enzymes that can fight bacteria, while also offering opportunities to control bacterial infections using external ‘triggers’ and ‘stimuli’,” Bansal said. “Now we have finally cracked it.

“Our NanoZymes are artificial enzymes that combine light with moisture to cause a biochemical reaction that produces OH radicals and breaks down bacteria. Nature’s antibacterial activity does not respond to external triggers such as light.

“We have shown that when shined upon with a flash of white light, the activity of our NanoZymes increases by over 20 times, forming holes in bacterial cells and killing them efficiently.

“This next generation of nanomaterials is likely to offer new opportunities in bacteria-free surfaces and controlling the spread of infections in public hospitals.”

The NanoZymes work in a solution that mimics the fluid in a wound. This solution could be sprayed onto surfaces.

The NanoZymes are also produced as powders to mix with paints, ceramics and other consumer products. This could mean bacteria-free walls and surfaces in hospitals.

Public toilets — places with high levels of bacteria, and in particular E. coli — are also a prime location for the NanoZymes, and the researchers believe their new technology may even have the potential to create self-cleaning toilet bowls.

While the NanoZymes currently use visible light from torches or similar light sources, in the future they could be activated by sunlight.

The researchers have shown that the NanoZymes work in a lab environment. The team is now evaluating the long-term performance of the NanoZymes in consumer products.

“The next step will be to validate the bacteria killing and wound healing ability of these NanoZymes outside of the lab,” Bansal said.

“This NanoZyme technology has huge potential, and we are seeking interest from appropriate industries for joint product development.”

Here’s a link to and a citation for the paper,

Visible-Light-Triggered Reactive-Oxygen-Species-Mediated Antibacterial Activity of Peroxidase-Mimic CuO Nanorods by Md. Nurul Karim, Mandeep Singh, Pabudi Weerathunge, Pengju Bian, Rongkun Zheng, Chaitali Dekiwadia, Taimur Ahmed, Sumeet Walia, Enrico Della Gaspera, Sanjay Singh, Rajesh Ramanathan, and Vipul Bansal. ACS Appl. Nano Mater., Article ASAP DOI: 10.1021/acsanm.8b00153 Publication Date (Web): March 6, 2018

Copyright © 2018 American Chemical Society

This paper is open access.

Robot radiologists (artificially intelligent doctors)

Mutaz Musa, a physician at New York Presbyterian Hospital/Weill Cornell (Department of Emergency Medicine) and software developer in New York City, has penned an eye-opening opinion piece about artificial intelligence (or robots if you prefer) and the field of radiology. From a June 25, 2018 opinion piece for The Scientist (Note: Links have been removed),

Although artificial intelligence has raised fears of job loss for many, we doctors have thus far enjoyed a smug sense of security. There are signs, however, that the first wave of AI-driven redundancies among doctors is fast approaching. And radiologists seem to be first on the chopping block.

Andrew Ng, co-founder of online learning platform Coursera and former chief scientist of “China’s Google,” Baidu, recently announced the development of CheXNet, a convolutional neural net capable of recognizing pneumonia and other thoracic pathologies on chest X-rays better than human radiologists. Earlier this year, a Hungarian group developed a similar system for detecting and classifying features of breast cancer in mammograms. In 2017, Adelaide University researchers published details of a bot capable of matching human radiologist performance in detecting hip fractures. And, of course, Google achieved superhuman proficiency in detecting diabetic retinopathy in fundus photographs, a task outside the scope of most radiologists.

Beyond single, two-dimensional radiographs, a team at Oxford University developed a system for detecting spinal disease from MRI data with a performance equivalent to a human radiologist. Meanwhile, researchers at the University of California, Los Angeles, reported detecting pathology on head CT scans with an error rate more than 20 times lower than a human radiologist.

Although these particular projects are still in the research phase and far from perfect—for instance, often pitting their machines against a limited number of radiologists—the pace of progress alone is telling.

Others have already taken their algorithms out of the lab and into the marketplace. Enlitic, founded by Aussie serial entrepreneur and University of San Francisco researcher Jeremy Howard, is a Bay-Area startup that offers automated X-ray and chest CAT scan interpretation services. Enlitic’s systems putatively can judge the malignancy of nodules up to 50 percent more accurately than a panel of radiologists and identify fractures so small they’d typically be missed by the human eye. One of Enlitic’s largest investors, Capitol Health, owns a network of diagnostic imaging centers throughout Australia, anticipating the broad rollout of this technology. Another Bay-Area startup, Arterys, offers cloud-based medical imaging diagnostics. Arterys’s services extend beyond plain films to cardiac MRIs and CAT scans of the chest and abdomen. And there are many others.

Musa has offered a compelling argument with lots of links to supporting evidence.

[downloaded from https://www.the-scientist.com/news-opinion/opinion–rise-of-the-robot-radiologists-64356]

And evidence keeps mounting, I just stumbled across this June 30, 2018 news item on Xinhuanet.com,

An artificial intelligence (AI) system scored 2:0 against elite human physicians Saturday in two rounds of competitions in diagnosing brain tumors and predicting hematoma expansion in Beijing.

The BioMind AI system, developed by the Artificial Intelligence Research Centre for Neurological Disorders at the Beijing Tiantan Hospital and a research team from the Capital Medical University, made correct diagnoses in 87 percent of 225 cases in about 15 minutes, while a team of 15 senior doctors only achieved 66-percent accuracy.

The AI also gave correct predictions in 83 percent of brain hematoma expansion cases, outperforming the 63-percent accuracy among a group of physicians from renowned hospitals across the country.

The outcomes for human physicians were quite normal and even better than the average accuracy in ordinary hospitals, said Gao Peiyi, head of the radiology department at Tiantan Hospital, a leading institution on neurology and neurosurgery.

To train the AI, developers fed it tens of thousands of images of nervous system-related diseases that the Tiantan Hospital has archived over the past 10 years, making it capable of diagnosing common neurological diseases such as meningioma and glioma with an accuracy rate of over 90 percent, comparable to that of a senior doctor.

All the cases were real and contributed by the hospital, but never used as training material for the AI, according to the organizer.

Wang Yongjun, executive vice president of the Tiantan Hospital, said that he personally did not care very much about who won, because the contest was never intended to pit humans against technology but to help doctors learn and improve [emphasis mine] through interactions with technology.

“I hope through this competition, doctors can experience the power of artificial intelligence. This is especially so for some doctors who are skeptical about artificial intelligence. I hope they can further understand AI and eliminate their fears toward it,” said Wang.

Dr. Lin Yi who participated and lost in the second round, said that she welcomes AI, as it is not a threat but a “friend.” [emphasis mine]

AI will not only reduce the workload but also push doctors to keep learning and improve their skills, said Lin.

Bian Xiuwu, an academician with the Chinese Academy of Sciences and a member of the competition’s jury, said there has never been an absolute standard correct answer in diagnosing developing diseases, and the AI would only serve as an assistant to doctors in giving preliminary results. [emphasis mine]

Dr. Paul Parizel, former president of the European Society of Radiology and another member of the jury, also agreed that AI will not replace doctors, but will instead function similar to how GPS does for drivers. [emphasis mine]

Dr. Gauden Galea, representative of the World Health Organization in China, said AI is an exciting tool for healthcare but still in the primitive stages.

Based on the size of its population and the huge volume of accessible digital medical data, China has a unique advantage in developing medical AI, according to Galea.

China has introduced a series of plans in developing AI applications in recent years.

In 2017, the State Council issued a development plan on the new generation of Artificial Intelligence and the Ministry of Industry and Information Technology also issued the “Three-Year Action Plan for Promoting the Development of a New Generation of Artificial Intelligence (2018-2020).”

The Action Plan proposed developing medical image-assisted diagnostic systems to support medicine in various fields.

I note the reference to cars and global positioning systems (GPS) and their role as ‘helpers’; it seems no one at the ‘AI and radiology’ competition has heard of driverless cars. Here’s Musa on those reassuring comments about how the technology won’t replace experts but rather augment their skills,

To be sure, these services frame themselves as “support products” that “make doctors faster,” rather than replacements that make doctors redundant. This language may reflect a reserved view of the technology, though it likely also represents a marketing strategy keen to avoid threatening or antagonizing incumbents. After all, many of the customers themselves, for now, are radiologists.

Radiology isn’t the only area where experts might find themselves displaced.

Eye experts

It seems inroads have been made by artificial intelligence systems (AI) into the diagnosis of eye diseases. It got the ‘Fast Company’ treatment (exciting new tech, learn all about it) as can be seen further down in this posting. First, here’s a more restrained announcement, from an August 14, 2018 news item on phys.org (Note: A link has been removed),

An artificial intelligence (AI) system that can recommend the correct referral decision for more than 50 eye diseases as accurately as experts has been developed by Moorfields Eye Hospital NHS Foundation Trust, DeepMind Health and UCL [University College London].

The breakthrough research, published online by Nature Medicine, describes how machine-learning technology has been successfully trained on thousands of historic de-personalised eye scans to identify features of eye disease and recommend how patients should be referred for care.

Researchers hope the technology could one day transform the way professionals carry out eye tests, allowing them to spot conditions earlier and prioritise patients with the most serious eye diseases before irreversible damage sets in.

An August 13, 2018 UCL press release, which originated the news item, describes the research and the reasons behind it in more detail,

More than 285 million people worldwide live with some form of sight loss, including more than two million people in the UK. Eye diseases remain one of the biggest causes of sight loss, and many can be prevented with early detection and treatment.

Dr Pearse Keane, NIHR Clinician Scientist at the UCL Institute of Ophthalmology and consultant ophthalmologist at Moorfields Eye Hospital NHS Foundation Trust said: “The number of eye scans we’re performing is growing at a pace much faster than human experts are able to interpret them. There is a risk that this may cause delays in the diagnosis and treatment of sight-threatening diseases, which can be devastating for patients.”

“The AI technology we’re developing is designed to prioritise patients who need to be seen and treated urgently by a doctor or eye care professional. If we can diagnose and treat eye conditions early, it gives us the best chance of saving people’s sight. With further research it could lead to greater consistency and quality of care for patients with eye problems in the future.”

The study, launched in 2016, brought together leading NHS eye health professionals and scientists from UCL and the National Institute for Health Research (NIHR) with some of the UK’s top technologists at DeepMind to investigate whether AI technology could help improve the care of patients with sight-threatening diseases, such as age-related macular degeneration and diabetic eye disease.

Using two types of neural network – mathematical systems for identifying patterns in images or data – the AI system quickly learnt to identify 10 features of eye disease from highly complex optical coherence tomography (OCT) scans. The system was then able to recommend a referral decision based on the most urgent conditions detected.

To establish whether the AI system was making correct referrals, clinicians also viewed the same OCT scans and made their own referral decisions. The study concluded that AI was able to make the right referral recommendation more than 94% of the time, matching the performance of expert clinicians.

The AI has been developed with two unique features which maximise its potential use in eye care. Firstly, the system can provide information that helps explain to eye care professionals how it arrives at its recommendations. This information includes visuals of the features of eye disease it has identified on the OCT scan and the level of confidence the system has in its recommendations, in the form of a percentage. This functionality is crucial in helping clinicians scrutinise the technology’s recommendations and check its accuracy before deciding the type of care and treatment a patient receives.

Secondly, the AI system can be easily applied to different types of eye scanner, not just the specific model on which it was trained. This could significantly increase the number of people who benefit from this technology and future-proof it, so it can still be used even as OCT scanners are upgraded or replaced over time.
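The press release doesn’t say how the system’s confidence percentage is computed. The real Moorfields/DeepMind pipeline is far more elaborate (two networks, OCT segmentation, ensembling), but the generic idea of turning raw model scores into a recommendation with a percentage attached is commonly a softmax; the sketch below is a hypothetical illustration of that step only, with made-up category names and numbers.

```python
import math

# Hypothetical sketch: convert raw per-category scores (logits) into a
# referral recommendation plus a confidence percentage via a softmax.
# The categories and scores below are invented for illustration; they are
# not DeepMind's actual output format.

def recommend(scores):
    """scores: dict mapping referral category -> raw model score (logit)."""
    exps = {k: math.exp(v) for k, v in scores.items()}
    total = sum(exps.values())
    probs = {k: v / total for k, v in exps.items()}
    best = max(probs, key=probs.get)
    return best, round(100 * probs[best], 1)

# Made-up scores for one scan:
decision, confidence = recommend(
    {"urgent": 3.1, "semi-urgent": 1.2, "routine": 0.4, "observation": -0.5}
)
print(decision, confidence)  # top category, with its confidence as a percentage
```

Reporting the percentage alongside the decision is what lets a clinician judge, as the press release describes, how much scrutiny a given recommendation deserves.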

The next step is for the research to go through clinical trials to explore how this technology might improve patient care in practice, and regulatory approval before it can be used in hospitals and other clinical settings.

If clinical trials are successful in demonstrating that the technology can be used safely and effectively, Moorfields will be able to use an eventual, regulatory-approved product for free, across all 30 of their UK hospitals and community clinics, for an initial period of five years.

The work that has gone into this project will also help accelerate wider NHS research for many years to come. For example, DeepMind has invested significant resources to clean, curate and label Moorfields’ de-identified research dataset to create one of the most advanced eye research databases in the world.

Moorfields owns this database as a non-commercial public asset, which is already forming the basis of nine separate medical research studies. In addition, Moorfields can also use DeepMind’s trained AI model for future non-commercial research efforts, which could help advance medical research even further.

Mustafa Suleyman, Co-founder and Head of Applied AI at DeepMind Health, said: “We set up DeepMind Health because we believe artificial intelligence can help solve some of society’s biggest health challenges, like avoidable sight loss, which affects millions of people across the globe. These incredibly exciting results take us one step closer to that goal and could, in time, transform the diagnosis, treatment and management of patients with sight threatening eye conditions, not just at Moorfields, but around the world.”

Professor Sir Peng Tee Khaw, director of the NIHR Biomedical Research Centre at Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology said: “The results of this pioneering research with DeepMind are very exciting and demonstrate the potential sight-saving impact AI could have for patients. I am in no doubt that AI has a vital role to play in the future of healthcare, particularly when it comes to training and helping medical professionals so that patients benefit from vital treatment earlier than might previously have been possible. This shows the transformative research that can be carried out in the UK combining world leading industry and NIHR/NHS hospital/university partnerships.”

Matt Hancock, Health and Social Care Secretary, said: “This is hugely exciting and exactly the type of technology which will benefit the NHS in the long term and improve patient care – that’s why we fund over a billion pounds a year in health research as part of our long term plan for the NHS.”

Here’s a link to and a citation for the study,

Clinically applicable deep learning for diagnosis and referral in retinal disease by Jeffrey De Fauw, Joseph R. Ledsam, Bernardino Romera-Paredes, Stanislav Nikolov, Nenad Tomasev, Sam Blackwell, Harry Askham, Xavier Glorot, Brendan O’Donoghue, Daniel Visentin, George van den Driessche, Balaji Lakshminarayanan, Clemens Meyer, Faith Mackinder, Simon Bouton, Kareem Ayoub, Reena Chopra, Dominic King, Alan Karthikesalingam, Cían O. Hughes, Rosalind Raine, Julian Hughes, Dawn A. Sim, Catherine Egan, Adnan Tufail, Hugh Montgomery, Demis Hassabis, Geraint Rees, Trevor Back, Peng T. Khaw, Mustafa Suleyman, Julien Cornebise, Pearse A. Keane, & Olaf Ronneberger. Nature Medicine (2018) DOI: https://doi.org/10.1038/s41591-018-0107-6 Published 13 August 2018

This paper is behind a paywall.

And now, Melissa Locker’s August 15, 2018 article for Fast Company (Note: Links have been removed),

In a paper published in Nature Medicine on Monday, Google’s DeepMind subsidiary, UCL, and researchers at Moorfields Eye Hospital showed off their new AI system. The researchers used deep learning to create algorithm-driven software that can identify common patterns in data culled from dozens of common eye diseases from 3D scans. The result is an AI that can identify more than 50 diseases with incredible accuracy and can then refer patients to a specialist. Even more important, though, is that the AI can explain why a diagnosis was made, indicating which part of the scan prompted the outcome. It’s an important step in both medicine and in making AIs slightly more human.

The editor or writer has even highlighted the sentence about the system’s accuracy—not just good but incredible!

I will be publishing something soon [my August 21, 2018 posting] which highlights some of the questions one might want to ask about AI and medicine before diving headfirst into this brave new world of medicine.

The joys of an electronic ‘pill’: Could Canadian Olympic athletes’ training be hacked?

Lori Ewing (Canadian Press), in an August 3, 2018 article on the Canadian Broadcasting Corporation news website, heralds a new technology intended for the 2020 Olympics in Tokyo (Japan) but being tested now at the 2018 North American, Central American and Caribbean Athletics Association (NACAC) Track & Field Championships, known as Toronto 2018: Track & Field in the 6ix (Aug. 10-12, 2018).

It’s described as a ‘computerized pill’ that will allow athletes to regulate their body temperature during competition or training workouts, from the August 3, 2018 article,

“We can take someone like Evan [Dunfee, a race walker], have him swallow the little pill, do a full four-hour workout, and then come back and download the whole thing, so we get core temperature data every 30 seconds through that whole workout,” said Trent Stellingwerff, a sport scientist who works with Canada’s Olympic athletes.

“The two biggest factors of core temperature are obviously the outdoor humidex, heat and humidity, but also exercise intensity.”

Bluetooth technology allows Stellingwerff to gather immediate data with a handheld device — think a tricorder in “Star Trek.” When away from the monitor, the ingestible device also stores up to 16 hours of measurements, which can be wirelessly transmitted once back in range.

“That pill is going to change the way that we understand how the body responds to heat, because we just get so much information that wasn’t possible before,” Dunfee said. “Swallow a pill, after the race or after the training session, Trent will come up, and just hold the phone [emphasis mine] to your stomach and download all the information. It’s pretty crazy.”

First off, it’s probably not a pill or tablet but a gelcap and it sounds like the device is a wireless biosensor. As Ewing notes, the device collects data and transmits it.
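Taking the article’s numbers at face value (one core-temperature reading every 30 seconds, a four-hour workout, up to 16 hours of onboard storage), the arithmetic of what the sensor holds is easy to sketch. Everything beyond those three figures, including the reading format and the summary function, is hypothetical; BodyCap doesn’t publish its data format.

```python
# Back-of-the-envelope numbers for the ingestible sensor described above.
# Sampling interval and storage window come from the article; the fake
# session data and summary function are invented for illustration.

SAMPLE_INTERVAL_S = 30          # one reading every 30 seconds
WORKOUT_HOURS = 4               # Dunfee's four-hour workout
STORAGE_HOURS = 16              # onboard buffer when out of Bluetooth range

workout_samples = WORKOUT_HOURS * 3600 // SAMPLE_INTERVAL_S
buffer_capacity = STORAGE_HOURS * 3600 // SAMPLE_INTERVAL_S
print(workout_samples, buffer_capacity)  # 480 readings per workout, 1920 buffered

# A downloaded session could then be summarised in a few lines:
def summarise(readings_c):
    return {"min": min(readings_c), "max": max(readings_c),
            "mean": round(sum(readings_c) / len(readings_c), 2)}

# Made-up core temperatures (deg C) drifting upward over a hot workout:
fake_session = [37.0 + 0.002 * i for i in range(workout_samples)]
print(summarise(fake_session))
```

In other words, a full workout is a modest 480 data points; the interesting part is the correlation Stellingwerff describes between that curve, heat, humidity and exercise intensity.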

Here’s how the French company, BodyCap, supplying the technology describes their product, from the company’s e-Celsius Performance webpage, (assuming this is the product being used),

Continuous core body temperature measurement

Main applications are:

Risk reduction for people in extreme situations, such as elite athletes. During exercise in a hot environment, thermal stress is amplified by the external temperature and the environment’s humidity. The saturation of the body’s thermoregulation mechanism can quickly cause hyperthermia to levels that may cause nausea, fainting or death.

Performance optimisation for elite athletes. This ingestible pill leaves the user fully mobile. The device keeps a continuous record of temperature during training sessions, competition and during the recovery phase. The data can then be used to correlate thermoregulation with performances. This enables the development of customised training protocols for each athlete.

e-Celsius Performance® can be used for all sports, including water sports. Its application is best suited to sports that are physically intensive like football, rugby, cycling, long distance running, tennis or those that take place in environments with extreme temperature conditions, like diving or skiing.

e-Celsius Performance®, is a miniaturised ingestible electronic pill that wirelessly transmits a continuous measurement of gastrointestinal temperature. [emphasis mine]

The data are stored on a monitor called e-Viewer Performance®. This device [emphases mine] shows alerts if the measurement is outside the desired range. The activation box is used to turn the pill on from standby mode and connect the e-Celsius Performance pill with the monitor for data collection in either real time or by recovery from the internal memory of e-Celsius Performance®. Each monitor can be used with up to three pills at once to enable extended use.

The monitor’s interface allows the user to download data to a PC/Mac for storage. The pill is safe, non-invasive and easy to use, leaving the gastric system after one or two days, [emphasis mine] depending on individual transit time.
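The monitor’s alerting behaviour described above can be sketched as a simple range check across paired pills. To be clear, the threshold values, pill IDs and function below are hypothetical illustrations; the only details drawn from BodyCap’s description are that the monitor flags readings outside a desired range and can pair with up to three pills at once.

```python
# Hypothetical sketch: the thresholds and alert mechanism are assumptions,
# not BodyCap's published specification.
SAFE_RANGE = (35.0, 39.0)  # degrees C, illustrative bounds only


def check_alerts(readings_by_pill, low=SAFE_RANGE[0], high=SAFE_RANGE[1]):
    """Flag any reading outside the desired range, per paired pill.

    readings_by_pill maps a pill ID to a list of (timestamp, temp_c)
    tuples; a monitor can pair with up to three pills at once.
    """
    alerts = []
    for pill_id, readings in readings_by_pill.items():
        for ts, temp in readings:
            if not (low <= temp <= high):
                alerts.append((pill_id, ts, temp))
    return alerts


# Example: two paired pills, one reading drifting toward hyperthermia.
data = {
    "pill-A": [(0, 37.1), (30, 37.3)],
    "pill-B": [(0, 37.0), (30, 39.6)],
}
print(check_alerts(data))  # [('pill-B', 30, 39.6)]
```

A real monitor would presumably raise the alert in real time rather than on batch download, but the core logic is the same comparison against a configured range.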

I found Dunfee’s description mildly confusing but that can be traced to his mention of wireless transmission to a phone. Ewing describes a handheld device which is consistent with the company’s product description. There is no mention of the potential for hacking but I would hope Athletics Canada and BodyCap are keeping up with current concerns over hacking and interference (e.g., Facebook/Cambridge Analytica, Russians and the 2016 US election, Roberto Rocha’s Aug. 3, 2018 article for CBC titled: Data sheds light on how Russian Twitter trolls targeted Canadians, etc.).

Moving on, this type of technology was first featured here in a February 11, 2014 posting (scroll down to the gif where an electronic circuit dissolves in water) and again in a November 23, 2015 posting about wearable and ingestible technologies, but this is the first real-life application I’ve seen for it.

Coincidentally, an August 2, 2018 Frontiers [Publishing] news release on EurekAlert announced this piece of research (published in June 2018) questioning whether we need this much data and whether these devices work as promoted,

Wearable [and, in the future, ingestible?] devices are increasingly bought to track and measure health and sports performance: [emphasis mine] from the number of steps walked each day to a person’s metabolic efficiency, from the quality of brain function to the quantity of oxygen inhaled while asleep. But the truth is we know very little about how well these sensors and machines work [emphasis mine] – let alone whether they deliver useful information, according to a new review published in Frontiers in Physiology.

“Despite the fact that we live in an era of ‘big data,’ we know surprisingly little about the suitability or effectiveness of these devices,” says lead author Dr Jonathan Peake of the School of Biomedical Sciences and Institute of Health and Biomedical Innovation at the Queensland University of Technology in Australia. “Only five percent of these devices have been formally validated.”

The authors reviewed information on devices used both by everyday people desiring to keep track of their physical and psychological health and by athletes training to achieve certain performance levels. [emphases mine] The devices — ranging from so-called wrist trackers to smart garments and body sensors [emphasis mine] designed to track our body’s vital signs and responses to stress and environmental influences — fall into six categories:

  • devices for monitoring hydration status and metabolism
  • devices, garments and mobile applications for monitoring physical and psychological stress
  • wearable devices that provide physical biofeedback (e.g., muscle stimulation, haptic feedback)
  • devices that provide cognitive feedback and training
  • devices and applications for monitoring and promoting sleep
  • devices and applications for evaluating concussion

The authors investigated key issues, such as: what the technology claims to do; whether the technology has been independently validated against some recognized standards; whether the technology is reliable and what, if any, calibration is needed; and finally, whether the item is commercially available or still under development.

The authors say that technology developed for research purposes generally seems to be more credible than devices created purely for commercial reasons.

“What is critical to understand here is that while most of these technologies are not labeled as ‘medical devices’ per se, their very existence, let alone the accompanying marketing, conveys a sensibility that they can be used to measure a standard of health,” says Peake. “There are ethical issues with this assumption that need to be addressed.” [emphases mine]

For example, self-diagnosis based on self-gathered data could be inconsistent with clinical analysis based on a medical professional’s assessment. And just as body mass index charts of the past really only provided general guidelines and didn’t take into account a person’s genetic predisposition or athletic build, today’s technology is similarly limited.

The authors are particularly concerned about those technologies that seek to confirm or correlate whether someone has sustained or recovered from a concussion, whether from sports or military service.

“We have to be very careful here because there is so much variability,” says Peake. “The technology could be quite useful, but it can’t and should never replace assessment by a trained medical professional.”

Speaking generally again now, Peake says it is important to establish whether using wearable devices affects people’s knowledge and attitude about their own health and whether paying such close attention to our bodies could in fact create a harmful obsession with personal health, either for individuals using the devices, or for family members. Still, self-monitoring may reveal undiagnosed health problems, said Peake, although population data is more likely to point to false positives.

“What we do know is that we need to start studying these devices and the trends they are creating,” says Peake. “This is a booming industry.”

In fact, a March 2018 study by P&S Market Research indicates the wearable market is expected to generate $48.2 billion in revenue by 2023. That’s a mere five years into the future.

The authors highlight a number of areas for investigation in order to develop reasonable consumer policies around this growing industry. These include how rigorously the device/technology has been evaluated and the strength of evidence that the device/technology actually produces the desired outcomes.

“And I’ll add a final question: Is wearing a device that continuously tracks your body’s actions, your brain activity, and your metabolic function — then wirelessly transmits that data to either a cloud-based databank or some other storage — safe, for users? Will it help us improve our health?” asked Peake. “We need to ask these questions and research the answers.”

The authors were not examining ingestible biosensors, nor any issues related to core temperature data, but it would seem that some of the same concerns could apply, especially if and when this technology is brought to the consumer market.

Here’s a link to and a citation for the paper,

Critical Review of Consumer Wearables, Mobile Applications, and Equipment for Providing Biofeedback, Monitoring Stress, and Sleep in Physically Active Populations by Jonathan M. Peake, Graham Kerr, and John P. Sullivan. Front. Physiol., 28 June 2018 | https://doi.org/10.3389/fphys.2018.00743

This paper is open access.