Tag Archives: University of Exeter

Your garden as a ‘living artwork’ for insects

Pollinator Pathmaker Eden Project Edition. Photo Royston Hunt. Courtesy Alexandra Daisy Ginsberg Ltd

I suppose you could call this a kind of citizen science as well as an art project. A September 11, 2024 news item on phys.org describes a new scientific art project designed for insects,

Gardens can become “living artworks” to help prevent the disastrous decline of pollinating insects, according to researchers working on a new project.

Pollinator Pathmaker is an artwork by Dr. Alexandra Daisy Ginsberg that uses an algorithm to generate unique planting designs that prioritize pollinators’ needs over human aesthetic tastes.

A September 11, 2024 University of Exeter press release (also on EurekAlert), which originated the news item, provides more detail about the research project,

Originally commissioned by the Eden Project in Cornwall in 2021, the general public can access the artist’s online tool (www.pollinator.art) to design and plant their own living artwork for local pollinators.

While pollinators – including bees, butterflies, moths, wasps, ants and beetles – are the main audience, the results may also be appealing to humans.

Pollinator Pathmaker allows users to input the specific details of their garden, including size of plot, location conditions, soil type, and play with how the algorithm will “solve” the planting to optimise it for pollinator diversity, rather than how it looks to humans.

The new research project – led by the universities of Exeter and Edinburgh – has received funding from UK Research and Innovation as part of a new cross research council responsive mode scheme to support exciting interdisciplinary research.

The project aims to demonstrate how an artwork can help to drive innovative ecological conservation, by asking residents in the village of Constantine in Cornwall to plant a network of Pollinator Pathmaker living artworks in their gardens. These will become part of the multidisciplinary study.

“Pollinators are declining rapidly worldwide and – with urban and agricultural areas often hostile to them – gardens are increasingly vital refuges,” said Dr Christopher Kaiser-Bunbury, of the Centre for Ecology and Conservation on Exeter’s Penryn Campus in Cornwall.

“Our research project brings together art, ecology, social science and philosophy to reimagine what gardens are, and what they’re for.

“By reflecting on fundamental questions like these, we will empower people to rethink the way they see gardens.

 “We hope Pollinator Pathmaker will help to create connected networks of pollinator-friendly gardens across towns and cities.”
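The press release doesn’t explain how Pollinator Pathmaker’s algorithm actually works, but the idea of “solving” a planting for pollinator diversity rather than for looks can be illustrated with a toy greedy optimiser. The sketch below is purely my own illustration, not the artwork’s real data or method: the plant names, pollinator groups, and selection rule are all invented.

```python
# Toy illustration only: greedily pick plants that support the widest range of
# pollinator groups within a fixed number of planting slots. The plant list and
# pollinator groups below are invented; this is not Pollinator Pathmaker's
# algorithm or data.

PLANTS = {
    # plant name: pollinator groups it supports (hypothetical)
    "lavender": {"bumblebees", "honeybees", "butterflies"},
    "foxglove": {"bumblebees"},
    "ivy": {"hoverflies", "wasps", "honeybees"},
    "knapweed": {"butterflies", "beetles", "bumblebees"},
    "wild carrot": {"hoverflies", "beetles", "wasps"},
}

def plan_garden(slots: int) -> list[str]:
    """Greedily choose plants that add the most pollinator groups not yet covered."""
    chosen, covered = [], set()
    candidates = dict(PLANTS)
    for _ in range(slots):
        if not candidates:
            break
        # Pick the plant covering the most pollinator groups still missing.
        best = max(candidates, key=lambda p: len(candidates[p] - covered))
        if not candidates[best] - covered:
            break  # nothing left to gain
        covered |= candidates.pop(best)
        chosen.append(best)
    return chosen

if __name__ == "__main__":
    print(plan_garden(slots=3))  # -> ['lavender', 'wild carrot'] with this toy data
```

A real tool would also have to weigh plot size, location conditions and soil type (the inputs mentioned above), which is presumably where the actual algorithm earns its keep.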

Good luck with the pollinators!

Study says quantum computing will radically alter the application of copyright law

I was expecting more speculation about the possibilities that quantum computing might afford with regard to copyright law. According to the press release, this study is primarily focused on the impact that greater computing speed and power will have on copyright and, presumably, other forms of intellectual property. From a March 4, 2024 University of Exeter press release (also on EurekAlert),

Quantum computing will radically transform the application of the law – challenging long-held notions of copyright, a new study says.

Faster computing will bring exponentially greater possibilities in the tracking and tracing of the legal owners of art, music, culture and books.  

This is likely to mean more copyright infringements, but also to make it easier for lawyers to clamp down on lawbreaking. However, faster computers will also potentially be able to break or circumvent certain older enforcement technologies.

The research says quantum computing could lead to an “exponentially” greater number of re-uses of copyright works without permission, and tracking of anyone breaking the law is likely to be possible in many circumstances.

Dr James Griffin, from the University of Exeter [UK] Law School, who led the study, said: “Quantum computers will have sufficient computing power to be able to make judgement calls [emphasis mine] as to whether or not re-uses are likely to be copyright infringements, skirting the boundaries of the law in a way that has yet to be fully tested in practice.

“Copyright infringements could become more commonplace due to the use of quantum computers, but the enforcement of such laws could also increase. This will potentially favour certain forms of content over others.”

Content with embedded quantum watermarks will be more likely to be protected than earlier forms of content without such watermarks. The exponential speed that quantum computing brings will make it easier to produce more copies of existing copyright works.

Existing artworks will be altered on a large scale for use in AI-generated artistic works. Enhanced computing power will see the reuse of elements of films such as scenes, characters, music and scripts.

Dr Griffin said: “The nature of quantum computing also means that there could be more enforcement of copyright law. We can expect that there will be more use of technological protection measures, as well as copyright management information devices such as watermarks, and more use of filtering mechanisms to be able to detect, prevent and contain copyright infringements.

Copyright management information techniques are better suited to quantum computers because they allow for more finely grained analysis of potential infringements, and because they require greater computing power to be applied broadly both to computer software and to the actions of the users of such software.

Dr Griffin said: “A quantum paradox [emphasis mine] is thus developing, in that there are likely to be more infringements possible, whilst technical devices will simultaneously develop in an attempt to prevent any alleged possible or potential copyright infringements. Content will increasingly be made in a manner difficult to break, with enhanced encryption.

“Meanwhile, due to the expense of large-scale quantum computing, we can expect more content to be streamed and less owned; content will be kept remotely in order to enhance the notion that utilising such data in breach of contractual terms would be akin to breaking into someone’s physical house or committing a similar fraudulent activity.

Quantum computers enable creators to make a large number of small-scale works. This could pose challenges regarding the tests of copyright originality. For example, a story written for a quantum computer game could be constantly changing and evolving according to the actions of the player, not simply following predefined paths but utilising complex AI algorithms. [emphasis mine]

Some interesting issues are raised in this press release. (1) Can any computer, quantum or otherwise, make a judgment call? (2) The ‘quantum paradox’ seems like a perfectly predictable outcome. After all, regular computers facilitated all kinds of new opportunities for infringement and prevention. What makes this a ‘quantum paradox’? (3) The evolving computer game seems more like an AI issue. What makes this a quantum computing problem? The answers to these questions may be in the study but that presents a problem.

Ordinarily, I’d offer a link to the study but it’s not accessible until 2025. Here’s a citation,

Quantum Computing and Copyright Law: A Wave of Change or a Mere Irrelevant Particle? by James G. H. Griffin. Intellectual Property Quarterly 2024 Issue 1, pp. 22 – 39. Published February 21, 2024. Under embargo until 21 February 2025 [emphasis mine] in compliance with publisher policy

There is an online record for the study on this Open Research Exeter (ORE) webpage where you can request a copy of the paper.

Fishes ‘talk’ and ‘sing’

This posting started out with two items and then it became more. If you’re interested in marine bioacoustics, especially the work that’s been announced in the last four months, read on.

Fish songs

This item, about how fish sounds (songs) signify successful coral reef restoration, got coverage on BBC (British Broadcasting Corporation), CBC (Canadian Broadcasting Corporation) and elsewhere. This video is courtesy of the Guardian Newspaper,

Whoops and grunts: ‘bizarre’ fish songs raise hopes for coral reef recovery https://www.theguardian.com/environme…

A December 8, 2021 University of Exeter press release (also on EurekAlert) explains why the sounds give hope (Note: Links have been removed),

Newly discovered fish songs demonstrate reef restoration success

Whoops, croaks, growls, raspberries and foghorns are among the sounds that demonstrate the success of a coral reef restoration project.

Thousands of square metres of coral are being grown on previously destroyed reefs in Indonesia, but until now it was unclear whether these new corals would revive the entire reef ecosystem.

Now a new study, led by researchers from the University of Exeter and the University of Bristol, finds a healthy, diverse soundscape on the restored reefs.

These sounds – many of which have never been recorded before – can be used alongside visual observations to monitor these vital ecosystems.

“Restoration projects can be successful at growing coral, but that’s only part of the ecosystem,” said lead author Dr Tim Lamont, of the University of Exeter and the Mars Coral Reef Restoration Project, which is restoring the reefs in central Indonesia.

“This study provides exciting evidence that restoration really works for the other reef creatures too – by listening to the reefs, we’ve documented the return of a diverse range of animals.”

Professor Steve Simpson, from the University of Bristol, added: “Some of the sounds we recorded are really bizarre, and new to us as scientists.  

“We have a lot still to learn about what they all mean and the animals that are making them. But for now, it’s amazing to be able to hear the ecosystem come back to life.”

The soundscapes of the restored reefs are not identical to those of existing healthy reefs – but the diversity of sounds is similar, suggesting a healthy and functioning ecosystem.

There were significantly more fish sounds recorded on both healthy and restored reefs than on degraded reefs.

This study used acoustic recordings taken in 2018 and 2019 as part of the monitoring programme for the Mars Coral Reef Restoration Project.

The results are positive for the project’s approach, in which hexagonal metal frames called ‘Reef Stars’ are seeded with coral and laid over a large area. The Reef Stars stabilise loose rubble and kickstart rapid coral growth, leading to the revival of the wider ecosystem.  

Mochyudho Prasetya, of the Mars Coral Reef Restoration Project, said: “We have been restoring and monitoring these reefs here in Indonesia for many years. Now it is amazing to see more and more evidence that our work is helping the reefs come back to life.”

Professor David Smith, Chief Marine Scientist for Mars Incorporated, added: “When the soundscape comes back like this, the reef has a better chance of becoming self-sustaining because those sounds attract more animals that maintain and diversify reef populations.”

Asked about the multiple threats facing coral reefs, including climate change and water pollution, Dr Lamont said: “If we don’t address these wider problems, conditions for reefs will get more and more hostile, and eventually restoration will become impossible.

“Our study shows that reef restoration can really work, but it’s only part of a solution that must also include rapid action on climate change and other threats to reefs worldwide.”

The study was partly funded by the Natural Environment Research Council and the Swiss National Science Foundation.

Here’s a link to and a citation for the paper,

The sound of recovery: Coral reef restoration success is detectable in the soundscape by Timothy A. C. Lamont, Ben Williams, Lucille Chapuis, Mochyudho E. Prasetya, Marie J. Seraphim, Harry R. Harding, Eleanor B. May, Noel Janetski, Jamaluddin Jompa, David J. Smith, Andrew N. Radford, Stephen D. Simpson. Journal of Applied Ecology DOI: https://doi.org/10.1111/1365-2664.14089 First published: 07 December 2021

This paper is open access.

You can find the Mars Coral Reef Restoration Project here.

Fish talk

There is one item here. This research from Cornell University also features the sounds fish make. Given the current attention to sound, it’s no surprise that the Cornell Lab of Ornithology is involved. In addition to the lab’s main focus, birds, it gathers many other animal sounds too.

A January 27, 2022 Cornell University news release (also on EurekAlert) describes ‘fish talk’,

There’s a whole lot of talking going on beneath the waves. A new study from Cornell University finds that fish are far more likely to communicate with sound than generally thought—and some fish have been doing this for at least 155 million years. These findings were just published in the journal Ichthyology & Herpetology.

“We’ve known for a long time that some fish make sounds,” said lead author Aaron Rice, a researcher at the K. Lisa Yang Center for Conservation Bioacoustics at the Cornell Lab of Ornithology [emphasis mine]. “But fish sounds were always perceived as rare oddities. We wanted to know if these were one-offs or if there was a broader pattern for acoustic communication in fishes.”

The authors looked at a branch of fishes called the ray-finned fishes. These are vertebrates (having a backbone) that comprise 99% of the world’s known species of fishes. They found 175 families that contain two-thirds of fish species that do, or are likely to, communicate with sound. By examining the fish family tree, study authors found that sound was so important, it evolved at least 33 separate times over millions of years.

“Thanks to decades of basic research on the evolutionary relationships of fishes, we can now explore many questions about how different functions and behaviors evolved in the approximately 35,000 known species of fishes,” said co-author William E. Bemis ’76, Cornell professor of ecology and evolutionary biology in the College of Agriculture and Life Sciences. “We’re getting away from a strictly human-centric way of thinking. What we learn could give us some insight on the drivers of sound communication and how it continues to evolve.”

The scientists used three sources of information: existing recordings and scientific papers describing fish sounds; the known anatomy of a fish—whether they have the right tools for making sounds, such as certain bones, an air bladder, and sound-specific muscles; and references in 19th century literature before underwater microphones were invented.
 
“Sound communication is often overlooked within fishes, yet they make up more than half of all living vertebrate species,” said Andrew Bass, co-lead author and the Horace White Professor of Neurobiology and Behavior in the College of Arts and Sciences. “They’ve probably been overlooked because fishes are not easily heard or seen, and the science of underwater acoustic communication has primarily focused on whales and dolphins. But fishes have voices, too!”
 
Listen:

Oyster Toadfish, William Tavolga, Macaulay Library

Longspine squirrelfish, Howard Winn, Macaulay Library

Banded drum, Donald Batz, Macaulay Library

Midshipman, Andrew Bass, Macaulay Library

What are the fish talking about? Pretty much the same things we all talk about—sex and food. Rice says the fish are either trying to attract a mate, defend a food source or territory, or let others know where they are. Even some of the common names for fish are based on the sounds they make, such as grunts, croakers, hog fish, squeaking catfish, trumpeters, and many more.
 
Rice intends to keep tracking the discovery of sound in fish species and add them to his growing database (see supplemental material, Table S1)—a project he began 20 years ago with study co-authors Ingrid Kaatz ’85, MS ’92, and Philip Lobel, a professor of biology at Boston University. Their collaboration has continued and expanded since Rice came to Cornell.
 
“This introduces sound communication to so many more groups than we ever thought,” said Rice. “Fish do everything. They breathe air, they fly, they eat anything and everything—at this point, nothing would surprise me about fishes and the sounds that they can make.”

The research was partly funded by the National Science Foundation, the U.S. Bureau of Ocean Energy Management, the Tontogany Creek Fund, and the Cornell Lab of Ornithology.

I’ve embedded one of the audio files, Oyster Toadfish (William Tavolga) here,

Here’s a link to and a citation for the paper,

Evolutionary Patterns in Sound Production across Fishes by Aaron N. Rice, Stacy C. Farina, Andrea J. Makowski, Ingrid M. Kaatz, Phillip S. Lobel, William E. Bemis, Andrew H. Bass. Ichthyology & Herpetology, 110(1):1-12 (2022) DOI: https://doi.org/10.1643/i2020172 20 January 2022

This paper is open access.

Marine sound libraries

Thanks to Aly Laube’s March 2, 2022 article on DailyHive.com, I learned of Kieran Cox’s work at the University of Victoria and FishSounds (Note: Links have been removed),

Fish have conversations and a group of researchers made a website to document them. 

It’s so much fun to peruse and probably the good news you need. Listen to a Bocon toadfish “boop” or this sablefish tick, which is slightly creepier, but still pretty cool. This streaked gurnard can growl, and this grumpy Atlantic cod can grunt.

The technical term for “fishy conversations” is “marine bioacoustics,” which is what Kieran Cox specializes in. They can be used to track, monitor, and learn more about aquatic wildlife.

The doctor of marine biology at the University of Victoria co-authored an article about fish sounds in Reviews in Fish Biology and Fisheries called “A Quantitative Inventory of Global Soniferous Fish Diversity.”

It presents findings from his process, helping create FishSounds.net. He and his team looked over more than 3,000 documents from 834 studies to put together the library of 989 fish species.

A March 2, 2022 University of Victoria news release provides more information about the work and the research team (Note: Links have been removed),

Fascinating soundscapes exist beneath rivers, lakes and oceans. An unexpected sound source is fish, which make their own unique and entertaining noises, from guttural grunts to high-pitched squeals. Underwater noise is a vital part of marine ecosystems, and thanks to almost 150 years of researchers documenting those sounds, we know hundreds of fish species contribute their distinctive sounds. Although fish are the largest and most diverse group of sound-producing vertebrates in water, there was no record of which fish species make sound and the sounds they produce. For the very first time, there is now a digital place where that data can be freely accessed or contributed to: an online repository, a global inventory of fish sounds.

Kieran Cox co-authored the published article about fish sounds and their value in Reviews in Fish Biology and Fisheries while completing his Ph.D. in marine biology at the University of Victoria. Cox recently began a Liber Ero post-doctoral collaboration with Francis Juanes that aims to integrate marine bioacoustics into the conservation of Canada’s oceans. The Liber Ero program is devoted to promoting applied and evidence-based conservation in Canada.

The international group of researchers, which includes UVic, the University of Florida, Universidade de São Paulo, and the Marine Environmental Research Infrastructure for Data Integration and Application Network (MERIDIAN) [emphasis mine], has launched the first-ever dedicated website focused on fish and their sounds: FishSounds.net. …

According to Cox, “This data is absolutely critical to our efforts. Without it, we were having a one-sided conversation about how noise impacts marine life. Now we can better understand the contributions fish make to soundscapes and examine which species may be most impacted by noise pollution.” Cox, an avid scuba diver, remembers his first dive when the distinct sound of parrotfish eating coral resonated over the reef, “It’s thrilling to know we are now archiving vital ecological information and making it freely available to the public, I feel like my younger self would be very proud of this effort.” …

There’s also a March 2, 2022 University of Florida news release on EurekAlert about FishSounds which adds more details about the work (Note: Links have been removed),

Cows moo. Wolves howl. Birds tweet. And fish, it turns out, make all sorts of ruckus.

“People are often surprised to learn that fish make sounds,” said Audrey Looby, a doctoral candidate at the University of Florida. “But you could make the case that they are as important for understanding fish as bird sounds are for studying birds.”

The sounds of many animals are well documented. Go online, and you’ll find plenty of resources for bird calls and whale songs. However, a global library for fish sounds used to be unheard of.

That’s why Looby, University of Victoria collaborator Kieran Cox and an international team of researchers created FishSounds.net, the first online, interactive fish sounds repository of its kind.

“There’s no standard system yet for naming fish sounds, so our project uses the sound names researchers have come up with,” Looby said. “And who doesn’t love a fish that boops?”

The library’s creators hope to add a feature that will allow people to submit their own fish sound recordings. Other interactive features, such as a world map with clickable fish sound data points, are also in the works.

Fish make sound in many ways. Some, like the toadfish, have evolved organs or other structures in their bodies that produce what scientists call active sounds. Other fish produce incidental or passive sounds, like chewing or splashing, but even passive sounds can still convey information.

Scientists think fish evolved to make sound because sound is an effective way to communicate underwater. Sound travels faster under water than it does through air, and in low visibility settings, it ensures the message still reaches an audience.

“Fish sounds contain a lot of important information,” said Looby, who is pursuing a doctorate in fisheries and aquatic sciences at the UF/IFAS College of Agricultural and Life Sciences. “Fish may communicate about territory, predators, food and reproduction. And when we can match fish sounds to fish species, their sounds are a kind of calling card that can tell us what kinds of fish are in an area and what they are doing.”

Knowing the location and movements of fish species is critical for environmental monitoring, fisheries management and conservation efforts. In the future, marine, estuarine or freshwater ecologists could use hydrophones — special underwater microphones — to gather data on fish species’ whereabouts. But first, they will need to be able to identify which fish they are hearing, and that’s where the fish sounds database can assist.

FishSounds.net emerged from the research team’s efforts to gather and review the existing scientific literature on fish sounds. An article synthesizing that literature has just been published in Reviews in Fish Biology and Fisheries.

In the article, the researchers reviewed scientific reports of fish sounds going back almost 150 years. They found that a little under a thousand fish species are known to make active sounds, and several hundred species were studied for their passive sounds. However, these are probably both underestimates, Cox explained.
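The “calling card” idea (match an unknown hydrophone recording against reference clips of known species) can be sketched very roughly. What follows is a minimal illustration only, assuming mono WAV files that all share the same sample rate; the file paths and reference species are hypothetical, and real identification pipelines of the kind a library like FishSounds.net could support are far more sophisticated.

```python
# Minimal sketch: compare a hydrophone clip to reference recordings by cosine
# similarity of their average power spectra. Assumes WAV files with a shared
# sample rate; the file names below are hypothetical.
import numpy as np
from scipy.io import wavfile
from scipy.signal import welch

def average_spectrum(path: str) -> np.ndarray:
    """Return a normalised power spectrum for a WAV file."""
    rate, audio = wavfile.read(path)
    audio = audio.astype(np.float64)
    if audio.ndim > 1:          # mix stereo down to mono
        audio = audio.mean(axis=1)
    _, psd = welch(audio, fs=rate, nperseg=2048)
    return psd / np.linalg.norm(psd)

def best_match(clip_path: str, references: dict[str, str]) -> str:
    """Return the reference species whose spectrum is most similar to the clip."""
    clip = average_spectrum(clip_path)
    scores = {
        species: float(np.dot(clip, average_spectrum(ref_path)))
        for species, ref_path in references.items()
    }
    return max(scores, key=scores.get)

if __name__ == "__main__":
    refs = {  # hypothetical reference clips
        "oyster toadfish": "refs/oyster_toadfish.wav",
        "longspine squirrelfish": "refs/longspine_squirrelfish.wav",
    }
    print(best_match("hydrophone_clip.wav", refs))
```

In practice researchers use far richer acoustic features and trained classifiers, but the basic loop (record, extract features, compare against a labelled library) is exactly what a global reference collection makes possible.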

Here’s a link to and a citation for the paper,

A quantitative inventory of global soniferous fish diversity by Audrey Looby, Kieran Cox, Santiago Bravo, Rodney Rountree, Francis Juanes, Laura K. Reynolds & Charles W. Martin. Reviews in Fish Biology and Fisheries (2022) DOI: https://doi.org/10.1007/s11160-022-09702-1 Published 18 February 2022

This paper is behind a paywall.

Finally, there’s GLUBS. A comprehensive February 27, 2022 Rockefeller University news release on EurekAlert announces a proposal for the Global Library of Underwater Biological Sounds (GLUBS). Note 1: Links have been removed; Note 2: If you’re interested in the topic, I recommend reading the original February 27, 2022 Rockefeller University news release with its numerous embedded images, audio files, and links to marine audio libraries,

Of the roughly 250,000 known marine species, scientists think all ~126 marine mammal species emit sounds – the ‘thwop’, ‘muah’, and ‘boop’s of a humpback whale, for example, or the boing of a minke whale. Audible too are at least 100 invertebrates, 1,000 of the world’s 34,000 known fish species, and likely many thousands more.

Now a team of 17 experts from nine countries has set a goal [emphasis mine] of gathering on a single platform huge collections of aquatic life’s tell-tale sounds, and expanding it using new enabling technologies – from highly sophisticated ocean hydrophones and artificial intelligence learning systems to phone apps and underwater GoPros used by citizen scientists.

The Global Library of Underwater Biological Sounds, “GLUBS,” will underpin a novel non-invasive, affordable way for scientists to listen in on life in marine, brackish and freshwaters, monitor its changing diversity, distribution and abundance, and identify new species. The acoustic properties of underwater soundscapes can also be used to characterize an ecosystem’s type and condition.

“A database of unidentified sounds is, in some ways, as important as one for known sources,” the scientists say. “As the field progresses, new unidentified sounds will be collected, and more unidentified sounds can be matched to species.”

This can be “particularly important for high-biodiversity systems such as coral reefs, where even a short recording can pick up multiple animal sounds.”

Existing libraries of undersea sounds (several of which are listed with hyperlinks below) “often focus on species of interest that are targeted by the host institute’s researchers,” the paper says, and several are nationally-focussed. Few libraries identify what is missing from their catalogs, which the proposed global library would.

“A global reference library of underwater biological sounds would increase the ability for more researchers in more locations to broaden the number of species assessed within their datasets and to identify sounds they personally do not recognize,” the paper says.

The scientists note that listening to the sea has revealed great whales swimming in unexpected places, new species and new sounds.

With sound, “biologically important areas can be mapped; spawning grounds, essential fish habitat, and migration pathways can be delineated…These and other questions can be queried on broader scales if we have a global catalog of sounds.”

Meanwhile, comparing sounds from a single species across broad areas and times helps understand their diversity and evolution.

Numerous marine animals are cosmopolitan, the paper says, “either as wide-roaming individuals, such as the great whales, or as broadly distributed species, such as many fishes.”

Fin whale calls, for example, can differ among populations in the Northern and Southern hemispheres, and over seasons, whereas the calls of pilot whales are similar worldwide, even though their home ranges do not (or no longer) cross the equator.

Some fishes even seem to develop geographic ‘dialects’ or completely different signal structures among regions, several of which evolve over time.

Madagascar’s skunk anemonefish …, for example, produces different agonistic (fight-related) sounds than those in Indonesia, while differences in the song of humpback whales have been observed across ocean basins.

Phone apps, underwater GoPros and citizen science

Much like BirdNet and FrogID, a library of underwater biological sounds and automated detection algorithms would be useful not only for the scientific, industry and marine management communities but also for users with a general interest.

“Acoustic technology has reached the stage where a hydrophone can be connected to a mobile phone so people can listen to fishes and whales in the rivers and seas around them. Therefore, sound libraries are becoming invaluable to citizen scientists and the general public,” the paper adds.

And citizen scientists could be of great help to the library by uploading the results of, for example, the River Listening app (www.riverlistening.com), which encourages the public to listen to and record fish sounds in rivers and coastal waters.

Low-cost hydrophones and recording systems (such as the Hydromoth) are increasingly available and waterproof recreational recording systems (such as GoPros) can also collect underwater biological sounds.

Here’s a link to and a citation for the paper,

Sounding the Call for a Global Library of Underwater Biological Sounds by Miles J. G. Parsons, Tzu-Hao Lin, T. Aran Mooney, Christine Erbe, Francis Juanes, Marc Lammers, Songhai Li, Simon Linke, Audrey Looby, Sophie L. Nedelec, Ilse Van Opzeeland, Craig Radford, Aaron N. Rice, Laela Sayigh, Jenni Stanley, Edward Urban and Lucia Di Iorio. Front. Ecol. Evol., 08 February 2022 DOI: https://doi.org/10.3389/fevo.2022.810156 Published: 08 February 2022.

This paper appears to be open access.

Tough colour and the flower beetle

The flower beetle Torynorrhina flammea. [downloaded from https://www.nanowerk.com/nanotechnology-news2/newsid=58269.php]

That is one gorgeous beetle and a June 17, 2021 news item on Nanowerk reveals that it features in a structural colour story (i.e., how structures rather than pigments create colour),

The unique mechanical and optical properties found in the exoskeleton of a humble Asian beetle have the potential to offer a fascinating new insight into how to develop new, effective bio-inspired technologies.

Pioneering new research by a team of international scientists, including Professor Pete Vukusic from the University of Exeter, has revealed a distinctive, and previously unknown property within the carapace of the flower beetle – a member of the scarab beetle family.

The study showed that the beetle has small micropillars within the carapace – or the upper section of the exoskeleton – that give the insect both strength and flexibility to withstand damage very effectively.

Crucially, these micropillars are incorporated into highly regular layering in the exoskeleton that concurrently give the beetle an intensely bright metallic colour appearance.

A June 18, 2021 University of Exeter press release (also on EurekAlert but published June 17, 2021) delves further into the researchers’ new insights,

For this new study, the scientists used sophisticated modelling techniques to determine which of the two functions – very high mechanical strength or conspicuously bright colour – were more important to the survival of the beetle.

They found that although these micropillars do create a highly enhanced toughness of the beetle shell, they were most beneficial for optimising the scattering of coloured light that generates its conspicuous appearance.

The research is published this week in the leading journal, Proceedings of the National Academy of Sciences, PNAS.

Professor Vukusic, one of three leads of the research along with Professor Li at Virginia Tech and Professor Kolle at MIT [Massachusetts Institute of Technology], said: “The astonishing insights generated by this research have only been possible through close collaborative work between Virginia Tech, MIT, Harvard and Exeter, in labs that trailblaze the fields of materials, mechanics and optics. Our follow-up venture to make use of these bio-inspired principles will be an even more exciting journey.”

The seeds of the pioneering research were sown more than 16 years ago as part of a short project created by Professor Vukusic in the Exeter undergraduate Physics labs. Those early tests and measurements, made by enthusiastic undergraduate students, revealed the possibility of intriguing multifunctionality.

The original students examined the form and structure of beetles’ carapace to try to understand the simple origin of their colour. They noticed for the first time, however, the presence of strength-inducing micropillars.

Professor Vukusic ultimately carried these initial findings to collaborators Professor Ling Li at Virginia Tech and Professor Mathias Kolle at Harvard and then MIT, who specialise in the materials sciences and applied optics. Using much more sophisticated measurement and modelling techniques, the combined research team were also able to confirm the unique role played by the micropillars in enhancing the beetles’ strength and toughness without compromising their intense metallic colour.

The results from the study could also help inspire a new generation of bio-inspired materials, as well as the more traditional evolutionary research.

By understanding which of the functions provides the greater benefit to these beetles, scientists can develop new techniques to replicate and reproduce the exoskeleton structure, while ensuring that it has brilliant colour appearance with highly effective strength and toughness.

Professor Vukusic added: “Such natural systems as these never fail to impress with the way in which they perform, be it optical, mechanical or in another area of function. The way in which their optical or mechanical properties appear highly tolerant of all manner of imperfections too, continues to offer lessons to us about scientific and technological avenues we absolutely should explore. There is exciting science ahead of us on this journey.”

Here’s a link to and a citation for the paper,

Microstructural design for mechanical–optical multifunctionality in the exoskeleton of the flower beetle Torynorrhina flammea by Zian Jia, Matheus C. Fernandes, Zhifei Deng, Ting Yang, Qiuting Zhang, Alfie Lethbridge, Jie Yin, Jae-Hwang Lee, Lin Han, James C. Weaver, Katia Bertoldi, Joanna Aizenberg, Mathias Kolle, Pete Vukusic, and Ling Li. PNAS June 22, 2021 118 (25) e2101017118; DOI: https://doi.org/10.1073/pnas.2101017118

This paper is behind a paywall.

Council of Canadian Academies and its expert panel for the AI for Science and Engineering project

There seems to be an explosion (metaphorically and only by Canadian standards) of interest in public perceptions/engagement/awareness of artificial intelligence (see my March 29, 2021 posting “Canada launches its AI dialogues”; these dialogues run until April 30, 2021; plus there’s this April 6, 2021 posting “UNESCO’s Call for Proposals to highlight blind spots in AI Development open ’til May 2, 2021,” which was launched in cooperation with Mila-Québec Artificial Intelligence Institute).

Now there’s this: in a March 31, 2020 Council of Canadian Academies (CCA) news release, four new projects were announced. (Admittedly these are not ‘public engagement’ exercises as such, but the reports are publicly available and utilized by policymakers.) These are the two projects of most interest to me,

Public Safety in the Digital Age

Information and communications technologies have profoundly changed almost every aspect of life and business in the last two decades. While the digital revolution has brought about many positive changes, it has also created opportunities for criminal organizations and malicious actors to target individuals, businesses, and systems.

This assessment will examine promising practices that could help to address threats to public safety related to the use of digital technologies while respecting human rights and privacy.

Sponsor: Public Safety Canada

AI for Science and Engineering

The use of artificial intelligence (AI) and machine learning in science and engineering has the potential to radically transform the nature of scientific inquiry and discovery and produce a wide range of social and economic benefits for Canadians. But, the adoption of these technologies also presents a number of potential challenges and risks.

This assessment will examine the legal/regulatory, ethical, policy and social challenges related to the use of AI technologies in scientific research and discovery.

Sponsor: National Research Council Canada [NRC] (co-sponsors: CIFAR [Canadian Institute for Advanced Research], CIHR [Canadian Institutes of Health Research], NSERC [Natural Sciences and Engineering Research Council], and SSHRC [Social Sciences and Humanities Research Council])

For today’s posting the focus will be on the AI project, specifically, the April 19, 2021 CCA news release announcing the project’s expert panel,

The Council of Canadian Academies (CCA) has formed an Expert Panel to examine a broad range of factors related to the use of artificial intelligence (AI) technologies in scientific research and discovery in Canada. Teresa Scassa, SJD, Canada Research Chair in Information Law and Policy at the University of Ottawa, will serve as Chair of the Panel.  

“AI and machine learning may drastically change the fields of science and engineering by accelerating research and discovery,” said Dr. Scassa. “But these technologies also present challenges and risks. A better understanding of the implications of the use of AI in scientific research will help to inform decision-making in this area and I look forward to undertaking this assessment with my colleagues.”

As Chair, Dr. Scassa will lead a multidisciplinary group with extensive expertise in law, policy, ethics, philosophy, sociology, and AI technology. The Panel will answer the following question:

What are the legal/regulatory, ethical, policy and social challenges associated with deploying AI technologies to enable scientific/engineering research design and discovery in Canada?

“We’re delighted that Dr. Scassa, with her extensive experience in AI, the law and data governance, has taken on the role of Chair,” said Eric M. Meslin, PhD, FRSC, FCAHS, President and CEO of the CCA. “I anticipate the work of this outstanding panel will inform policy decisions about the development, regulation and adoption of AI technologies in scientific research, to the benefit of Canada.”

The CCA was asked by the National Research Council of Canada (NRC), along with co-sponsors CIFAR, CIHR, NSERC, and SSHRC, to address the question. More information can be found here.

The Expert Panel on AI for Science and Engineering:

Teresa Scassa (Chair), SJD, Canada Research Chair in Information Law and Policy, University of Ottawa, Faculty of Law (Ottawa, ON)

Julien Billot, CEO, Scale AI (Montreal, QC)

Wendy Hui Kyong Chun, Canada 150 Research Chair in New Media and Professor of Communication, Simon Fraser University (Burnaby, BC)

Marc Antoine Dilhac, Professor (Philosophy), University of Montreal; Director of Ethics and Politics, Centre for Ethics (Montréal, QC)

B. Courtney Doagoo, AI and Society Fellow, Centre for Law, Technology and Society, University of Ottawa; Senior Manager, Risk Consulting Practice, KPMG Canada (Ottawa, ON)

Abhishek Gupta, Founder and Principal Researcher, Montreal AI Ethics Institute (Montréal, QC)

Richard Isnor, Associate Vice President, Research and Graduate Studies, St. Francis Xavier University (Antigonish, NS)

Ross D. King, Professor, Chalmers University of Technology (Göteborg, Sweden)

Sabina Leonelli, Professor of Philosophy and History of Science, University of Exeter (Exeter, United Kingdom)

Raymond J. Spiteri, Professor, Department of Computer Science, University of Saskatchewan (Saskatoon, SK)

Who is the expert panel?

Putting together a Canadian panel is an interesting problem, especially when you’re trying to find people with expertise who can also represent various viewpoints, both professionally and regionally. Then, there are gender, racial, linguistic, urban/rural, and ethnic considerations.

Statistics

Eight of the panelists could be said to be representing various regions of Canada. Five of those eight panelists are based in central Canada, specifically, Ontario (Ottawa) or Québec (Montréal). The sixth panelist is based in Atlantic Canada (Nova Scotia), the seventh panelist is based in the Prairies (Saskatchewan), and the eighth panelist is based in western Canada, (Vancouver, British Columbia).

The two panelists bringing an international perspective to this project are both based in Europe, specifically, Sweden and the UK.

(sigh) It would be good to have representation from another part of the world. Asia springs to mind, as researchers in that region are very advanced in their AI research and applications, meaning that their experts and ethicists are likely to have valuable insights.

Four of the ten panelists are women, which is closer to equal representation than some of the other CCA panels I’ve looked at.

As for Indigenous and BIPOC representation, unless one or more of the panelists chooses to self-identify in that fashion, I cannot make any comments. It should be noted that more than one expert panelist focuses on social justice and/or bias in algorithms.

Network of relationships

As you can see, the CCA descriptions for the individual members of the expert panel are a little brief. So, I did a little digging and, in my searches, noticed what seems to be a pattern of relationships among some of these experts. In particular, take note of the Canadian Institute for Advanced Research (CIFAR) and the AI Advisory Council of the Government of Canada.

Individual panelists

Teresa Scassa (Ontario), whose SJD designation signifies a research doctorate in law, chairs this panel. Offhand, of the 10 or so panels I’ve reviewed, I can recall only one or two others being chaired by women. In addition to her profile page at the University of Ottawa, she hosts her own blog featuring posts such as “How Might Bill C-11 Affect the Outcome of a Clearview AI-type Complaint?” She writes clearly (I didn’t see any jargon) for an audience that is somewhat informed on the topic.

Along with Dilhac, Teresa Scassa is a member of the AI Advisory Council of the Government of Canada. More about that group when you read Dilhac’s description.

Julien Billot (Québec) has provided a profile on LinkedIn and you can augment your view of M. Billot with this profile from the CreativeDestructionLab (CDL),

Mr. Billot is a member of the faculty at HEC Montréal [graduate business school of the Université de Montréal] as an adjunct professor of management and the lead for the CreativeDestructionLab (CDL) and NextAi program in Montreal.

Julien Billot has been President and Chief Executive Officer of Yellow Pages Group Corporation (Y.TO) in Montreal, Quebec. Previously, he was Executive Vice President, Head of Media and Member of the Executive Committee of Solocal Group (formerly PagesJaunes Groupe), the publicly traded and incumbent local search business in France. Earlier experience includes serving as CEO of the digital and new business group of Lagardère Active, a multimedia branch of Lagardère Group and 13 years in senior management positions at France Telecom, notably as Chief Marketing Officer for Orange, the company’s mobile subsidiary.

Mr. Billot is a graduate of École Polytechnique (Paris) and from Telecom Paris Tech. He holds a postgraduate diploma (DEA) in Industrial Economics from the University of Paris-Dauphine.

Wendy Hui Kyong Chun (British Columbia) has a profile on the Simon Fraser University (SFU) website, which provided one of the more interesting (to me personally) biographies,

Wendy Hui Kyong Chun is the Canada 150 Research Chair in New Media at Simon Fraser University, and leads the Digital Democracies Institute which was launched in 2019. The Institute aims to integrate research in the humanities and data sciences to address questions of equality and social justice in order to combat the proliferation of online “echo chambers,” abusive language, discriminatory algorithms and mis/disinformation by fostering critical and creative user practices and alternative paradigms for connection. It has four distinct research streams all led by Dr. Chun: Beyond Verification which looks at authenticity and the spread of disinformation; From Hate to Agonism, focusing on fostering democratic exchange online; Desegregating Network Neighbourhoods, combatting homophily across platforms; and Discriminating Data: Neighbourhoods, Individuals and Proxies, investigating the centrality of race, gender, class and sexuality [emphasis mine] to big data and network analytics.

I’m glad to see someone who has focused on “… the centrality of race, gender, class and sexuality to big data and network analytics.” Even more interesting to me was this from her CV (curriculum vitae),

Professor, Department of Modern Culture and Media, Brown University, July 2010-June 2018

• Affiliated Faculty, Multimedia & Electronic Music Experiments (MEME), Department of Music, 2017.

• Affiliated Faculty, History of Art and Architecture, March 2012-

• Graduate Field Faculty, Theatre Arts and Performance Studies, Sept 2008- [sic]

….

[all emphases mine]

And these are some of her credentials,

Ph.D., English, Princeton University, 1999.
• Certificate, School of Criticism and Theory, Dartmouth College, Summer 1995.

M.A., English, Princeton University, 1994.

B.A.Sc., Systems Design Engineering and English, University of Waterloo, Canada, 1992.
• first class honours and a Senate Commendation for Excellence for being the first student to graduate from the School of Engineering with a double major

It’s about time the CCA started integrating some kind of arts perspective into their projects. (Although, I can’t help wondering if this was by accident rather than by design.)

Marc Antoine Dilhac is an associate professor at l’Université de Montréal; like Billot, he graduated from a French university, in his case, the Sorbonne. Here’s more from Dilhac’s profile on the Mila website,

Marc-Antoine Dilhac (Ph.D., Paris 1 Panthéon-Sorbonne) is a professor of ethics and political philosophy at the Université de Montréal and an associate member of Mila – Quebec Artificial Intelligence Institute. He currently holds a CIFAR [Canadian Institute for Advanced Research] Chair in AI ethics (2019-2024), and was previously Canada Research Chair in Public Ethics and Political Theory 2014-2019. He specialized in theories of democracy and social justice, as well as in questions of applied ethics. He published two books on the politics of toleration and inclusion (2013, 2014). His current research focuses on the ethical and social impacts of AI and issues of governance and institutional design, with a particular emphasis on how new technologies are changing public relations and political structures.

In 2017, he instigated the project of the Montreal Declaration for a Responsible Development of AI and chaired its scientific committee. In 2020, as director of Algora Lab, he led an international deliberation process as part of UNESCO’s consultation on its recommendation on the ethics of AI.

In 2019, he founded Algora Lab, an interdisciplinary laboratory advancing research on the ethics of AI and developing a deliberative approach to the governance of AI and digital technologies. He is co-director of Deliberation at the Observatory on the social impacts of AI and digital technologies (OBVIA), and contributes to the OECD Policy Observatory (OECD.AI) as a member of its expert network ONE.AI.

He sits on the AI Advisory Council of the Government of Canada and co-chairs its Working Group on Public Awareness.

Formerly known simply as Mila, Mila – Quebec Artificial Intelligence Institute is a beneficiary of the Pan-Canadian Artificial Intelligence Strategy, created in the 2017 Canadian federal budget. The strategy named CIFAR as the hub that would benefit and would also distribute funds for artificial intelligence research to (mainly) three institutes: Mila in Montréal, the Vector Institute in Toronto, and the Alberta Machine Intelligence Institute (AMII; Edmonton).

Consequently, Dilhac’s involvement with CIFAR, one of the co-sponsors for this future CCA report, is not unexpected, but when added to his presence on the AI Advisory Council of the Government of Canada and his role as co-chair of its Working Group on Public Awareness, you get a sense of just how small the Canadian AI ethics and public awareness community is.

Add in CIFAR’s Open Dialogue: AI in Canada series (ongoing until April 30, 2021) which is being held in partnership with the AI Advisory Council of the Government of Canada (see my March 29, 2021 posting for more details about the dialogues) amongst other familiar parties and you see a web of relations so tightly interwoven that if you could produce masks from it you’d have superior COVID-19 protection to N95 masks.

These kinds of connections are understandable and I have more to say about them in my final comments.

B. Courtney Doagoo has a profile page at the University of Ottawa, which fills in a few information gaps,

As a Fellow, Dr. Doagoo develops her research on the social, economic and cultural implications of AI with a particular focus on the role of laws, norms and policies [emphasis mine]. She also notably advises Dr. Florian Martin-Bariteau, CLTS Director, in the development of a new research initiative on those topical issues, and Dr. Jason Millar in the development of the Canadian Robotics and Artificial Intelligence Ethical Design Lab (CRAiEDL).

Dr. Doagoo completed her Ph.D. in Law at the University of Ottawa in 2017. In her interdisciplinary research, she used empirical methods to learn about and describe the use of intellectual property law and norms in creative communities. Following her doctoral research, she joined the World Intellectual Property Organization’s Coordination Office in New York as a legal intern and contributed to developing the joint initiative on gender and innovation in collaboration with UNESCO and UN Women. She later joined the International Law Research Program at the Centre for International Governance Innovation as a Post-Doctoral Fellow, where she conducted research in technology and law focusing on intellectual property law, artificial intelligence and data governance.

Dr. Doagoo completed her LL.L. at the University of Ottawa, and LL.M. in Intellectual Property Law at the Benjamin N. Cardozo School of Law [a law school at Yeshiva University in New York City].  In between her academic pursuits, Dr. Doagoo has been involved with different technology start-ups, including the one she is currently leading aimed at facilitating access to legal services. She’s also an avid lover of the arts and designed a course on Arts and Cultural Heritage Law taught during her doctoral studies at the University of Ottawa, Faculty of Law.

It’s probably because I don’t know enough, but this “the role of laws, norms and policies” seems bland to the point of being meaningless. The rest is more informative and brings it back to the arts with Wendy Hui Kyong Chun at SFU.

Doagoo’s LinkedIn profile offers an unexpected link to this expert panel’s chairperson, Teresa Scassa (in addition to both being lawyers whose specialties are in related fields, and both being faculty members or fellows at the University of Ottawa),

Soft-funded Research Bursary

Dr. Teresa Scassa

2014

I’m not suggesting any conspiracies; it’s simply that this is a very small community with much of it located in central and eastern Canada and possible links into the US. For example, Wendy Hui Kyong Chun, prior to her SFU appointment in December 2018, worked and studied in the eastern US for over 25 years after starting her academic career at the University of Waterloo (Ontario).

Abhishek Gupta provided me with a challenging search. His LinkedIn profile yielded some details (I’m not convinced the man sleeps). Note: I have made some formatting changes and removed the location, ‘Montréal area’, from some descriptions.

Experience

Software Engineer II – Machine Learning
Microsoft

Jul 2018 – Present – 2 years 10 months

Machine Learning – Commercial Software Engineering team

Serves on the CSE Responsible AI Board

Founder and Principal Researcher
Montreal AI Ethics Institute

May 2018 – Present – 3 years

Institute creating tangible and practical research in the ethical, safe and inclusive development of AI. For more information, please visit https://montrealethics.ai

Visiting AI Ethics Researcher, Future of Work, International Visitor Leadership Program
U.S. Department of State

Aug 2019 – Present – 1 year 9 months

Selected to represent Canada on the future of work

Responsible AI Lead, Data Advisory Council
Northwest Commission on Colleges and Universities

Jun 2020 – Present – 11 months

Faculty Associate, Frankfurt Big Data Lab
Goethe University

Mar 2020 – Present – 1 year 2 months

Advisor for the Z-inspection project

Associate Member
LF AI Foundation

May 2020 – Present – 1 year

Author
MIT Technology Review

Sep 2020 – Present – 8 months

Founding Editorial Board Member, AI and Ethics Journal
Springer Nature

Jul 2020 – Present – 10 months

Education

McGill University, Bachelor of Science (BS), Computer Science

2012 – 2015

Exhausting, eh? He also has an eponymous website and the Montreal AI Ethics Institute can be found here, where Gupta and his colleagues are “Democratizing AI ethics literacy.” My hat’s off to Gupta; getting on an expert panel for the CCA is quite an achievement for someone without the usual academic and/or industry trappings.

Richard Isnor, based in Nova Scotia and associate vice president of research & graduate studies at St. Francis Xavier University (StFX), seems to have some connection to northern Canada (see the reference to Nunavut Research Institute below); he’s certainly well connected to various federal government agencies according to his profile page,

Prior to joining StFX, he was Manager of the Atlantic Regional Office for the Natural Sciences and Engineering Research Council of Canada (NSERC), based in Moncton, NB.  Previously, he was Director of Innovation Policy and Science at the International Development Research Centre in Ottawa and also worked for three years with the National Research Council of Canada [NRC] managing Biotechnology Research Initiatives and the NRC Genomics and Health Initiative.

Richard holds a D. Phil. in Science and Technology Policy Studies from the University of Sussex, UK; a Master’s in Environmental Studies from Dalhousie University [Nova Scotia]; and a B. Sc. (Hons) in Biochemistry from Mount Allison University [New Brunswick]. His primary interest is in science policy and the public administration of research; he has worked in science and technology policy or research administrative positions for Environment Canada, Natural Resources Canada, the Privy Council Office, as well as the Nunavut Research Institute. [emphasis mine]

I don’t know what Dr. Isnor’s work is like but I’m hopeful he (along with Spiteri) will be able to provide a less ‘big city’ perspective to the proceedings.

(For those unfamiliar with Canadian cities: Montreal [three expert panelists] is the second largest city in the country; Ottawa [two expert panelists], as the capital, has an outsize view of itself; and Vancouver [one expert panelist] is the third or fourth largest city in the country, for a total of six big-city representatives out of eight Canadian expert panelists.)

Ross D. King, professor of machine intelligence at Sweden’s Chalmers University of Technology, might be best known for Adam, also known as Robot Scientist. Here’s more about King, from his Wikipedia entry (Note: Links have been removed),

King completed a Bachelor of Science degree in Microbiology at the University of Aberdeen in 1983 and went on to study for a Master of Science degree in Computer Science at the University of Newcastle in 1985. Following this, he completed a PhD at The Turing Institute [emphasis mine] at the University of Strathclyde in 1989[3] for work on developing machine learning methods for protein structure prediction.[7]

King’s research interests are in the automation of science, drug design, AI, machine learning and synthetic biology.[8][9] He is probably best known for the Robot Scientist[4][10][11][12][13][14][15][16][17] project which has created a robot that can:

hypothesize to explain observations

devise experiments to test these hypotheses

physically run the experiments using laboratory robotics

interpret the results from the experiments

repeat the cycle as required

The Robot Scientist Wikipedia entry has this to add,

… a laboratory robot created and developed by a group of scientists including Ross King, Kenneth Whelan, Ffion Jones, Philip Reiser, Christopher Bryant, Stephen Muggleton, Douglas Kell and Steve Oliver.[2][6][7][8][9][10]

… Adam became the first machine in history to have discovered new scientific knowledge independently of its human creators.[5][17][18]

Sabina Leonelli, professor of philosophy and history of science at the University of Exeter, is the only person for whom I found a Twitter feed (@SabinaLeonelli). Here’s a bit more from her Wikipedia entry (Note: Links have been removed),

Originally from Italy, Leonelli moved to the UK for a BSc degree in History, Philosophy and Social Studies of Science at University College London and a MSc degree in History and Philosophy of Science at the London School of Economics. Her doctoral research was carried out in the Netherlands at the Vrije Universiteit Amsterdam with Henk W. de Regt and Hans Radder. Before joining the Exeter faculty, she was a research officer under Mary S. Morgan at the Department of Economic History of the London School of Economics.

Leonelli is the Co-Director of the Exeter Centre for the Study of the Life Sciences (Egenis)[3] and a Turing Fellow at the Alan Turing Institute [emphases mine] in London.[4] She is also Editor-in-Chief of the international journal History and Philosophy of the Life Sciences[5] and Associate Editor for the Harvard Data Science Review.[6] She serves as External Faculty for the Konrad Lorenz Institute for Evolution and Cognition Research.[7]

Notice that Ross King and Sabina Leonelli both have links to The Alan Turing Institute (“We believe data science and artificial intelligence will change the world”), although the institute’s link to the University of Strathclyde (Scotland) where King studied seems a bit tenuous.

Do check out Leonelli’s profile at the University of Exeter as it’s comprehensive.

Raymond J. Spiteri, professor and director of the Centre for High Performance Computing, Department of Computer Science at the University of Saskatchewan, has a profile page at the university the likes of which I haven’t seen in several years, perhaps due to its 2013 origins. His other university profile page can best be described as minimalist.

His Canadian Applied and Industrial Mathematics Society (CAIMS) biography page could be described as less charming (to me) than the 2013 profile but it is easier to read,

Raymond Spiteri is a Professor in the Department of Computer Science at the University of Saskatchewan. He performed his graduate work as a member of the Institute for Applied Mathematics at the University of British Columbia. He was a post-doctoral fellow at McGill University and held faculty positions at Acadia University and Dalhousie University before joining USask in 2004. He serves on the Executive Committee of the WestGrid High-Performance Computing Consortium with Compute/Calcul Canada. He was a MITACS Project Leader from 2004-2012 and served in the role of Mitacs Regional Scientific Director for the Prairie Provinces between 2008 and 2011.

Spiteri’s areas of research are numerical analysis, scientific computing, and high-performance computing. His area of specialization is the analysis and implementation of efficient time-stepping methods for differential equations. He actively collaborates with scientists, engineers, and medical experts of all flavours. He also has a long record of industry collaboration with companies such as IBM and Boeing.

Spiteri has been lifetime member of CAIMS/SCMAI since 2000. He helped co-organize the 2004 Annual Meeting at Dalhousie and served on the Cecil Graham Doctoral Dissertation Award Committee from 2005 to 2009, acting as chair from 2007. He has been an active participant in CAIMS, serving several times on the Scientific Committee for the Annual Meeting, as well as frequently attending and organizing mini-symposia. Spiteri believes it is important for applied mathematics to play a major role in the efforts to meet Canada’s most pressing societal challenges, including the sustainability of our healthcare system, our natural resources, and the environment.

A last look at Spiteri’s 2013 profile gave me this (Note: Links have been removed),

Another biographical note: I obtained my B.Sc. degree in Applied Mathematics from the University of Western Ontario [also known as, Western University] in 1990. My advisor was Dr. M.A.H. (Paddy) Nerenberg, after whom the Nerenberg Lecture Series is named. Here is an excerpt from the description, put here in his honour, as a model for the rest of us:

The Nerenberg Lecture Series is first and foremost about people and ideas. Knowledge is the true treasure of humanity, accrued and passed down through the generations. Some of it, particularly science and its language, mathematics, is closed in practice to many because of technical barriers that can only be overcome at a high price. These technical barriers form part of the remarkable fractures that have formed in our legacy of knowledge. We are so used to those fractures that they have become almost invisible to us, but they are a source of profound confusion about what is known.

The Nerenberg Lecture is named after the late Morton (Paddy) Nerenberg, a much-loved professor and researcher born on 17 March– hence his nickname. He was a Professor at Western for more than a quarter century, and a founding member of the Department of Applied Mathematics there. A successful researcher and accomplished teacher, he believed in the unity of knowledge, that scientific and mathematical ideas belong to everyone, and that they are of human importance. He regretted that they had become inaccessible to so many, and anticipated serious consequences from it. [emphases mine] The series honors his appreciation for the democracy of ideas. He died in 1993 at the age of 57.

So, we have the expert panel.

Thoughts about the panel and the report

As I’ve noted previously here and elsewhere, assembling any panels whether they’re for a single event or for a longer term project such as producing a report is no easy task. Looking at the panel, there’s some arts representation, smaller urban centres are also represented, and some of the members have experience in more than one region in Canada. I was also much encouraged by Spiteri’s acknowledgement of his advisor’s, Morton (Paddy) Nerenberg, passionate commitment to the idea that “scientific and mathematical ideas belong to everyone.”

Kudos to the Council of Canadian Academies (CCA) organizers.

That said, this looks like an exceptionally Eurocentric panel. Unusually, there’s no representation from the US, unless you count Chun, who has spent the majority of her career in the US with only a little over two years at Simon Fraser University on Canada’s West Coast.

There’s weakness to a strategy (none of the ten or so CCA reports I’ve reviewed here deviates from this pattern) that seems to favour international participants from Europe and/or the US (also, sometimes, Australia/New Zealand). This leaves out giant chunks of the international community and brings us dangerously close to an echo chamber.

The same problem exists regionally and with various Canadian communities, which are acknowledged more in spirit than in actuality, e.g., the North, rural, indigenous, arts, etc.

Getting back to the ‘big city’ emphasis noted earlier, there are two people from Ottawa and three from Montreal; half of the expert panel lives within a two-hour train ride of each other. (For those who don’t know, that’s close by Canadian standards. For comparison, a train ride from Vancouver to Seattle [US] is about four hours, a short trip when compared to a 24-hour train trip to the closest large Canadian cities.)

I appreciate that it’s not a simple problem, but my concern is that it’s never acknowledged by the CCA. Perhaps they could include a section in the report acknowledging the issues and how the expert panel attempted to address them; in other words, transparency. Coincidentally, transparency and trust, which are closely related, have both been identified as big issues with artificial intelligence.

As for solutions, these reports are sent to external reviewers and, before the report is written, outside experts are sometimes brought in as the panel readies itself. Those are two opportunities already afforded by the CCA’s current processes.

Anyway, good luck with the report and I look forward to seeing it.

Electronics begone! Enter: the light-based brainlike computing chip

At this point it’s possible I’m wrong, but I think this is the first ‘memristor-type’ device (also called a neuromorphic chip) based on light rather than electronics that I’ve featured here on this blog. Technically speaking, it isn’t a memristor, but it has the same properties, so it is a neuromorphic chip.

Caption: The optical microchips that the researchers are working on developing are about the size of a one-cent piece. Credit: WWU Muenster – Peter Leßmann

A May 8, 2019 news item on Nanowerk announces this new approach to neuromorphic hardware (Note: A link has been removed),

Researchers from the Universities of Münster (Germany), Oxford and Exeter (both UK) have succeeded in developing a piece of hardware which could pave the way for creating computers which resemble the human brain.

The scientists produced a chip containing a network of artificial neurons that works with light and can imitate the behaviour of neurons and their synapses. The network is able to “learn” information and use this as a basis for computing and recognizing patterns. As the system functions solely with light and not with electrons, it can process data many times faster than traditional systems. …

A May 8, 2019 University of Münster press release (also on EurekAlert), which originated the news item, reveals the full story,

A technology that functions like a brain? In these times of artificial intelligence, this no longer seems so far-fetched – for example, when a mobile phone can recognise faces or languages. With more complex applications, however, computers still quickly come up against their own limitations. One of the reasons for this is that a computer traditionally has separate memory and processor units – the consequence of which is that all data have to be sent back and forth between the two. In this respect, the human brain is way ahead of even the most modern computers because it processes and stores information in the same place – in the synapses, or connections between neurons, of which there are a million-billion in the brain. An international team of researchers from the Universities of Münster (Germany), Oxford and Exeter (both UK) have now succeeded in developing a piece of hardware which could pave the way for creating computers which resemble the human brain. The scientists managed to produce a chip containing a network of artificial neurons that works with light and can imitate the behaviour of neurons and their synapses.

The researchers were able to demonstrate that such an optical neurosynaptic network is able to “learn” information and use this as a basis for computing and recognizing patterns – just as a brain can. As the system functions solely with light and not with traditional electrons, it can process data many times faster. “This integrated photonic system is an experimental milestone,” says Prof. Wolfram Pernice from Münster University and lead partner in the study. “The approach could be used later in many different fields for evaluating patterns in large quantities of data, for example in medical diagnoses.” The study is published in the latest issue of the “Nature” journal.

The story in detail – background and method used

Most of the existing approaches relating to so-called neuromorphic networks are based on electronics, whereas optical systems – in which photons, i.e. light particles, are used – are still in their infancy. The principle which the German and British scientists have now presented works as follows: optical waveguides that can transmit light and can be fabricated into optical microchips are integrated with so-called phase-change materials – which are already found today on storage media such as re-writable DVDs. These phase-change materials are characterised by the fact that they change their optical properties dramatically, depending on whether they are crystalline – when their atoms arrange themselves in a regular fashion – or amorphous – when their atoms organise themselves in an irregular fashion. This phase-change can be triggered by light if a laser heats the material up. “Because the material reacts so strongly, and changes its properties dramatically, it is highly suitable for imitating synapses and the transfer of impulses between two neurons,” says lead author Johannes Feldmann, who carried out many of the experiments as part of his PhD thesis at the Münster University.

In their study, the scientists succeeded for the first time in merging many nanostructured phase-change materials into one neurosynaptic network. The researchers developed a chip with four artificial neurons and a total of 60 synapses. The structure of the chip – consisting of different layers – was based on the so-called wavelength division multiplex technology, which is a process in which light is transmitted on different channels within the optical nanocircuit.

In order to test the extent to which the system is able to recognise patterns, the researchers “fed” it with information in the form of light pulses, using two different algorithms of machine learning. In this process, an artificial system “learns” from examples and can, ultimately, generalise them. In the case of the two algorithms used – both in so-called supervised and in unsupervised learning – the artificial network was ultimately able, on the basis of given light patterns, to recognise a pattern being sought – one of which was four consecutive letters.

“Our system has enabled us to take an important step towards creating computer hardware which behaves similarly to neurons and synapses in the brain and which is also able to work on real-world tasks,” says Wolfram Pernice. “By working with photons instead of electrons we can exploit to the full the known potential of optical technologies – not only in order to transfer data, as has been the case so far, but also in order to process and store them in one place,” adds co-author Prof. Harish Bhaskaran from the University of Oxford.

A very specific example is that with the aid of such hardware cancer cells could be identified automatically. Further work will need to be done, however, before such applications become reality. The researchers need to increase the number of artificial neurons and synapses and increase the depth of neural networks. This can be done, for example, with optical chips manufactured using silicon technology. “This step is to be taken in the EU joint project ‘Fun-COMP’ by using foundry processing for the production of nanochips,” says co-author and leader of the Fun-COMP project, Prof. C. David Wright from the University of Exeter.
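Before moving on to the paper, here’s a toy numerical sketch of the ‘phase-change synapse’ idea described above (my own simplification for illustration, not the authors’ device model): each synapse is represented by an optical transmission value between 0 and 1, standing in for how crystalline or amorphous the phase-change cell is; incoming light pulses are attenuated by those transmissions, summed, and thresholded to decide whether the neuron fires, with a crude unsupervised update rule thrown in,

```python
# Toy model of an optical neuron with phase-change-material (PCM) "synapses".
# An illustrative simplification only, not the Feldmann et al. device model;
# all numbers below are invented.
import numpy as np

rng = np.random.default_rng(0)

N_SYNAPSES = 60      # the reported chip had four neurons and a total of 60 synapses
THRESHOLD = 0.25     # arbitrary firing threshold (normalized power), chosen for the toy

# Each synaptic "weight" is the transmission of a PCM cell: roughly, more
# crystalline means lower transmission, more amorphous means higher transmission.
transmission = rng.uniform(0.0, 1.0, N_SYNAPSES)

def neuron_fires(input_pulses, weights, threshold=THRESHOLD):
    """Weight incoming pulse powers by PCM transmission, sum, and threshold."""
    total_power = np.sum(input_pulses * weights) / len(weights)
    return total_power > threshold        # True = output light pulse ("spike")

def unsupervised_update(weights, input_pulses, fired, rate=0.05):
    """Crude Hebbian-style rule: strengthen synapses that saw light when the neuron fired."""
    if fired:
        weights = np.clip(weights + rate * input_pulses, 0.0, 1.0)
    return weights

# Feed the neuron a few random patterns of light pulses and let it adapt.
for step in range(5):
    pulses = rng.integers(0, 2, N_SYNAPSES).astype(float)   # pattern of 0/1 light pulses
    fired = neuron_fires(pulses, transmission)
    transmission = unsupervised_update(transmission, pulses, fired)
    print(f"step {step}: fired={fired}, mean transmission={transmission.mean():.3f}")
```

In the actual chip the summation happens optically via wavelength division multiplexing and the weights are set by partially switching the phase-change cells with laser pulses; the point of the sketch is only the weighting, summing and thresholding logic.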

Here’s a link to and a citation for the paper,

All-optical spiking neurosynaptic networks with self-learning capabilities by J. Feldmann, N. Youngblood, C. D. Wright, H. Bhaskaran & W. H. P. Pernice. Nature volume 569, pages 208–214 (2019) DOI: https://doi.org/10.1038/s41586-019-1157-8 Issue Date: 09 May 2019

This paper is behind a paywall.

For the curious, I found a little more information about Fun-COMP (functionally-scaled computer technology). It’s a European Commission (EC) Horizon 2020 project coordinated through the University of Exeter. For details such as the total cost, the contribution from the EC, the list of partners and more, there is the Fun-COMP webpage on fabiodisconzi.com.

Graphene and smart textiles

Here’s one of the more recent efforts to create fibres that are electronic and capable of being woven into a smart textile. (Details about a previous effort can be found at the end of this post.) Now for this one, from a Dec. 3, 2018 news item on ScienceDaily,

The quest to create affordable, durable and mass-produced ‘smart textiles’ has been given fresh impetus through the use of the wonder material Graphene.

An international team of scientists, led by Professor Monica Craciun from the University of Exeter Engineering department, has pioneered a new technique to create fully electronic fibres that can be incorporated into the production of everyday clothing.

A Dec. 3, 2018 University of Exeter press release (also on EurekAlert), provides more detail about the problems associated with wearable electronics and the solution being offered (Note: A link has been removed),

Currently, wearable electronics are achieved by essentially gluing devices to fabrics, which can mean they are too rigid and susceptible to malfunctioning.

The new research instead integrates the electronic devices into the fabric of the material, by coating electronic fibres with light-weight, durable components that will allow images to be shown directly on the fabric.

The research team believe that the discovery could revolutionise the creation of wearable electronic devices for use in a range of every day applications, as well as health monitoring, such as heart rates and blood pressure, and medical diagnostics.

The international collaborative research, which includes experts from the Centre for Graphene Science at the University of Exeter, the Universities of Aveiro and Lisbon in Portugal, and CenTexBel in Belgium, is published in the scientific journal Flexible Electronics.

Professor Craciun, co-author of the research, said: “For truly wearable electronic devices to be achieved, it is vital that the components are able to be incorporated within the material, and not simply added to it.”

Dr Elias Torres Alonso, Research Scientist at Graphenea and former PhD student in Professor Craciun’s team at Exeter, added: “This new research opens up the gateway for smart textiles to play a pivotal role in so many fields in the not-too-distant future. By weaving the graphene fibres into the fabric, we have created a new technique to allow the full integration of electronics into textiles. The only limits from now are really within our own imagination.”

At just one atom thick, graphene is the thinnest substance capable of conducting electricity. It is very flexible and is one of the strongest known materials. The race has been on for scientists and engineers to adapt graphene for the use in wearable electronic devices in recent years.

This new research used existing polypropylene fibres – typically used in a host of commercial applications in the textile industry – to attach the new, graphene-based electronic fibres to create touch-sensor and light-emitting devices.

The new technique means that the fabrics can incorporate truly wearable displays without the need for electrodes, wires or additional materials.

Professor Saverio Russo, co-author and from the University of Exeter Physics department, added: “The incorporation of electronic devices on fabrics is something that scientists have tried to produce for a number of years, and is a truly game-changing advancement for modern technology.”

Dr Ana Neves, co-author and also from Exeter’s Engineering department, added: “The key to this new technique is that the textile fibres are flexible, comfortable and light, while being durable enough to cope with the demands of modern life.”

In 2015, an international team of scientists, including Professor Craciun, Professor Russo and Dr Ana Neves from the University of Exeter, pioneered a new technique to embed transparent, flexible graphene electrodes into fibres commonly associated with the textile industry.

Here’s a link to and a citation for the paper,

Graphene electronic fibres with touch-sensing and light-emitting functionalities for smart textiles by Elias Torres Alonso, Daniela P. Rodrigues, Mukond Khetani, Dong-Wook Shin, Adolfo De Sanctis, Hugo Joulie, Isabel de Schrijver, Anna Baldycheva, Helena Alves, Ana I. S. Neves, Saverio Russo & Monica F. Craciun. Flexible Electronics volume 2, Article number: 25 (2018) DOI: https://doi.org/10.1038/s41528-018-0040-2 Published 25 September 2018

This paper is open access.

I have an earlier post about an effort to weave electronics into textiles for soldiers, from an April 5, 2012 posting,

I gather that today’s soldier (aka warfighter) is carrying as many batteries as weapons. Apparently, the average soldier carries a couple of kilos worth of batteries and cables to keep their various pieces of equipment operational. The UK’s Centre for Defence Enterprise (part of the Ministry of Defence) has announced that this situation is about to change as a consequence of a recently funded research project with a company called Intelligent Textiles. From Bob Yirka’s April 3, 2012 news item for physorg.com,

To get rid of the cables, a company called Intelligent Textiles has come up with a type of yarn that can conduct electricity, which can be woven directly into the fabric of the uniform. And because they allow the uniform itself to become one large conductive unit, the need for multiple batteries can be eliminated as well.

I dug down to find more information about this UK initiative and the Intelligent Textiles company but the trail seems to end in 2015. Still, I did find a Canadian connection (for those who don’t know, I’m a Canuck) and more about Intelligent Textiles’ work with the British military in this Sept. 21, 2015 article by Barry Collins for alphr.com (Note: Links have been removed),

A two-person firm operating from a small workshop in Staines-upon-Thames, Intelligent Textiles has recently landed a multimillion-pound deal with the US Department of Defense, and is working with the Ministry of Defence (MoD) to bring its potentially life-saving technology to British soldiers. Not bad for a company that only a few years ago was selling novelty cushions.

Intelligent Textiles was born in 2002, almost by accident. Asha Peta Thompson, an arts student at Central Saint Martins, had been using textiles to teach children with special needs. That work led to a research grant from Brunel University, where she was part of a team tasked with creating a “talking jacket” for the disabled. The garment was designed to help cerebral palsy sufferers to communicate, by pressing a button on the jacket to say “my name is Peter”, for example, instead of having a Stephen Hawking-like communicator in front of them.

Another member of that Brunel team was engineering lecturer Dr Stan Swallow, who was providing the electronics expertise for the project. Pretty soon, the pair realised the prototype waistcoat they were working on wasn’t going to work: it was cumbersome, stuffed with wires, and difficult to manufacture. “That’s when we had the idea that we could weave tiny mechanical switches into the surface of the fabric,” said Thompson.

The conductive weave had several advantages over packing electronics into garments. “It reduces the amount of cables,” said Thompson. “It can be worn and it’s also washable, so it’s more durable. It doesn’t break; it can be worn next to the skin; it’s soft. It has all the qualities of a piece of fabric, so it’s a way of repackaging the electronics in a way that’s more user-friendly and more comfortable.” The key to Intelligent Textiles’ product isn’t so much the nature of the raw materials used, but the way they’re woven together. “All our patents are in how we weave the fabric,” Thompson explained. “We weave two conductive yarns to make a tiny mechanical switch that is perfectly separated or perfectly connected. We can weave an electronic circuit board into the fabric itself.”

Intelligent Textiles’ big break into the military market came when they met a British textiles firm that was supplying camouflage gear to the Canadian armed forces. [emphasis mine] The firm was attending an exhibition in Canada and invited the Intelligent Textiles duo to join them. “We showed a heated glove and an iPod controller,” said Thompson. “The Canadians said ‘that’s really fantastic, but all we need is power. Do you think you could weave a piece of fabric that distributes power?’ We said, ‘we’re already doing it’.”Before long it wasn’t only power that the Canadians wanted transmitted through the fabric, but data.

“The problem a soldier faces at the moment is that he’s carrying 60 AA batteries [to power all the equipment he carries],” said Thompson. “He doesn’t know what state of charge those batteries are at, and they’re incredibly heavy. He also has wires and cables running around the system. He has snag hazards – when he’s going into a firefight, he can get caught on door handles and branches, so cables are a real no-no.”

The Canadians invited the pair to speak at a NATO conference, where they were approached by military brass with more familiar accents. “It was there that we were spotted by the British MoD, who said ‘wow, this is a British technology but you’re being funded by Canada’,” said Thompson. That led to £235,000 of funding from the Centre for Defence Enterprise (CDE) – the money they needed to develop a fabric wiring system that runs all the way through the soldier’s vest, helmet and backpack.
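As an aside, Thompson’s description of weaving “two conductive yarns to make a tiny mechanical switch” is, electrically, a row/column switch matrix, the same arrangement used in ordinary keyboards. Here’s a minimal simulation of how such a matrix is read (a generic sketch of my own; it has nothing to do with Intelligent Textiles’ patented weave or any real hardware),

```python
# Minimal simulation of scanning a woven row/column switch matrix.
# The "fabric" here is just a Python set of closed (row, col) crossings; real
# hardware would drive one conductive yarn at a time and sense the others.
# My own generic illustration, not Intelligent Textiles' design.

ROWS, COLS = 4, 4                           # a small 4x4 patch of fabric
closed_crossings = {(1, 2), (3, 0)}         # crossings currently being pressed

def read_column(row, col):
    """Stand-in for sensing whether a column yarn connects to the driven row yarn."""
    return (row, col) in closed_crossings

def scan_matrix():
    """Energize each row yarn in turn and record which column yarns respond."""
    pressed = []
    for row in range(ROWS):
        for col in range(COLS):
            if read_column(row, col):
                pressed.append((row, col))
    return pressed

print("pressed crossings:", scan_matrix())  # -> [(1, 2), (3, 0)]
```

The appeal for a fabric keyboard is that the number of conductive yarns grows with rows plus columns rather than with the number of keys, which presumably helps explain how a single piece of woven cloth can replace a many-component hardware keyboard (more on that below).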

There are more details about the 2015 state of affairs, textiles-wise, in a March 11, 2015 article by Richard Trenholm for CNET.com (Note: A link has been removed),

Speaking at the Wearable Technology Show here, Swallow describes ITL [Intelligent Textiles] as a textile company that “pretends to be a military company…it’s funny how you slip into these domains.”

One domain where this high-tech fabric has seen frontline action is in the Canadian military’s IAV Stryker armoured personnel carrier. ITL developed a full QWERTY keyboard in a single piece of fabric for use in the Stryker, replacing a traditional hardware keyboard that involved 100 components. Multiple components allow for repair, but ITL knits in redundancy so the fabric can “degrade gracefully”. The keyboard works the same as the traditional hardware, with the bonus that it’s less likely to fall on a soldier’s head, and with just one glaring downside: troops can no longer use it as a step for getting in and out of the vehicle.

An armoured car with knitted controls is one thing, but where the technology comes into its own is when used about the person. ITL has worked on vests like the JTAC, a system “for the guys who call down airstrikes” and need “extra computing oomph.” Then there’s SWIPES, a part of the US military’s Nett Warrior system — which uses a chest-mounted Samsung Galaxy Note 2 smartphone — and British military company BAE’s Broadsword system.

ITL is currently working on Spirit, a “truly wearable system” for the US Army and United States Marine Corps. It’s designed to be modular, scalable, intuitive and invisible.

While this isn’t an ITL product, this video about Broadsword technology from BAE does give you some idea of what wearable technology for soldiers is like,

baesystemsinc

Uploaded on Jul 8, 2014

Broadsword™ delivers groundbreaking technology to the 21st Century warfighter through interconnecting components that inductively transfer power and data via The Spine™, a revolutionary e-textile that can be inserted into any garment. This next-generation soldier system offers enhanced situational awareness when used with the BAE Systems’ Q-Warrior® see-through display.

If anyone should have the latest news about Intelligent Textile’s efforts, please do share in the comments section.

I do have one other posting about textiles and the military, which is dated May 9, 2012, but while it does reference US efforts it is not directly related to weaving electronics into soldiers’ (warfighters’) gear.

You can find CenTexBel (Belgian Textile Research Centre) here and Graphenea here. Both are mentioned in the University of Exeter press release.

Colo(u)r-changing bandage for better compression

This is a structural colo(u)r story, from a May 29, 2018 news item on Nanowerk,

Compression therapy is a standard form of treatment for patients who suffer from venous ulcers and other conditions in which veins struggle to return blood from the lower extremities. Compression stockings and bandages, wrapped tightly around the affected limb, can help to stimulate blood flow. But there is currently no clear way to gauge whether a bandage is applying an optimal pressure for a given condition.

Now engineers at MIT [Massachusetts Institute of Technology] have developed pressure-sensing photonic fibers that they have woven into a typical compression bandage. As the bandage is stretched, the fibers change color. Using a color chart, a caregiver can stretch a bandage until it matches the color for a desired pressure, before, say, wrapping it around a patient’s leg.

The photonic fibers can then serve as a continuous pressure sensor — if their color changes, caregivers or patients can use the color chart to determine whether and to what degree the bandage needs loosening or tightening.

A May 29, 2018 MIT news release (also on EurekAlert), which originated the news item, provides more detail,

“Getting the pressure right is critical in treating many medical conditions including venous ulcers, which affect several hundred thousand patients in the U.S. each year,” says Mathias Kolle, assistant professor of mechanical engineering at MIT. “These fibers can provide information about the pressure that the bandage exerts. We can design them so that for a specific desired pressure, the fibers reflect an easily distinguished color.”

Kolle and his colleagues have published their results in the journal Advanced Healthcare Materials. Co-authors from MIT include first author Joseph Sandt, Marie Moudio, and Christian Argenti, along with J. Kenji Clark of the University of Tokyo, James Hardin of the United States Air Force Research Laboratory, Matthew Carty of Brigham and Women’s Hospital-Harvard Medical School, and Jennifer Lewis of Harvard University.

Natural inspiration

The color of the photonic fibers arises not from any intrinsic pigmentation, but from their carefully designed structural configuration. Each fiber is about 10 times the diameter of a human hair. The researchers fabricated the fiber from ultrathin layers of transparent rubber materials, which they rolled up to create a jelly-roll-type structure. Each layer within the roll is only a few hundred nanometers thick.

In this rolled-up configuration, light reflects off each interface between individual layers. With enough layers of consistent thickness, these reflections interact to strengthen some colors in the visible spectrum, for instance red, while diminishing the brightness of other colors. This makes the fiber appear a certain color, depending on the thickness of the layers within the fiber.

“Structural color is really neat, because you can get brighter, stronger colors than with inks or dyes just by using particular arrangements of transparent materials,” Sandt says. “These colors persist as long as the structure is maintained.”

The fibers’ design relies upon an optical phenomenon known as “interference,” in which light, reflected from a periodic stack of thin, transparent layers, can produce vibrant colors that depend on the stack’s geometric parameters and material composition. Optical interference is what produces colorful swirls in oily puddles and soap bubbles. It’s also what gives peacocks and butterflies their dazzling, shifting shades, as their feathers and wings are made from similarly periodic structures.
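For the technically curious, that “strengthen some colors, diminish others” effect can be estimated with standard thin-film optics: for a periodic two-layer stack the first-order reflection peak sits near λ ≈ 2(n1·d1 + n2·d2), so thinner layers reflect bluer light. A back-of-the-envelope sketch of my own, with assumed refractive indices and thicknesses rather than values from the MIT paper,

```python
# Back-of-the-envelope estimate of the reflected colour of a periodic two-layer
# rubber stack, using the first-order Bragg condition lambda = 2*(n1*d1 + n2*d2).
# Indices and thicknesses below are assumptions for illustration, not the paper's values.

def bragg_peak_nm(n1, d1_nm, n2, d2_nm):
    """First-order reflection peak (nm) of a periodic two-layer stack."""
    return 2 * (n1 * d1_nm + n2 * d2_nm)

n1, n2 = 1.41, 1.54      # assumed refractive indices of the two transparent rubbers
d1, d2 = 110.0, 100.0    # assumed layer thicknesses in nanometres

print(f"unstretched peak: {bragg_peak_nm(n1, d1, n2, d2):.0f} nm")   # ~620 nm, reddish

# Stretching the fibre thins the layers (rubber is nearly incompressible),
# which shifts the reflection peak toward the blue.
for strain in (0.1, 0.2, 0.3):
    thinning = 1 / (1 + strain) ** 0.5        # rough incompressible-rubber estimate
    peak = bragg_peak_nm(n1, d1 * thinning, n2, d2 * thinning)
    print(f"strain {strain:.0%}: peak ~{peak:.0f} nm")
```

That thickness dependence is exactly what the strain-to-colour behaviour described below relies on.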

“My interest has always been in taking interesting structural elements that lie at the origin of nature’s most dazzling light manipulation strategies, to try recreating and employing them in useful applications,” Kolle says.

A multilayered approach

The team’s approach combines known optical design concepts with soft materials, to create dynamic photonic materials.

While a postdoc at Harvard in the group of Professor Joanna Aizenberg, Kolle was inspired by the work of Pete Vukusic, professor of biophotonics at the University of Exeter in the U.K., on Margaritaria nobilis, a tropical plant that produces extremely shiny blue berries. The fruits’ skin is made up of cells with a periodic cellulose structure, through which light can reflect to give the fruit its signature metallic blue color.

Together, Kolle and Vukusic sought ways to translate the fruit’s photonic architecture into a useful synthetic material. Ultimately, they fashioned multilayered fibers from stretchable materials, and assumed that stretching the fibers would change the individual layers’ thicknesses, enabling them to tune the fibers’ color. The results of these first efforts were published in Advanced Materials in 2013.

When Kolle joined the MIT faculty in the same year, he and his group, including Sandt, improved on the photonic fiber’s design and fabrication. In their current form, the fibers are made from layers of commonly used and widely available transparent rubbers, wrapped around highly stretchable fiber cores. Sandt fabricated each layer using spin-coating, a technique in which a rubber, dissolved into solution, is poured onto a spinning wheel. Excess material is flung off the wheel, leaving a thin, uniform coating, the thickness of which can be determined by the wheel’s speed.

For fiber fabrication, Sandt formed these two layers on top of a water-soluble film on a silicon wafer. He then submerged the wafer, with all three layers, in water to dissolve the water-soluble layer, leaving the two rubbery layers floating on the water’s surface. Finally, he carefully rolled the two transparent layers around a black rubber fiber, to produce the final colorful photonic fiber.
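A side note on that spin-coating step: for many dilute polymer solutions the film thickness falls off roughly as the inverse square root of the spin speed (the exact exponent depends on the solution and on solvent evaporation), which is how “the wheel’s speed” ends up setting the layer thickness. A rough sketch with an invented prefactor, not a value from the paper,

```python
# Rough illustration of the common spin-coating scaling: thickness ~ 1/sqrt(spin speed).
# The prefactor k is invented for illustration and depends in practice on the
# solution's viscosity, concentration and spin time, none of which are given here.

def film_thickness_nm(rpm, k=6000.0):
    """Approximate coat thickness in nm for a spin speed in rpm, t = k / sqrt(rpm)."""
    return k / rpm ** 0.5

for rpm in (1000, 2000, 4000):
    print(f"{rpm:>4} rpm  ->  ~{film_thickness_nm(rpm):.0f} nm per coat")
```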

Reflecting pressure

The team can tune the thickness of the fibers’ layers to produce any desired color tuning, using standard optical modeling approaches customized for their fiber design.

“If you want a fiber to go from yellow to green, or blue, we can say, ‘This is how we have to lay out the fiber to give us this kind of [color] trajectory,'” Kolle says. “This is powerful because you might want to have something that reflects red to show a dangerously high strain, or green for ‘ok.’ We have that capacity.”

The team fabricated color-changing fibers with a tailored, strain-dependent color variation using the theoretical model, and then stitched them along the length of a conventional compression bandage, which they previously characterized to determine the pressure that the bandage generates when it’s stretched by a certain amount.

The team used the relationship between bandage stretch and pressure, and the correlation between fiber color and strain, to draw up a color chart, matching a fiber’s color (produced by a certain amount of stretching) to the pressure that is generated by the bandage.
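Put another way, there are two calibration curves chained together: bandage stretch to pressure, and fibre colour to strain (and hence stretch). Here’s a schematic of how a colour chart could be assembled from those two relationships; every number below is an invented placeholder, not data from the paper,

```python
# Schematic of the two-step calibration described above: colour -> strain -> pressure.
# All numbers are invented placeholders for illustration, not the paper's data.
import numpy as np

# Step 1 (fibre characterization): reflected peak wavelength measured at known strains.
strain_points = np.array([0.00, 0.10, 0.20, 0.30])        # fractional stretch
peak_nm_points = np.array([620.0, 590.0, 565.0, 545.0])   # colour observed at each strain

# Step 2 (bandage characterization): pressure generated at each stretch, in mmHg.
pressure_points = np.array([0.0, 15.0, 30.0, 45.0])

def pressure_from_colour(peak_nm):
    """Look up strain from the observed colour, then pressure from that strain."""
    # np.interp needs an increasing x-axis, so interpolate against the reversed colour axis.
    strain = np.interp(peak_nm, peak_nm_points[::-1], strain_points[::-1])
    return np.interp(strain, strain_points, pressure_points)

for colour in (610, 580, 550):     # wavelengths a caregiver might match on the chart
    print(f"peak {colour} nm  ->  ~{pressure_from_colour(colour):.0f} mmHg")
```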

To test the bandage’s effectiveness, Sandt and Moudio enlisted over a dozen student volunteers, who worked in pairs to apply three different compression bandages to each other’s legs: a plain bandage, a bandage threaded with photonic fibers, and a commercially-available bandage printed with rectangular patterns. This bandage is designed so that when it is applying an optimal pressure, users should see that the rectangles become squares.

Overall, the bandage woven with photonic fibers gave the clearest pressure feedback. Students were able to interpret the color of the fibers, and based on the color chart, apply a corresponding optimal pressure more accurately than either of the other bandages.

The researchers are now looking for ways to scale up the fiber fabrication process. Currently, they are able to make fibers that are several inches long. Ideally, they would like to produce meters or even kilometers of such fibers at a time.

“Currently, the fibers are costly, mostly because of the labor that goes into making them,” Kolle says. “The materials themselves are not worth much. If we could reel out kilometers of these fibers with relatively little work, then they would be dirt cheap.”

Then, such fibers could be threaded into bandages, along with textiles such as athletic apparel and shoes as color indicators for, say, muscle strain during workouts. Kolle envisions that they may also be used as remotely readable strain gauges for infrastructure and machinery.

“Of course, they could also be a scientific tool that could be used in a broader context, which we want to explore,” Kolle says.

Here’s what the bandage looks like,

Caption: Engineers at MIT have developed pressure-sensing photonic fibers that they have woven into a typical compression bandage. Credit: Courtesy of the researchers

Here’s a link to and a citation for the paper,

Stretchable Optomechanical Fiber Sensors for Pressure Determination in Compressive Medical Textiles by Joseph D. Sandt, Marie Moudio, J. Kenji Clark, James Hardin, Christian Argenti, Matthew Carty, Jennifer A. Lewis, Mathias Kolle. Advanced Healthcare Materials https://doi.org/10.1002/adhm.201800293 First published: 29 May 2018

This paper is behind a paywall.

‘Green’ concrete with graphene

It’s thrilling, and I hope they are able to commercialize this technology, which makes concrete ‘greener’. From an April 23, 2018 news item on ScienceDaily,

A new greener, stronger and more durable concrete that is made using the wonder-material graphene could revolutionise the construction industry.

Experts from the University of Exeter [UK] have developed a pioneering new technique that uses nanoengineering technology to incorporate graphene into traditional concrete production.

The new composite material, which is more than twice as strong and four times more water resistant than existing concretes, can be used directly by the construction industry on building sites. All of the concrete samples tested comply with British and European standards for construction.

Crucially, the new graphene-reinforced concrete material also drastically reduces the carbon footprint of conventional concrete production methods, making it more sustainable and environmentally friendly.

The research team insist the new technique could pave the way for other nanomaterials to be incorporated into concrete, and so further modernise the construction industry worldwide.

I love the image they’ve included with the press materials (if they hadn’t told me I wouldn’t know that this is the ‘new’ concrete; to me, it looks just like the other stuff),

Caption: The new concrete developed using graphene by experts from the University of Exeter. Credit: Dimitar Dimov / University of Exeter

An April 23, 2018 University of Exeter press release (also on EurekAlert), which originated the news item, provides more details about the work, future applications, and its potential impact,

Professor Monica Craciun, co-author of the paper and from Exeter’s engineering department, said: “Our cities face a growing pressure from global challenges on pollution, sustainable urbanization and resilience to catastrophic natural events, amongst others.

“This new composite material is an absolute game-changer in terms of reinforcing traditional concrete to meet these needs. Not only is it stronger and more durable, but it is also more resistant to water, making it uniquely suitable for construction in areas which require maintenance work and are difficult to access.

“Yet perhaps more importantly, by including graphene we can reduce the amount of materials required to make concrete by around 50 per cent — leading to a significant reduction of 446 kg/tonne of carbon emissions.

“This unprecedented range of functionalities and properties uncovered are an important step in encouraging a more sustainable, environmentally-friendly construction industry worldwide.”

Previous work on using nanotechnology has concentrated on modifying existing components of cement, one of the main elements of concrete production.

In the innovative new study, the research team created a new technique that centres on suspending atomically thin graphene in water, achieving a high yield with no defects, at low cost, and in a way compatible with modern, large-scale manufacturing requirements.

Dimitar Dimov, the lead author and also from the University of Exeter, added: “This ground-breaking research is important as it can be applied to large-scale manufacturing and construction. The industry has to be modernised by incorporating not only off-site manufacturing, but innovative new materials as well.

“Finding greener ways to build is a crucial step forward in reducing carbon emissions around the world and so help protect our environment as much as possible. It is the first step, but a crucial step in the right direction to make a more sustainable construction industry for the future.”
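To put that 446 kg/tonne figure in perspective, here’s some quick back-of-the-envelope arithmetic of my own (I’m reading the figure as CO2 saved per tonne of concrete; the density and the size of the pour are assumptions, and only the per-tonne saving comes from the press release),

```python
# Rough arithmetic on the quoted figure of 446 kg of CO2 saved per tonne of concrete.
# The density and pour size below are my own assumptions for illustration.

CO2_SAVED_PER_TONNE_KG = 446     # figure quoted in the press release above
DENSITY_T_PER_M3 = 2.4           # assumed density of ordinary concrete, tonnes per cubic metre

pour_m3 = 100                    # an assumed, modest foundation pour
tonnes_of_concrete = pour_m3 * DENSITY_T_PER_M3
saved_tonnes_co2 = tonnes_of_concrete * CO2_SAVED_PER_TONNE_KG / 1000

print(f"{pour_m3} m^3 of concrete is roughly {tonnes_of_concrete:.0f} tonnes")
print(f"claimed saving: about {saved_tonnes_co2:.0f} tonnes of CO2")
```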

Here’s a link to and a citation for the paper,

Ultrahigh Performance Nanoengineered Graphene–Concrete Composites for Multifunctional Applications by Dimitar Dimov, Iddo Amit, Olivier Gorrie, Matthew D. Barnes, Nicola J. Townsend, Ana I. S. Neves, Freddie Withers, Saverio Russo, and Monica Felicia Craciun. Advanced Functional Materials https://doi.org/10.1002/adfm.201705183 First published: 23 April 2018

This paper is open access.