Glass sponge reefs: ‘living dinosaurs’ of the Pacific Northwest waters

Glass sponges in Howe Sound. Credit: Adam Taylor, MLSS [Marine Life Sanctuaries Society]

One of them looks to be screaming (Edvard Munch, anyone?) and none of them look how I imagined an oceanic ‘living dinosaur’ might. While the news is not in my main area of interest (emerging technology), it is close to home. A June 1, 2020 University of British Columbia news release (also on EurekAlert) describes the glass sponge reefs (living dinosaurs) in the Pacific Northwest and current concerns about their welfare,

Warming ocean temperatures and acidification drastically reduce the skeletal strength and filter-feeding capacity of glass sponges, according to new UBC research.

The findings, published in Scientific Reports, indicate that ongoing climate change could have serious, irreversible impacts on the sprawling glass sponge reefs of the Pacific Northwest and their associated marine life – the only known reefs of their kind in the world.

Ranging from the Alaska-Canada border and down through the Strait of Georgia, the reefs play an essential role in water quality by filtering microbes and cycling nutrients through food chains. They also provide critical habitat for many fish and invertebrates, including rockfish, spot prawns, herring, halibut and sharks.

“Glass sponge reefs are ‘living dinosaurs’ thought to have been extinct for 40 million years before they were re-discovered in B.C. in 1986,” said Angela Stevenson, who led the study as a postdoctoral fellow at UBC Zoology. “Their sheer size and tremendous filtration capacity put them at the heart of a lush and productive underwater system, so we wanted to examine how climate change might impact their survival.”

Although the reefs are subject to strong, ongoing conservation efforts focused on limiting damage to their delicate glass structures, scientists know little about how these sponges respond to environmental changes.

For the study, Stevenson harvested Aphrocallistes vastus, one of three types of reef-building glass sponges, from Howe Sound and brought them to UBC where she ran the first successful long-term lab experiment involving live sponges by simulating their natural environment as closely as possible.

She then tested their resilience by placing them in warmer and more acidic waters that mimicked future projected ocean conditions.

Over a period of four months, Stevenson measured changes to their pumping capacity, body condition and skeletal strength, which are critical indicators of their ability to feed and build reefs.

Within one month, ocean acidification and warming, alone and in combination, reduced the sponges’ pumping capacity by more than 50 per cent and caused tissue losses of 10 to 25 per cent, which could starve the sponges.

“Most worryingly, pumping began to slow within two weeks of exposure to elevated temperatures,” said Stevenson.

The combination of acidification and warming also made their bodies weaker and more elastic by half. That could curtail reef formation and cause brittle reefs to collapse under the weight of growing sponges or animals walking and swimming among them.

Year-long temperature data collected from Howe Sound reefs in 2016 suggest it’s only a matter of time before sponges are exposed to conditions which exceed these thresholds.

“In Howe Sound, we want to figure out a way to track changes in sponge growth, size and area in the field so we can better understand potential climate implications at a larger scale,” said co-author Jeff Marliave, senior research scientist at the Ocean Wise Research Institute. “We also want to understand the microbial food webs that support sponges and how they might be influenced by climate cycles.”

Stevenson credits bottom-up community-led efforts and strong collaborations with government for the healthy, viable state of the B.C. reefs today. Added support for such community efforts and educational programs will be key to relieving future pressures.

“When most people think about reefs, they think of tropical shallow-water reefs like the beautiful Great Barrier Reef in Australia,” added Stevenson. “But we have these incredible deep-water reefs in our own backyard in Canada. If we don’t do our best to stand up for them, it will be like discovering a herd of dinosaurs and then immediately dropping dynamite on them.”

Background:

The colossal reefs can grow to 19 metres in height and are built by larval sponges settling atop the fused dead skeletons of previous generations. In northern B.C. the reefs are found at depths of 90 to 300 metres, while in southern B.C., they can be found as shallow as 22 metres.

The sponges feed by pumping sea water through their delicate bodies, filtering almost 80 per cent of microbes and particles and expelling clean water.

It’s estimated that the 19 known reefs in the Salish Sea can filter 100 billion litres of water every day, equivalent to one per cent of the total water volume in the Strait of Georgia and Howe Sound combined.

Here’s a link to and a citation for the paper,

Warming and acidification threaten glass sponge Aphrocallistes vastus pumping and reef formation by A. Stevenson, S. K. Archer, J. A. Schultz, A. Dunham, J. B. Marliave, P. Martone & C. D. G. Harley. Scientific Reports volume 10, Article number: 8176 (2020) DOI: https://doi.org/10.1038/s41598-020-65220-9 Published 18 May 2020

This paper is open access.

Almost finally, there’s a brief video of the glass sponges in their habitat,

Circling back to Edvard Munch,

Courtesy of www.EdvardMunch.org [downloaded from https://www.edvardmunch.org/the-scream.jsp]

Here’s more about the painting, from The Scream webpage on edvardmunch.org,

Munch’s The Scream is an icon of modern art, the Mona Lisa for our time. As Leonardo da Vinci evoked a Renaissance ideal of serenity and self-control, Munch defined how we see our own age – wracked with anxiety and uncertainty.

Essentially The Scream is autobiographical, an expressionistic construction based on Munch’s actual experience of a scream piercing through nature while on a walk, after his two companions, seen in the background, had left him. …

For all the times I’ve seen the image, I had no idea the inspiration was acoustic.

In any event, the image seems sadly à propos both for the glass sponge reefs (and nature generally) and with regard to Black Lives Matter (BLM). A worldwide conflagration was ignited by George Floyd’s death in Minneapolis on May 25, 2020. This African-American man died while saying, “I can’t breathe,” as a police officer held Floyd down with a knee on his neck. RIP (rest in peace) George Floyd while the rest of us make the changes necessary, no matter how difficult, to create a just and respectful world for all. Black Lives Matter.

Nanodevices show (from the inside) how cells change

Embryo cells + nanodevices from University of Bath on Vimeo.

Caption: Five mouse embryos, each containing a nanodevice that is 22-millionths of a metre long. The film begins when the embryos are 2-hours old and continues for 5 hours. Each embryo is about 100-millionths of a metre in diameter. Credit: Professor Tony Perry

Fascinating, yes? Since I often watch before reading the caption, my first impression was of mysterious grey blobs moving around. Given the headline for the May 26, 2020 news item on ScienceDaily, I was expecting to see the squarish-shaped devices inside,

For the first time, scientists have introduced minuscule tracking devices directly into the interior of mammalian cells, giving an unprecedented peek into the processes that govern the beginning of development.

This work on one-cell embryos is set to shift our understanding of the mechanisms that underpin cellular behaviour in general, and may ultimately provide insights into what goes wrong in ageing and disease.

The research, led by Professor Tony Perry from the Department of Biology and Biochemistry at the University of Bath [UK], involved injecting a silicon-based nanodevice together with sperm into the egg cell of a mouse. The result was a healthy, fertilised egg containing a tracking device.

This image looks to have been enhanced with colour,

Fluorescence of an embryo containing a nanodevice. Courtesy: University of Bath

A May 25, 2020 University of Bath press release (also on EurekAlert but published May 26, 2020) provides more detail,

The tiny devices are a little like spiders, complete with eight highly flexible ‘legs’. The legs measure the ‘pulling and pushing’ forces exerted in the cell interior to a very high level of precision, thereby revealing the cellular forces at play and showing how intracellular matter rearranged itself over time.

The nanodevices are incredibly thin – similar to some of the cell’s structural components, and measuring 22 nanometres, making them approximately 100,000 times thinner than a pound coin. This means they have the flexibility to register the movement of the cell’s cytoplasm as the one-cell embryo embarks on its voyage towards becoming a two-cell embryo.
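For the curious, here is a quick back-of-the-envelope check of that pound coin comparison; the coin thickness of roughly 2.8 mm is my own figure (a standard £1 coin), not one given in the press release.

```python
# Rough sanity check of the "100,000 times thinner than a pound coin" claim.
device_thickness_nm = 22            # nanodevice thickness quoted in the press release
coin_thickness_nm = 2.8e-3 * 1e9    # ~2.8 mm for a British £1 coin, in nanometres

ratio = coin_thickness_nm / device_thickness_nm
print(f"A pound coin is roughly {ratio:,.0f} times thicker than the device")
# Prints roughly 127,273 -- on the order of 100,000, as the release states.
```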

“This is the first glimpse of the physics of any cell on this scale from within,” said Professor Perry. “It’s the first time anyone has seen from the inside how cell material moves around and organises itself.”

WHY PROBE A CELL’S MECHANICAL BEHAVIOUR?

The activity within a cell determines how that cell functions, explains Professor Perry. “The behaviour of intracellular matter is probably as influential to cell behaviour as gene expression,” he said. Until now, however, this complex dance of cellular material has remained largely unstudied. As a result, scientists have been able to identify the elements that make up a cell, but not how the cell interior behaves as a whole.

“From studies in biology and embryology, we know about certain molecules and cellular phenomena, and we have woven this information into a reductionist narrative of how things work, but now this narrative is changing,” said Professor Perry. The narrative was written largely by biologists, who brought with them the questions and tools of biology. What was missing was physics. Physics asks about the forces driving a cell’s behaviour, and provides a top-down approach to finding the answer.

“We can now look at the cell as a whole, not just the nuts and bolts that make it.”

Mouse embryos were chosen for the study because of their relatively large size (they measure 100 microns, or 100-millionths of a metre, in diameter, compared to a regular cell which is only 10 microns [10-millionths of a metre] in diameter). This meant that inside each embryo, there was space for a tracking device.

The researchers made their measurements by examining video recordings taken through a microscope as the embryo developed. “Sometimes the devices were pitched and twisted by forces that were even greater than those inside muscle cells,” said Professor Perry. “At other times, the devices moved very little, showing the cell interior had become calm. There was nothing random about these processes – from the moment you have a one-cell embryo, everything is done in a predictable way. The physics is programmed.”

The results add to an emerging picture of biology that suggests material inside a living cell is not static, but instead changes its properties in a pre-ordained way as the cell performs its function or responds to the environment. The work may one day have implications for our understanding of how cells age or stop working as they should, which is what happens in disease.

The study is published this week in Nature Materials and involved a trans-disciplinary partnership between biologists, materials scientists and physicists based in the UK, Spain and the USA.

Here’s a link to and a citation for the paper,

Tracking intracellular forces and mechanical property changes in mouse one-cell embryo development by Marta Duch, Núria Torras, Maki Asami, Toru Suzuki, María Isabel Arjona, Rodrigo Gómez-Martínez, Matthew D. VerMilyea, Robert Castilla, José Antonio Plaza & Anthony C. F. Perry. Nature Materials (2020) DOI: https://doi.org/10.1038/s41563-020-0685-9 Published 25 May 2020

This paper is behind a paywall.

Implanted biosensors could help sports professionals spy on themselves

A May 21, 2020 news item on Nanowerk describes the latest in sports self-monitoring research (or as I like to think of it, spying on yourself),

Researchers from the University of Surrey have revealed their new biodegradable motion sensor – paving the way for implanted nanotechnology that could help future sports professionals better monitor their movements to aid rapid improvements, or help caregivers remotely monitor people living with dementia.

A May 21, 2020 University of Surrey press release (also on EurekAlert), which originated the news item, mentions the collaboration with a South Korean university and provides a few details about this work,

In a paper published by Nano Energy, a team from Surrey’s Advanced Technology Institute (ATI), in partnership with Kyung Hee University in South Korea, detail how they developed a nano-biomedical motion sensor which can be paired with AI systems to recognise movements of distinct body parts.

The ATI’s technology builds on its previous work around triboelectric nanogenerators (TENG), where researchers used the technology to harness human movements and generate small amounts of electrical energy. Combining the two means self-powered sensors are possible without the need for chemical or wired power sources.

In their new research, the team from the ATI developed a flexible, biodegradable and long-lasting TENG from silk cocoon waste. They used a new alcohol treatment technique, which leads to greater durability for the device, even under harsh or humid environments.

Dr. Bhaskar Dudem, project lead and Research Fellow at the ATI, said: “We are excited to show the world the immense potential of our durable, silk film based nanogenerator. Its ability to work in severe environments while being able to generate electricity and monitor human movements positions our TENG in a class of its own when it comes to the technology.”

Professor Ravi Silva, Director of the ATI, said: “We are proud of Dr Dudem’s work which is helping the ATI lead the way in developing wearable, flexible, and biocompatible TENGs that efficiently harvest environmental energies. If we are to live in a future where autonomous sensing and detecting of pathogens is important, the ability to create both self-powered and wireless biosensors linked to AI is a significant boost.”

Here’s a link to and a citation for the paper,

Exploring theoretical and experimental optimization towards high-performance triboelectric nanogenerators using microarchitecture silk cocoon films by Bhaskar Dudem, R.D. Ishara G. Dharmasena, Sontyana Adonijah Graham, Jung Woo Leem, Harishkumarreddy Patnam, Anki Reddy Mule, S. Ravi P. Silva, Jae Su Yu. Nano Energy DOI: https://doi.org/10.1016/j.nanoen.2020.104882 Available online 11 May 2020, 104882

This paper is behind a paywall.

Low cost science tools and the ‘Thing Tank’

The Woodrow Wilson International Center for Scholars (or Wilson Center; located in Washington, DC) has a new initiative, the ‘Thing Tank’ (am enjoying the word play). It’s all about low cost science tools and their possible impact on the practice of science. Here’s more from a May 27, 2020 email notice,

From a foldable microscope made primarily from paper, to low cost and open microprocessors supporting research from cognitive neuroscience to oceanography, to low cost sensors measuring air quality in communities around the world, the things of science — that is, the physical tools that generate data or contribute to scientific processes — are changing the way that science happens.

The nature of tool design is changing, as more and more people share designs openly, create do-it-yourself (DIY) tools as a substitute for expensive, proprietary equipment, or design for mass production. The nature of tool access and use is changing too, as more tools become available at a price point that is do-able for non-professionals. This may be breaking down our reliance on expensive, proprietary designs traditionally needed to make scientific progress. This may also be building new audiences for tools, and making science more accessible to those traditionally limited by cost, geography, or infrastructure. But questions remain: will low cost and/or open tools become ubiquitous, replacing expensive, proprietary designs? Will the use of these tools fundamentally change how we generate data and knowledge, and apply it to global problems? Will the result be more, and better, science? And if so, what is standing in the way of widespread adoption and use?

In the Science and Technology Innovation Program at the Wilson Center, we often consider how new approaches to science are changing the way that science happens. Over the last five years, we’ve investigated how emerging enthusiasm in citizen science — the involvement of the public in scientific research — has changed the way that the public sees science, and contributes to data-driven decision-making. We have explored crowdsourcing and citizen science as two important paradigms of interest within and beyond US federal agencies, and investigated associated legal issues. We’ve documented how innovations in open science, especially open and FAIR data, can make information more shareable and impactful. Across our efforts, we explore and evaluate emerging technology and governance models with the goal of understanding how to maximize benefit and minimize risk. In the process, we convene scientists, practitioners, and policy makers to maximize the value of new approaches to science.

Now, we are expanding our attention to explore how innovation in the physical tools of science accelerate science, support decision-making, and broaden participation. We want to understand the current and potential value of these tools and approaches, and how they are changing the way we do science — now, and in the future.

THING Tank, our new initiative, fits well within the overall mission of the Wilson Center. As a think tank associated with the United States federal government, the Wilson Center is a boundary organization linking academia and the public policy community to create actionable research while bringing stakeholders together. Innovative and accessible tools for science are important to academia and policy alike. We hope to also bridge these perspectives with critical, on the ground activities, and understand and elevate the individuals, non-profits, community groups, and others working in this space.

The notice was in fact an excerpt from a May 19, 2020 article by Alison Parker and Anne Bowser on the Wilson Center website. I believe Bowser and Parker are the organizers behind the THING Tank initiative.

There are big plans for future activities such as workshops, a member directory and other outreach efforts. There’s also this,

We want to hear from you!

This space touches many communities, networks and stakeholders, from those advancing science, those working together to promote ideals of openness, to those developing solutions in a commercial context. No matter your interest, we want to hear from you! We’re looking for contributions to this effort, that can take a variety of forms:

  • Help us catch up to speed. We recognize that there are decades of foundational work and ongoing activities, and are eager to learn more.
  • Help us connect to broader communities, networks, and stakeholders. What is the best way to get broad input?  Who isn’t in our network, that should be?
  • Introduce your communities and stakeholders to public policy audiences by contributing blog posts and social media messaging – more information on this coming soon! 
  • Explore converging communities and accelerators and barriers by participating in workshops and events – definitely virtually, and hopefully in person as well. 
  • Contribute and review content about case studies, definitions, and accelerators and barriers.
  • Share our products with your networks if you think they are useful.

To start, we will host a series of virtual happy hours exploring the role of openness, authority, and community in open science and innovation for crisis and disaster response. How have tools for science impacted the response to COVID-19, and how is the governance of those devices, and their data, evolving in emergency use?

How one is to contact the organizers is not immediately clear to me. They’ve not included any contact details on that webpage but you can subscribe to the newsletter,

Stay informed. Join our THING Tank email list to get updates about our work in low cost hardware.

This is very exciting news and I hope to hear more about the initiative as it proceeds.

They all fall down or not? Quantum dot-doped nanoparticles for preserving national monuments and buildings

The most recent post here about preserving stone monuments and buildings (though not the most recent research) is a December 23, 2019 piece titled: Good for your bones and good for art conservation: calcium. Spanish researchers (who seem particularly active in this research niche) are investigating a more refined approach to preserving stone monuments with calcium, according to a May 8, 2020 news item on Nanowerk,

The fluorescence emitted by tiny zinc oxide quantum dots can be used to determine the penetration depth of certain substances used in the restoration of historical buildings. Researchers from Pablo de Olavide University (Spain) have tested this with samples collected from historical quarries in Cadiz, where the stone was used to build the city hall and cathedral of Seville.

One of the main problems in the preservation of historic buildings is the loss of cohesion of their building materials. Restorers use consolidating substances to make them more resistant, such as lime (calcium hydroxide), which has long been used because of its great durability and high compatibility with the carbonate stone substrate.

Now, researchers at Pablo de Olavide University, in Seville, have developed and patented calcium hydroxide nanoparticles doped with quantum dots that are more effective as a consolidant and make it possible to distinguish the restored material from the original, as is recommended for the conservation and restoration of historical heritage.

An April 28, 2020 Pablo de Olavide University press release (also on AlphaGalileo but published May 5, 2020), which originated the news item, provides more details about the nature of the materials,

“The tiny quantum dots, which are smaller than 10 nanometres, are made of zinc oxide and are semiconductors, which gives them very interesting properties (different from those of larger particles due to quantum mechanics), such as fluorescence, which is the one we use,” explains Javier Becerra, one of the authors.

“Thanks to the fluorescence of these quantum dots, we can evaluate the suitability of the treatment for a monument,” he adds. “We only need to illuminate with ultraviolet light a cross-section of the treated material to determine how far the consolidating matter has penetrated.”

In addition, the product, which the authors have named Nanorepair UV, acts as a consolidant due to the presence of the lime nanoparticles. Consolidation is a procedure that increases the degree of cohesion of a material, reinforcing and hardening the parts that have suffered some deterioration, which is frequent in historical buildings.

The researchers have successfully applied their technique to samples collected in the historic quarries of El Puerto de Santa María and Espera (Cadiz), from where the stone used to build such iconic monuments as Seville Cathedral, a World Heritage Site since 1987, or the town’s city hall, was extracted.

“In the laboratory, we thus obtain an approximation of how the treatment will behave when it is actually applied to the monuments,” says Becerra, who together with the rest of the team, is currently also testing mortars from the Italica and Medina Azahara archaeological sites.

Oddly, this work is not all that recently published. In any event, here’s a link to and a citation for the paper,

Nanolimes doped with quantum dots for stone consolidation assessment by Javier Becerra, Pilar Ortiz, José María Martín, Ana Paula Zaderenko. Construction and Building Materials Volume 199, 28 February 2019, Pages 581-593 DOI: https://doi.org/10.1016/j.conbuildmat.2018.12.077 Available online 19 December 2018

This paper is behind a paywall.

Comedy club performances show how robots and humans connect via humor

Caption: Naomi Fitter and Jon the Robot. Credit: Johanna Carson, OSU College of Engineering

Robot comedian is not my first thought on seeing that image; ventriloquist’s dummy is what came to mind. However, it’s not the first time I’ve been wrong about something. A May 19, 2020 news item on ScienceDaily reveals the truth about Jon, a comedian in robot form,

Standup comedian Jon the Robot likes to tell his audiences that he does lots of auditions but has a hard time getting bookings.

“They always think I’m too robotic,” he deadpans.

If raucous laughter follows, he comes back with, “Please tell the booking agents how funny that joke was.”

If it doesn’t, he follows up with, “Sorry about that. I think I got caught in a loop. Please tell the booking agents that you like me … that you like me … that you like me … that you like me.”

Jon the Robot, with assistance from Oregon State University researcher Naomi Fitter, recently wrapped up a 32-show tour of comedy clubs in greater Los Angeles and in Oregon, generating guffaws and, more importantly, data that scientists and engineers can use to help robots and people relate more effectively with one another via humor.

A May 18, 2020 Oregon State University (OSU) news release (also on EurekAlert), which originated the news item, delves further into this intriguing research,

“Social robots and autonomous social agents are becoming more and more ingrained in our everyday lives,” said Fitter, assistant professor of robotics in the OSU College of Engineering. “Lots of them tell jokes to engage users – most people understand that humor, especially nuanced humor, is essential to relationship building. But it’s challenging to develop entertaining jokes for robots that are funny beyond the novelty level.”

Live comedy performances are a way for robots to learn “in the wild” which jokes and which deliveries work and which ones don’t, Fitter said, just like human comedians do.

Two studies comprised the comedy tour, which included assistance from a team of Southern California comedians in coming up with material true to, and appropriate for, a robot comedian.

The first study, consisting of 22 performances in the Los Angeles area, demonstrated that audiences found a robot comic with good timing – giving the audience the right amounts of time to react, etc. – to be significantly more funny than one without good timing.

The second study, based on 10 routines in Oregon, determined that an “adaptive performance” – delivering post-joke “tags” that acknowledge an audience’s reaction to the joke – wasn’t necessarily funnier overall, but the adaptations almost always improved the audience’s perception of individual jokes. In the second study, all performances featured appropriate timing.

“In bad-timing mode, the robot always waited a full five seconds after each joke, regardless of audience response,” Fitter said. “In appropriate-timing mode, the robot used timing strategies to pause for laughter and continue when it subsided, just like an effective human comedian would. Overall, joke response ratings were higher when the jokes were delivered with appropriate timing.”
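For readers who like to see the logic spelled out, here is a minimal sketch of how I picture those two timing modes; the function names, threshold, and polling loop are my own assumptions (the release does not describe the researchers’ software at this level), and a real system would work from live microphone levels.

```python
import time

LAUGHTER_THRESHOLD = 0.3   # assumed normalized audio level that counts as laughter
POLL_INTERVAL = 0.1        # seconds between microphone checks
GRACE_PERIOD = 1.0         # give the audience a moment to start laughing

def bad_timing_pause(read_audio_level):
    """Fixed-delay mode: always wait a full five seconds, whatever the audience does."""
    time.sleep(5)

def appropriate_timing_pause(read_audio_level, max_wait=10.0):
    """Adaptive mode: pause while the audience is laughing, continue once it subsides."""
    waited = 0.0
    while waited < max_wait:
        laughing = read_audio_level() >= LAUGHTER_THRESHOLD
        if not laughing and waited >= GRACE_PERIOD:
            break                      # the room has quieted down; move on
        time.sleep(POLL_INTERVAL)
        waited += POLL_INTERVAL

def perform(jokes, read_audio_level, pause_strategy=appropriate_timing_pause):
    for joke in jokes:
        print(joke)                    # stand-in for the robot's text-to-speech output
        pause_strategy(read_audio_level)
```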

The number of performances, given to audiences of 10 to 20, provided enough data to identify significant differences between distinct modes of robot comedy performance, and the research helped to answer key questions about comedic social interaction, Fitter said.

“Audience size, social context, cultural context, the microphone-holding human presence and the novelty of a robot comedian may have influenced crowd responses,” Fitter said. “The current software does not account for differences in laughter profiles, but future work can account for these differences using a baseline response measurement. The only sensing we used to evaluate joke success was audio readings. Future work might benefit from incorporating additional types of sensing.”

Still, the studies have key implications for artificial intelligence efforts to understand group responses to dynamic, entertaining social robots in real-world environments, she said.

“Also, possible advances in comedy from this work could include improved techniques for isolating and studying the effects of comedic techniques and better strategies to help comedians assess the success of a joke or routine,” she said. “The findings will guide our next steps toward giving autonomous social agents improved humor capabilities.”

The studies were published by the Association for Computing Machinery [ACM]/Institute of Electrical and Electronics Engineers’ [IEEE] International Conference on Human-Robot Interaction [HRI].

Here’s a link to and a citation for the two studies, which were published as a single paper first presented at the 2020 ACM/IEEE International Conference on Human-Robot Interaction [HRI],

Comedians in Cafes Getting Data: Evaluating Timing and Adaptivity in Real-World Robot Comedy Performance by John Vilk and Naomi T Fitter. HRI ’20: Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction, March 2020, Pages 223–231 DOI: https://doi.org/10.1145/3319502.3374780

The paper is open access and the researchers have embedded an mp4 file which includes parts of the performances. Enjoy!

The Broad Institute gives us another reason to love CRISPR

More and more, this resembles a public relations campaign. First, CRISPR (clustered regularly interspaced short palindromic repeats) gene editing is going to be helpful with COVID-19 and now it can help us to deal with conservation issues. (See my May 26, 2020 posting about the latest CRISPR doings as of May 7, 2020; included is a brief description of the patent dispute between the Broad Institute and UC Berkeley and musings about a public relations campaign.)

A May 21, 2020 news item on ScienceDaily announces how CRISPR could be useful for conservation,

The gene-editing technology CRISPR has been used for a variety of agricultural and public health purposes — from growing disease-resistant crops to, more recently, a diagnostic test for the virus that causes COVID-19. Now a study involving fish that look nearly identical to the endangered Delta smelt finds that CRISPR can be a conservation and resource management tool, as well. The researchers think its ability to rapidly detect and differentiate among species could revolutionize environmental monitoring.

Caption: Longfin smelt can be difficult to differentiate from endangered Delta smelt. Here, a longfin smelt is swabbed for genetic identification through a CRISPR tool called SHERLOCK. Credit: Alisha Goodbla/UC Davis

A May 21, 2020 University of California at Davis (UC Davis) news release (also on EurekAlert) by Kat Kerlin, which originated the news item, provides more detail (Note: A link has been removed),

The study, published in the journal Molecular Ecology Resources, was led by scientists at the University of California, Davis, and the California Department of Water Resources in collaboration with MIT Broad Institute [emphasis mine].

As a proof of concept, it found that the CRISPR-based detection platform SHERLOCK (Specific High-sensitivity Enzymatic Reporter Unlocking) [emphasis mine] was able to genetically distinguish threatened fish species from similar-looking nonnative species in nearly real time, with no need to extract DNA.

“CRISPR can do a lot more than edit genomes,” said co-author Andrea Schreier, an adjunct assistant professor in the UC Davis animal science department. “It can be used for some really cool ecological applications, and we’re just now exploring that.”

WHEN GETTING IT WRONG IS A BIG DEAL

The scientists focused on three fish species of management concern in the San Francisco Estuary: the U.S. threatened and California endangered Delta smelt, the California threatened longfin smelt and the nonnative wakasagi. These three species are notoriously difficult to visually identify, particularly in their younger stages.

Hundreds of thousands of Delta smelt once lived in the Sacramento-San Joaquin Delta before the population crashed in the 1980s. Only a few thousand are estimated to remain in the wild.

“When you’re trying to identify an endangered species, getting it wrong is a big deal,” said lead author Melinda Baerwald, a project scientist at UC Davis at the time the study was conceived and currently an environmental program manager with California Department of Water Resources.

For example, state and federal water pumping projects have to reduce water exports if enough endangered species, like Delta smelt or winter-run chinook salmon, get sucked into the pumps. Rapid identification makes real-time decision making about water operations feasible.

FROM HOURS TO MINUTES

Typically to accurately identify the species, researchers rub a swab over the fish to collect a mucus sample or take a fin clip for a tissue sample. Then they drive or ship it to a lab for a genetic identification test and await the results. Not counting travel time, that can take, at best, about four hours.

SHERLOCK shortens this process from hours to minutes. Researchers can identify the species within about 20 minutes, at remote locations, noninvasively, with no specialized lab equipment. Instead, they use either a handheld fluorescence reader or a flow strip that works much like a pregnancy test — a band on the strip shows if the target species is present.

“Anyone working anywhere could use this tool to quickly come up with a species identification,” Schreier said.

OTHER CRYPTIC CRITTERS

While the three fish species were the only animals tested for this study, the researchers expect the method could be used for other species, though more research is needed to confirm. If so, this sort of onsite, real-time capability may be useful for confirming species at crime scenes, in the animal trade at border crossings, for monitoring poaching, and for other animal and human health applications.

“There are a lot of cryptic species we can’t accurately identify with our naked eye,” Baerwald said. “Our partners at MIT are really interested in pathogen detection for humans. We’re interested in pathogen detection for animals as well as using the tool for other conservation issues.”

Here’s a link to and a citation for the paper,

Rapid and accurate species identification for ecological studies and monitoring using CRISPR‐based SHERLOCK by Melinda R. Baerwald, Alisha M. Goodbla, Raman P. Nagarajan, Jonathan S. Gootenberg, Omar O. Abudayyeh, Feng Zhang, Andrea D. Schreier. Molecular Ecology Resources DOI: https://doi.org/10.1111/1755-0998.13186 First published: 12 May 2020

This paper is behind a paywall.

The business of CRISPR

SHERLOCK™ is a trademark for what Sherlock Biosciences calls one of its engineering biology platforms. From the Sherlock Biosciences Technology webpage,

What is SHERLOCK™?

SHERLOCK is an evolution of CRISPR technology, which others use to make precise edits in genetic code. SHERLOCK can detect the unique genetic fingerprints of virtually any DNA or RNA sequence in any organism or pathogen. Developed by our founders and licensed exclusively from the Broad Institute, SHERLOCK is a method for single molecule detection of nucleic acid targets and stands for Specific High Sensitivity Enzymatic Reporter unLOCKing. It works by amplifying genetic sequences and programming a CRISPR molecule to detect the presence of a specific genetic signature in a sample, which can also be quantified. When it finds those signatures, the CRISPR enzyme is activated and releases a robust signal. This signal can be adapted to work on a simple paper strip test, in laboratory equipment, or to provide an electrochemical readout that can be read with a mobile phone.
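To make that ‘detect a signature, release a signal’ idea concrete, here is a toy sketch of the detection logic only. It is my own illustration: it leaves out all of the actual biochemistry (isothermal amplification, the Cas enzyme, collateral cleavage of a reporter), and the sequences are made up.

```python
# Toy model of signature-based detection: a "guide" sequence is programmed to
# recognize a target signature; if the amplified sample contains it, a signal
# is reported (here, just a count standing in for fluorescence intensity).

def reverse_complement(seq: str) -> str:
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def detect(sample_reads, guide_signature):
    """Return a crude 'signal strength': how many reads carry the signature."""
    hits = 0
    for read in sample_reads:
        if guide_signature in read or reverse_complement(guide_signature) in read:
            hits += 1
    return hits

reads = ["TTGACCTGAACGTTAGC", "GGGTACCATCAGTTTAA", "ACCTGAACGTTAGCCTA"]
signature = "CTGAACGTTAGC"   # invented signature, not a real pathogen sequence
print("signal:", detect(reads, signature))   # signal: 2
```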

However, things get a little more confusing when you look at the Broad Institute’s Developing Diagnostics and Treatments webpage,

Ensuring the SHERLOCK diagnostic platform is easily accessible, especially in the developing world, where the need for inexpensive, reliable, field-based diagnostics is the most urgent

SHERLOCK (Specific High-sensitivity Enzymatic Reporter unLOCKing) is a CRISPR-based diagnostic tool that is rapid, inexpensive, and highly sensitive, with the potential to have a transformative effect on research and global public health. The SHERLOCK platform can detect viruses, bacteria, or other targets in clinical samples such as urine or blood, and reveal results on a paper strip — without the need for extensive specialized equipment. This technology could potentially be used to aid the response to infectious disease outbreaks, monitor antibiotic resistance, detect cancer, and more. SHERLOCK tools are freely available [emphasis mine] for academic research worldwide, and the Broad Institute’s licensing framework [emphasis mine] ensures that the SHERLOCK diagnostic platform is easily accessible in the developing world, where inexpensive, reliable, field-based diagnostics are urgently needed.

Here’s what I suspect: as stated, the Broad Institute offers free SHERLOCK licenses to academic institutions and not-for-profit organizations, while Sherlock Biosciences, a Broad Institute spinoff company, is for-profit and has trademarked SHERLOCK for commercial purposes.

Final thoughts

This looks like a relatively subtle campaign to influence public perceptions. Genetic modification or genetic engineering as exemplified by the CRISPR gene editing technique is a force for the good of all. It will help us in our hour of need (COVID-19 pandemic) and it can help us save various species and better manage our resources.

This contrasts greatly with the publicity generated by the CRISPR twins situation, where a scientist claimed to have successfully edited the germline for twins, Lulu and Nana. This was done despite a voluntary, worldwide moratorium on germline editing of viable embryos. (Search the terms [either here or on a standard search engine] ‘CRISPR twins’, ‘Lulu and Nana’, and/or ‘He Jiankui’ for details about the scandal.)

In addition to presenting CRISPR as beneficial in the short term rather than the distant future, this publicity also subtly positions the Broad Institute as CRISPR’s owner.

Or, maybe I’m wrong. Regardless, I’m watching.

US Food and Drug Administration (FDA) gives first authorization for CRISPR (clustered regularly interspaced short palindromic repeats) use in COVID-19 crisis

Clustered regularly interspaced short palindromic repeats (CRISPR) gene editing has been largely confined to laboratory use or tested in agricultural trials. I believe that is true worldwide excepting the CRISPR twin scandal. (There are numerous postings about the CRISPR twins here, including a Nov. 28, 2018 post, a May 17, 2019 post, and a June 20, 2019 post. Update: It was reported (3rd para.) in December 2019 that He had been sentenced to three years in jail.)

Connie Lin in a May 7, 2020 article for Fast Company reports on this surprising decision by the US Food and Drug Administration (FDA) (Note: A link has been removed),

The U.S. Food and Drug Administration has granted Emergency Use Authorization to a COVID-19 test that uses controversial gene-editing technology CRISPR.

This marks the first time CRISPR has been authorized by the FDA, although only for the purpose of detecting the coronavirus, and not for its far more contentious applications. The new test kit, developed by Cambridge, Massachusetts-based Sherlock Biosciences, will be deployed in laboratories certified to carry out high-complexity procedures and is “rapid,” returning results in about an hour as opposed to those that rely on the standard polymerase chain reaction method, which typically requires six hours.

The announcement was made in the FDA’s Coronavirus (COVID-19) Update: May 7, 2020 Daily Roundup (4th item in the bulleted list). Or, you can read the May 6, 2020 letter (PDF) sent to John Vozella of Sherlock Biosciences by the FDA.

As well, there’s the May 7, 2020 Sherlock Biosciences news release (the most informative of the lot),

Sherlock Biosciences, an Engineering Biology company dedicated to making diagnostic testing better, faster and more affordable, today announced the company has received Emergency Use Authorization (EUA) from the U.S. Food and Drug Administration (FDA) for its Sherlock™ CRISPR SARS-CoV-2 kit for the detection of the virus that causes COVID-19, providing results in approximately one hour.

“While it has only been a little over a year since the launch of Sherlock Biosciences, today we have made history with the very first FDA-authorized use of CRISPR technology, which will be used to rapidly identify the virus that causes COVID-19,” said Rahul Dhanda, co-founder, president and CEO of Sherlock Biosciences. “We are committed to providing this initial wave of testing kits to physicians, laboratory experts and researchers worldwide to enable them to assist frontline workers leading the charge against this pandemic.”

The Sherlock™ CRISPR SARS-CoV-2 test kit is designed for use in laboratories certified under the Clinical Laboratory Improvement Amendments of 1988 (CLIA), 42 U.S.C. §263a, to perform high complexity tests. Based on the SHERLOCK method, which stands for Specific High-sensitivity Enzymatic Reporter unLOCKing, the kit works by programming a CRISPR molecule to detect the presence of a specific genetic signature – in this case, the genetic signature for SARS-CoV-2 – in a nasal swab, nasopharyngeal swab, oropharyngeal swab or bronchoalveolar lavage (BAL) specimen. When the signature is found, the CRISPR enzyme is activated and releases a detectable signal. In addition to SHERLOCK, the company is also developing its INSPECTR™ platform to create an instrument-free, handheld test – similar to that of an at-home pregnancy test – that utilizes Sherlock Biosciences’ Synthetic Biology platform to provide rapid detection of a genetic match of the SARS-CoV-2 virus.

“When our lab collaborated with Dr. Feng Zhang’s team to develop SHERLOCK, we believed that this CRISPR-based diagnostic method would have a significant impact on global health,” said James J. Collins, co-founder and board member of Sherlock Biosciences and Termeer Professor of Medical Engineering and Science for MIT’s Institute for Medical Engineering and Science (IMES) and Department of Biological Engineering. “During what is a major healthcare crisis across the globe, we are heartened that the first FDA-authorized use of CRISPR will aid in the fight against this global COVID-19 pandemic.”

Access to rapid diagnostics is critical for combating this pandemic and is a primary focus for Sherlock Biosciences co-founder and board member, David R. Walt, Ph.D., who co-leads the Mass [Massachusetts] General Brigham Center for COVID Innovation.

“SHERLOCK enables rapid identification of a single alteration in a DNA or RNA sequence in a single molecule,” said Dr. Walt. “That precision, coupled with its capability to be deployed to multiplex over 100 targets or as a simple point-of-care system, will make it a critical addition to the arsenal of rapid diagnostics already being used to detect COVID-19.”

This development is particularly interesting since there was a major intellectual property dispute over CRISPR between the Broad Institute (a Harvard University and Massachusetts Institute of Technology [MIT] joint initiative), and the University of California at Berkeley (UC Berkeley). The Broad Institute mostly won in the first round of the patent fight, as I noted in a March 15, 2017 post but, as far as I’m aware, UC Berkeley is still disputing that decision.

In the period before receiving authorization, it appears that Sherlock Biosciences was doing a little public relations and ‘consciousness raising’ work. Here’s a sample from a May 5, 2020 article by Sharon Begley for STAT (Note: Links have been removed),

The revolutionary genetic technique better known for its potential to cure thousands of inherited diseases could also solve the challenge of Covid-19 diagnostic testing, scientists announced on Tuesday. A team headed by biologist Feng Zhang of the McGovern Institute at MIT and the Broad Institute has repurposed the genome-editing tool CRISPR into a test able to quickly detect as few as 100 coronavirus particles in a swab or saliva sample.

Crucially, the technique, dubbed a “one pot” protocol, works in a single test tube and does not require the many specialty chemicals, or reagents, whose shortage has hampered the rollout of widespread Covid-19 testing in the U.S. It takes about an hour to get results, requires minimal handling, and in preliminary studies has been highly accurate, Zhang told STAT. He and his colleagues, led by the McGovern’s Jonathan Gootenberg and Omar Abudayyeh, released the protocol on their STOPCovid.science website.

Because the test has not been approved by the Food and Drug Administration, it is only for research purposes for now. But minutes before speaking to STAT on Monday, Zhang and his colleagues were on a conference call with FDA officials about what they needed to do to receive an “emergency use authorization” that would allow clinical use of the test. The FDA has used EUAs to fast-track Covid-19 diagnostics as well as experimental therapies, including remdesivir, after less extensive testing than usually required.

For an EUA, the agency will require the scientists to validate the test, which they call STOPCovid, on dozens to hundreds of samples. Although “it is still early in the process,” Zhang said, he and his colleagues are confident enough in its accuracy that they are conferring with potential commercial partners who could turn the test into a cartridge-like device, similar to a pregnancy test, enabling Covid-19 testing at doctor offices and other point-of-care sites.

“It could potentially even be used at home or at workplaces,” Zhang said. “It’s inexpensive, does not require a lab, and can return results within an hour using a paper strip, not unlike a pregnancy test. This helps address the urgent need for widespread, accurate, inexpensive, and accessible Covid-19 testing.” Public health experts say the availability of such a test is one of the keys to safely reopening society, which will require widespread testing, and then tracing and possibly isolating the contacts of those who test positive.

If you have time, do read Begley’s article in full.

Entangling 15 trillion atoms is a hot and messy business

A May 15, 2020 news item on Nanowerk provides context for an announcement of a research breakthrough on quantum entanglement,

Quantum entanglement is a process by which microscopic objects like electrons or atoms lose their individuality to become better coordinated with each other. Entanglement is at the heart of quantum technologies that promise large advances in computing, communications and sensing, for example detecting gravitational waves.

Entangled states are famously fragile: in most cases even a tiny disturbance will undo the entanglement. For this reason, current quantum technologies take great pains to isolate the microscopic systems they work with, and typically operate at temperatures close to absolute zero.

The ICFO [Institute of Photonic Sciences; Spain] team, in contrast, heated a collection of atoms to 450 Kelvin, millions of times hotter than most atoms used for quantum technology. Moreover, the individual atoms were anything but isolated; they collided with each other every few microseconds, and each collision set their electrons spinning in random directions.

Caption: Artistic illustration of a cloud of atoms with pairs of particles entangled between each other, represented by the yellow-blue lines. Image credit: © ICFO

A May 15, 2020 (?) ICFO press release (also on EurekAlert), which originated the news item, delves further into the details about the research,

The researchers used a laser to monitor the magnetization of this hot, chaotic gas. The magnetization is caused by the spinning electrons in the atoms, and provides a way to study the effect of the collisions and to detect entanglement. What the researchers observed was an enormous number of entangled atoms – about 100 times more than ever before observed. They also saw that the entanglement is non-local – it involves atoms that are not close to each other. Between any two entangled atoms there are thousands of other atoms, many of which are entangled with still other atoms, in a giant, hot and messy entangled state.

What they also saw, as Jia Kong, first author of the study, recalls, “is that if we stop the measurement, the entanglement remains for about 1 millisecond, which means that 1000 times per second a new batch of 15 trillion atoms is being entangled. And you must think that 1 ms is a very long time for the atoms, long enough for about fifty random collisions to occur. This clearly shows that the entanglement is not destroyed by these random events. This is maybe the most surprising result of the work”.

The observation of this hot and messy entangled state paves the way for ultra-sensitive magnetic field detection. For example, in magnetoencephalography (magnetic brain imaging), a new generation of sensors uses these same hot, high-density atomic gases to detect the magnetic fields produced by brain activity. The new results show that entanglement can improve the sensitivity of this technique, which has applications in fundamental brain science and neurosurgery.

As ICREA [Catalan Institution for Research and Advanced Studies] Prof. at ICFO Morgan Mitchell states, “this result is surprising, a real departure from what everyone expects of entanglement.” He adds, “we hope that this kind of giant entangled state will lead to better sensor performance in applications ranging from brain imaging to self-driving cars to searches for dark matter.”

A Spin Singlet and QND

A spin singlet is one form of entanglement where the multiple particles’ spins–their intrinsic angular momentum–add up to 0, meaning the system has zero total angular momentum. In this study, the researchers applied quantum non-demolition (QND) measurement to extract the information of the spin of trillions of atoms. The technique passes laser photons with a specific energy through the gas of atoms. These photons with this precise energy do not excite the atoms but they themselves are affected by the encounter. The atoms’ spins act as magnets to rotate the polarization of the light. By measuring how much the photons’ polarization has changed after passing through the cloud, the researchers are able to determine the total spin of the gas of atoms.
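For those who want the math, the textbook two-particle spin singlet and the basic idea of the polarization-rotation (Faraday) readout look like this; these are standard expressions rather than formulas taken from the paper.

```latex
% Two-spin singlet: the spins add to zero total angular momentum,
% and the state cannot be written as a product of single-particle states.
\[
  \lvert \psi^- \rangle
  = \frac{1}{\sqrt{2}}
    \bigl( \lvert \uparrow \downarrow \rangle - \lvert \downarrow \uparrow \rangle \bigr),
  \qquad
  \hat{S}^2_{\mathrm{tot}} \, \lvert \psi^- \rangle = 0 .
\]

% QND readout: the probe light's polarization rotates by an angle proportional
% to the collective spin component along the beam, so the total spin of the gas
% is read out without being destroyed.
\[
  \theta \;\propto\; \hat{F}_z = \sum_{i=1}^{N} \hat{f}^{(i)}_z .
\]
```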

The SERF regime

Current magnetometers operate in a regime that is called SERF, far away from the near absolute zero temperatures that researchers typically employ to study entangled atoms. In this regime, any atom experiences many random collisions with other neighbouring atoms, making collisions the most important effect on the state of the atom. In addition, because they are in a hot medium rather than an ultracold one, the collisions rapidly randomize the spin of the electrons in any given atom. The experiment shows, surprisingly, that this kind of disturbance does not break the entangled states, it merely passes the entanglement from one atom to another.

Here’s a link to and a citation for the paper,

Measurement-induced, spatially-extended entanglement in a hot, strongly-interacting atomic system by Jia Kong, Ricardo Jiménez-Martínez, Charikleia Troullinou, Vito Giovanni Lucivero, Géza Tóth & Morgan W. Mitchell. Nature Communications volume 11, Article number: 2415 (2020) DOI: https://doi.org/10.1038/s41467-020-15899-1 Published 15 May 2020

This paper is open access.

Artificial intelligence (AI) consumes a lot of energy but tree-like memory may help conserve it

A simulation of a quantum material’s properties reveals its ability to learn numbers, a test of artificial intelligence. (Purdue University image/Shakti Wadekar)

A May 7, 2020 Purdue University news release (also on EurekAlert) describes a new approach for energy-efficient hardware in support of artificial intelligence (AI) systems,

To just solve a puzzle or play a game, artificial intelligence can require software running on thousands of computers. That could be the energy that three nuclear plants produce in one hour.

A team of engineers has created hardware that can learn skills using a type of AI that currently runs on software platforms. Sharing intelligence features between hardware and software would offset the energy needed for using AI in more advanced applications such as self-driving cars or discovering drugs.

“Software is taking on most of the challenges in AI. If you could incorporate intelligence into the circuit components in addition to what is happening in software, you could do things that simply cannot be done today,” said Shriram Ramanathan, a professor of materials engineering at Purdue University.

AI hardware development is still in early research stages. Researchers have demonstrated AI in pieces of potential hardware, but haven’t yet addressed AI’s large energy demand.

As AI penetrates more of daily life, a heavy reliance on software with massive energy needs is not sustainable, Ramanathan said. If hardware and software could share intelligence features, an area of silicon might be able to achieve more with a given input of energy.

Ramanathan’s team is the first to demonstrate artificial “tree-like” memory in a piece of potential hardware at room temperature. Researchers in the past have only been able to observe this kind of memory in hardware at temperatures that are too low for electronic devices.

The results of this study are published in the journal Nature Communications.

The hardware that Ramanathan’s team developed is made of a so-called quantum material. These materials are known for having properties that cannot be explained by classical physics. Ramanathan’s lab has been working to better understand these materials and how they might be used to solve problems in electronics.

Software uses tree-like memory to organize information into various “branches,” making that information easier to retrieve when learning new skills or tasks.

The strategy is inspired by how the human brain categorizes information and makes decisions.

“Humans memorize things in a tree structure of categories. We memorize ‘apple’ under the category of ‘fruit’ and ‘elephant’ under the category of ‘animal,’ for example,” said Hai-Tian Zhang, a Lillian Gilbreth postdoctoral fellow in Purdue’s College of Engineering. “Mimicking these features in hardware is potentially interesting for brain-inspired computing.”
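As a loose software analogy for that fruit/animal example (entirely my own toy sketch; it says nothing about how the nickel oxide device itself stores information), tree-like memory can be pictured like this,

```python
# A toy tree-structured memory: items are stored under nested categories,
# so retrieval walks down a branch instead of scanning a flat list.
class MemoryNode:
    def __init__(self, name):
        self.name = name
        self.children = {}   # category name -> MemoryNode
        self.items = set()   # things memorized directly under this category

    def branch(self, name):
        """Return (creating if needed) a child category."""
        return self.children.setdefault(name, MemoryNode(name))

    def find(self, item, path=()):
        """Depth-first search for an item; returns the branch it lives on."""
        if item in self.items:
            return path + (self.name,)
        for child in self.children.values():
            found = child.find(item, path + (self.name,))
            if found:
                return found
        return None

memory = MemoryNode("things")
memory.branch("fruit").items.add("apple")
memory.branch("animal").items.add("elephant")

print(memory.find("apple"))     # ('things', 'fruit')
print(memory.find("elephant"))  # ('things', 'animal')
```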

The team introduced a proton to a quantum material called neodymium nickel oxide. They discovered that applying an electric pulse to the material moves around the proton. Each new position of the proton creates a different resistance state, which creates an information storage site called a memory state. Multiple electric pulses create a branch made up of memory states.

“We can build up many thousands of memory states in the material by taking advantage of quantum mechanical effects. The material stays the same. We are simply shuffling around protons,” Ramanathan said.

Through simulations of the properties discovered in this material, the team showed that the material is capable of learning the numbers 0 through 9. The ability to learn numbers is a baseline test of artificial intelligence.

The demonstration of these trees at room temperature in a material is a step toward showing that hardware could offload tasks from software.

“This discovery opens up new frontiers for AI that have been largely ignored because implementing this kind of intelligence into electronic hardware didn’t exist,” Ramanathan said.

The material might also help create a way for humans to more naturally communicate with AI.

“Protons also are natural information transporters in human beings. A device enabled by proton transport may be a key component for eventually achieving direct communication with organisms, such as through a brain implant,” Zhang said.

Here’s a link to and a citation for the published study,

Perovskite neural trees by Hai-Tian Zhang, Tae Joon Park, Shriram Ramanathan. Nature Communications volume 11, Article number: 2245 (2020) DOI: https://doi.org/10.1038/s41467-020-16105-y Published: 07 May 2020

This paper is open access.