Study says quantum computing will radically alter the application of copyright law

I was expecting more speculation about the possibilities that quantum computing might afford with regard to copyright law. According to the press release, this study is primarily focused on the impact that greater computing speed and power will have on copyright and, presumably, other forms of intellectual property. From a March 4, 2024 University of Exeter press release (also on EurekAlert),

Quantum computing will radically transform the application of the law – challenging long-held notions of copyright, a new study says.

Faster computing will bring exponentially greater possibilities in the tracking and tracing of the legal owners of art, music, culture and books.  

This is likely to mean more copyright infringements, but also make it easier for lawyers to clamp down on lawbreaking. However, faster computers will also be able to potentially break and get around certain older enforcement technologies.

The research says quantum computing could lead to an “exponentially” greater number of re-uses of copyright works without permission, and tracking of anyone breaking the law is likely to be possible in many circumstances.

Dr James Griffin, from the University of Exeter [UK] Law School, who led the study, said: “Quantum computers will have sufficient computing power to be able to make judgement calls [emphasis mine] as to whether or not re-uses are likely to be copyright infringements, skirting the boundaries of the law in a way that has yet to be fully tested in practice.

“Copyright infringements could become more commonplace due to the use of quantum computers, but the enforcement of such laws could also increase. This will potentially favour certain forms of content over others.”

Content with embedded quantum watermarks will be more likely to be protected than earlier forms of content without such watermarks. The exponential speed that quantum computing brings will make it easier to produce more copies of existing copyright works.

Existing artworks will be altered on a large scale for use in AI-generated artistic works. Enhanced computing power will see the reuse of elements of films such as scenes, characters, music and scripts.

Dr Griffin said: “The nature of quantum computing also means that there could be more enforcement of copyright law. We can expect that there will be more use of technological protection measures, as well as copyright management information devices such as watermarks, and more use of filtering mechanisms to be able to detect, prevent and contain copyright infringements.

Copyright management information techniques are better suited to quantum computers because they allow for more finely grained analysis of potential infringements, and because they require greater computing power to be applied broadly both to computer software and to the actions of the users of such software.

Dr Griffin said: “A quantum paradox [emphasis mine] is thus developing, in that there are likely to be more infringements possible, whilst technical devices will simultaneously develop in an attempt to prevent any alleged possible or potential copyright infringements. Content will increasingly be made in a manner difficult to break, with enhanced encryption.

“Meanwhile, due to the expense of large-scale quantum computing, we can expect more content to be streamed and less owned; content will be kept remotely in order to enhance the notion that utilising such data in breach of contractual terms would be akin to breaking into someone’s physical house or committing a similar fraudulent activity.

Quantum computers enable creators to make a large number of small-scale works. This could pose challenges regarding the tests of copyright originality. For example, a story written for a quantum computer game could be constantly changing and evolving according to the actions of the player, not just according to predefined paths but utilising complex AI algorithms. [emphasis mine]

Some interesting issues are raised in this press release. (1) Can any computer, quantum or otherwise, make a judgment call? (2) The ‘quantum paradox’ seems like a perfectly predictable outcome. After all, regular computers facilitated all kinds of new opportunities for infringement and prevention. What makes this a ‘quantum paradox’? (3) The evolving computer game seems more like an AI issue. What makes this a quantum computing problem? The answers to these questions may be in the study but that presents a problem.

Ordinarily, I’d offer a link to the study but it’s not accessible until 2025. Here’s a citation,

Quantum Computing and Copyright Law: A Wave of Change or a Mere Irrelevant Particle? by James G. H. Griffin. Intellectual Property Quarterly 2024 Issue 1, pp. 22 – 39. Published February 21, 2024. Under embargo until 21 February 2025 [emphasis mine] in compliance with publisher policy

There is an online record for the study on this Open Research Exeter (ORE) webpage where you can request a copy of the paper.

Archaeomagnetism, anomalies in space, and 3,000-year-old Babylonian bricks

While I don’t usually cover the topic of magnetic fields, this fascinating research required a combination of science and the humanities, a topic of some interest to me. First, there’s the news and then excerpts from Rae Hodge’s December 25, 2023 commentary “How 3,000-year-old Babylonian tablets help scientists unravel one of the weirdest mysteries in space” for Salon.

A December 19, 2023 University College London (UCL) press release (also on EurekAlert but published December 18, 2023) explains how Babylonian artefacts led to a discovery about Earth’s magnetic field,

Ancient bricks inscribed with the names of Mesopotamian kings have yielded important insights into a mysterious anomaly in Earth’s magnetic field 3,000 years ago, according to a new study involving UCL researchers.

The research, published in the Proceedings of the National Academy of Sciences (PNAS), describes how changes in the Earth’s magnetic field imprinted on iron oxide grains within ancient clay bricks, and how scientists were able to reconstruct these changes from the names of the kings inscribed on the bricks.

The team hopes that using this “archaeomagnetism,” which looks for signatures of the Earth’s magnetic field in archaeological items, will improve the history of Earth’s magnetic field, and can help better date artefacts that they previously couldn’t.

Co-author Professor Mark Altaweel (UCL Institute of Archaeology) said: “We often depend on dating methods such as radiocarbon dates to get a sense of chronology in ancient Mesopotamia. However, some of the most common cultural remains, such as bricks and ceramics, cannot typically be easily dated because they don’t contain organic material. This work now helps create an important dating baseline that allows others to benefit from absolute dating using archaeomagnetism.”

The Earth’s magnetic field weakens and strengthens over time, changes which imprint a distinct signature on hot minerals that are sensitive to the magnetic field. The team analysed the latent magnetic signature in grains of iron oxide minerals embedded in 32 clay bricks originating from archaeological sites throughout Mesopotamia, which now overlaps with modern day Iraq. The strength of the planet’s magnetic field was imprinted upon the minerals when they were first fired by the brickmakers thousands of years ago.

At the time they were made, each brick was inscribed with the name of the reigning king which archaeologists have dated to a range of likely timespans. Together, the imprinted name and the measured magnetic strength of the iron oxide grains offered a historical map of the changes to the strength of the Earth’s magnetic field.

The researchers were able to confirm the existence of the “Levantine Iron Age geomagnetic Anomaly,” a period when Earth’s magnetic field was unusually strong around modern Iraq between about 1050 to 550 BCE for unclear reasons. Evidence of the anomaly has been detected as far away as China, Bulgaria and the Azores, but data from within the southern part of the Middle East itself had been sparse.

Lead author Professor Matthew Howland of Wichita State University said: “By comparing ancient artefacts to what we know about ancient conditions of the magnetic field, we can estimate the dates of any artifacts that were heated up in ancient times.”

To measure the iron oxide grains, the team carefully chipped tiny fragments from broken faces of the bricks and used a magnetometer to precisely measure the fragments.

By mapping out the changes in Earth’s magnetic field over time, this data also offers archaeologists a new tool to help date some ancient artefacts. The magnetic strength of iron oxide grains embedded within fired items can be measured and then matched up to the known strengths of Earth’s historic magnetic field. The reigns of kings lasted from years to decades, which offers better resolution than radiocarbon dating which only pinpoints an artefact’s date to within a few hundred years.
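For readers who like to see how that matching might work, here’s a minimal sketch of the idea, with invented numbers; it is not the team’s data or code,

```python
# Illustrative sketch of archaeomagnetic dating: compare a measured field
# intensity (recorded by iron oxide grains when a brick was fired) with a
# dated reference curve. All numbers are invented for illustration only.

reference_curve = [
    # (year BCE, field intensity in microtesla) -- hypothetical values
    (1300, 55.0),
    (1100, 68.0),
    (900, 80.0),   # anomaly period: unusually strong field
    (700, 75.0),
    (500, 60.0),
]

def candidate_periods(measured, uncertainty, curve):
    """Return the reference years whose field intensity falls within the
    measurement's uncertainty band."""
    return [year for year, intensity in curve
            if abs(intensity - measured) <= uncertainty]

# A brick whose grains record roughly 78 +/- 4 microtesla:
print(candidate_periods(78.0, 4.0, reference_curve))  # -> [900, 700]
```

Combining a shortlist like that with the reign of the king named on the brick is what lets the researchers narrow the date down further.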

An additional benefit of the archaeomagnetic dating of the artefacts is it can help historians more precisely pinpoint the reigns of some of the ancient kings that have been somewhat ambiguous. Though the length and order of their reigns is well known, there has been disagreement within the archaeological community about the precise years they took the throne resulting from incomplete historical records. The researchers found that their technique lined up with an understanding of the kings’ reigns known to archaeologists as the “Low Chronology”.

The team also found that in five of their samples, taken during the reign of Nebuchadnezzar II from 604 to 562 BCE, the Earth’s magnetic field seemed to change dramatically over a relatively short period of time, adding evidence to the hypothesis that rapid spikes in intensity are possible.

Co-author Professor Lisa Tauxe of the Scripps Institution of Oceanography (US) said: “The geomagnetic field is one of the most enigmatic phenomena in earth sciences. The well-dated archaeological remains of the rich Mesopotamian cultures, especially bricks inscribed with names of specific kings, provide an unprecedented opportunity to study changes in the field strength in high time resolution, tracking changes that occurred over several decades or even less.”

The research was carried out with funding from the U.S.-Israel Binational Science Foundation.

Here’s a link to and a citation for the paper,

Exploring geomagnetic variations in ancient Mesopotamia: Archaeomagnetic study of inscribed bricks from the 3rd–1st millennia BCE by Matthew D. Howland, Lisa Tauxe, Shai Gordin, and Erez Ben-Yosef. PNAS (Proceedings of the National Academy of Sciences) December 18, 2023 120 (52) e2313361120 DOI: https://doi.org/10.1073/pnas.2313361120

This paper is behind a paywall.

The Humanities and their importance to STEM (science, technology, engineering, and mathematics)

Rae Hodge’s December 25, 2023 commentary explains why magnetic fields might be of interest to a member of the general public (that’s me) and more about the interdisciplinarity that drove the project. Note 1: This is a US-centric view but the situation in Canada (and I suspect elsewhere) is similar. Note 2: Links have been removed,

Among the most enigmatic mysteries of modern science are the strange anomalies which appear from time to time in the earth’s geomagnetic field. It can seem like the laws of physics behave differently in some places, with unnerving and bizarre results — spacecraft become glitchy, the Hubble Space Telescope can’t capture observations and satellite communications go on the fritz. Some astronauts orbiting past the anomalies report blinding flashes of light and sudden silence. They call one of these massive, growing anomalies the Bermuda Triangle of space — and even NASA [US National Aeronautics and Space Administration] is now tracking it. 

With all the precisely tuned prowess of modern tech turning its eye toward these geomagnetic oddities, you might not expect that some key scientific insights about them could be locked inside a batch of 3,000-year-old Babylonian cuneiform tablets. But that’s exactly what a recently published study in Proceedings of the National Academy of Sciences suggests. 

This newly discovered connection between ancient Mesopotamian writing and modern physics is more than an amusing academic fluke. It highlights just how much is at stake for 21st-century scientific progress when budget-slashing lawmakers, university administrators and private industry investors shovel funding into STEM field development while neglecting — and in some case, actively destroying — the humanities.

… Despite advances in the past five years or so, archaeomagnetism is still methodologically complex and often tedious work, requiring cautious data sifting to arrive at accurate interpretations, the more accurate of which come from analyzing layers upon layers of strata. 

But when combined with the expertise of the humanities — from historians and linguists, to religious scholars and anthropologists? Archaeomagnetism opens up new worlds of study across all disciplines. 

In fact, the team’s results show that the strength of the magnetic field in Mesopotamia was more than one and a half times stronger than it is in the area today, with a massive spike happening sometime between 604 B.C. and 562 B.C. By combining the results of archaeomagnetic tests and the transcriptions of ancient languages on the bricks, the team was able to confirm this spike likely occurred during the reign of Nebuchadnezzar II.

Hand in hand with the sciences, the LIAA [Levantine Iron Age Anomaly] trail was illuminated by historical accounts of descriptively similar events, recorded from ancient authors as far west as the Iberian peninsula and well into Asia. Archaeomagnetism has now allowed researchers to not only confirm the presence of the LIAA in ancient Mesopotamia from 1050 to 550 B.C. — itself a first for science — but offers cultural historians a new way to verify and apply context to a vast tide of early scientific information.

Hodge further explores the importance of interdisciplinary work in her December 25, 2023 commentary, Note: Links have been removed,

The symbiotic interdependence between the humanities and sciences deepens further in the thicket of time when one considers that the original locations of the team’s fragments likely include the earliest known centers of astrology and mathematics in Sumeria, such as Nineveh near modern-day Mosul, Iraq. At the ancient city’s royal library of the Assyrian Empire, a site dating back to around 650 B.C., a trove of thousands of tablets were excavated in the mid-1800s containing precise astronomical data surpassing that found in any previous discovery.

Among those, the “Plough Star” tablets bear inscriptions dating to 687 B.C. and are the first known instances of humans tracking lunar and planetary orbits through both the solar ecliptic and 17 constellations. The same trove yielded the awe-striking collection known as the Astronomical Diaries, currently held in the Ashmolean Museum at Oxford, originating from near modern-day Baghdad. The oldest of which dates to 652 B.C. The latest, 61 B.C.

Hermann Hunger and David Pingree, the foremost historians on their excavation, minced no words on their value to modern science. 

“That someone in the middle of the eighth century BC conceived of such a scientific program and obtained support for it is truly astonishing; that it was designed so well is incredible; and that it was faithfully carried out for 700 years is miraculous,” they wrote.  

In his 2021 book, “A Scheme of Heaven,” data scientist Alexander Boxer cites the two historians and observes that the “enormity of this achievement” lay in the diaries’ preservation of a snapshot of celestial knowledge of the age which — paired with accounts of weather patterns, river water tables, grain prices and even political news — allow us to pinpoint historical events from thousands of years ago, in time-windows as narrow as just a day or two.

“Rivaled only by the extraordinary astronomical records from ancient China, the Babylonian Astronomical Diaries are one of, if not the longest continuous research program ever undertaken,” writes Boxer. 

The cuneiform tablets studied by the UCL team extend this interdisciplinary legacy of the sciences and humanities beautifully by allowing us to read not only the celestially relevant data of geomagnetic history, but by reaffirming the importance of early cultural studies. One fragment, for instance, is dedicated by Nebuchadnezzar II to a temple in Larsa. The site was devoted to carrying out astrological divination traditions, and it’s where we get our earliest clue about the authorship of the Astronomical Diaries. 

Charmingly, that clue appears in the court testimony of a temple official who gets scolded for sounding a false-alarm about an eclipse, embarrassing the temple scholars in front of the whole city.

These Neo-Assyrian and Old Babylonian astrologers gave us more than antics, though. In further records at Nineveh, they would ultimately help researchers at the University of Tsukuba [Japan] — some 2,700 years later — track what were likely massive solar magnetic storms in the area, enabled by geomagnetic disruptions that may be yet linked to the LIAA.

In their dutifully recorded daily observations, one astrologer records a “red cloud” while another tablet-writer observes that “red covers the sky” in Babylon.

“These were probably manifestations of what we call today stable auroral red arcs, consisting of light emitted by electrons in atmospheric oxygen atoms after being excited by intense magnetic fields,” the authors said. “These findings allow us to recreate the history of solar activity a century earlier than previously available records…This research can assist in our ability to predict future solar magnetic storms, which may damage satellites and other spacecraft.”

Hodge ends with an observation, from her December 25, 2023 commentary,

When universities short sell the arts and humanities, we humanities students might lose our poetry, but we can write more. The science folk, on the other hand, might cost themselves another 75 years of research and $70 billion in grants trying to re-invent the Babylonian wheel because the destruction of its historical blueprint was “an arts problem.”

If you have time, do read Hodge’s December 25, 2023 commentary.

Project M: over 1000 scientists, 110 schools, 800 samples, and the U.K.’s synchrotron

A January 29, 2024 Diamond Light Source (UK synchrotron) press release (also on EurekAlert) announced results from a major school citizen science project,

Results of a large-scale innovative Citizen Science experiment called Project M which involved over 1000 scientists, 800 samples and 110 UK secondary schools in a huge experiment will be published in the prestigious RSC (Royal Society of Chemistry) journal CrystEngComm on 29 January 2024. The paper is titled: “Project M: Investigating the effect of additives on calcium carbonate crystallisation through a school citizen science program”. The paper shares a giant set of results from the school citizen scientists who collaborated with a team at Diamond to find out how different additives affect the different forms of calcium carbonate produced. These additives affect the type of calcium carbonate that forms, and thus its properties and potential applications. Being able to easily produce different forms of calcium carbonate could be very important for manufacturing.

Lead authors Claire Murray, Visiting Scientist at Diamond, and Julia Parker, Diamond Principal Beamline Scientist and expert in calcium carbonate science, who conceptualised the project, analysed the data, and wrote and edited the manuscript, explain that despite nature’s ability to precisely control calcium carbonate formation in shells and skeletons, laboratories around the world are often unable to exert the same level of control over how calcium carbonate forms. Nature uses molecules like amino acids and proteins to direct the formation of calcium carbonate, so we were interested in discovering how some of these molecules affect the calcium carbonate that we make in the lab.

Project M engaged the students and teachers as scientists, making different samples of calcium carbonate under varying conditions with different additives. 800 of these samples were then analysed in just 24 hours in April 2017 using the X-ray powder diffraction technique on beamline I11 at Diamond Light Source, the UK’s national synchrotron. This created a giant set of results which form the basis of the publication. A systematic study of this scale has never been completed anywhere else in the world.

The goal of this project was to find out how using different additives like amino acids affects the structure of the calcium carbonate. The mineral has three main forms called ‘polymorphs’ – vaterite, calcite and aragonite – which can be identified using X-ray powder diffraction at Diamond’s beamline I11. Diamond Light Source produces one of the brightest X-ray beams on planet Earth, which allows scientists to understand the atomic structure of materials. Scientists come from all over the UK and further afield to use these X-rays – as well as infrared and ultraviolet light – to make better drugs, understand the natural world, and create futuristic materials. Understanding the impact of different additives on the production of polymorphs is of huge interest in industry, such as in manufacturing, medical applications such as tissue engineering and the design of drug-delivery systems, and even cosmetics.
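For the curious, here’s a rough sketch of what identifying a polymorph from its diffraction peaks can look like. The d-spacings are approximate values from standard reference patterns and the matching logic is my own simplification, not the Project M analysis pipeline,

```python
# Toy polymorph identification from X-ray powder diffraction peaks.
# Approximate characteristic d-spacings (in angstroms) for the three
# calcium carbonate polymorphs; a real analysis fits the full pattern.
REFERENCE_D_SPACINGS = {
    "calcite":   [3.04, 2.29, 2.10],
    "aragonite": [3.40, 3.27, 1.98],
    "vaterite":  [3.57, 3.30, 2.73],
}

def identify_polymorph(measured_peaks, tolerance=0.05):
    """Score each polymorph by how many of its reference peaks are matched
    by the measured d-spacings, then return the best-scoring polymorph."""
    def score(reference):
        return sum(any(abs(m - r) <= tolerance for m in measured_peaks)
                   for r in reference)
    return max(REFERENCE_D_SPACINGS, key=lambda p: score(REFERENCE_D_SPACINGS[p]))

# A sample whose strongest peaks sit near 3.56, 3.31 and 2.74 angstroms:
print(identify_polymorph([3.56, 3.31, 2.74]))  # -> "vaterite"
```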

However, mapping such a large parameter space, in terms of additive and concentration, requires the synthesis of a large number of samples and the provision of high throughput analysis techniques. It presented an exciting opportunity to collaborate with 110 secondary schools making real samples to showcase the high-throughput capability of the beamline, including rapid robotic changing of samples, which means diffraction patterns can be collected and samples changed in less than 90 seconds.

“The project was led by a scientific question we had,” explained Claire Murray. “The idea to involve school students and teaching staff in the preparation of the samples followed naturally as we know Chemistry projects are underrepresented in the citizen science space. The contribution that student citizen scientists can make to research should not be underestimated. These projects can provide a powerful way for researchers to access volumes of data they might struggle to collect otherwise, as well as inspiring future generations of scientists.”

The project was designed with kit and resources to support the schools to learn new techniques and knowledge and to provide them with space to interact and engage with the experiment. After analysis at Diamond, the students had the opportunity to look at their results, see their peaks and determine what sort of polymorphs they had produced, and compare their results with the results obtained by different samples and different schools at different locations in the UK.

Gry E. Christensen, former student and Project M Scientist at Didcot Girls’ School, Didcot commented; “It was an amazing journey and I recommend that if any other schools have a chance to help with a similar project, then jump on board, because it is a once in a lifetime opportunity for the students, and you feel you can make a positive change to the world.”

“The fact that we didn’t know the answer yet was a motivational factor for the students,” explains Claire Murray. “The teachers told us they took everything more seriously, because this was real science in action – it really meant something. They shared how the students were excited to translate their lab skills to this experiment and that the students were able to contextualise their learning from their prescribed textbooks and lab classes. Teachers also highlighted their own interest and curiosity as many of them have trained as chemists in their education. They appreciated the connection to real science for themselves and the opportunity for continued professional development.”

‘The project offered our pupils a unique opportunity to take part in genuine scientific research and should act as a blueprint for future projects that aim to engage young people in science beyond the classroom,’ adds Matthew Wainwright, teacher and Project M Scientist at Kettlethorpe High School, Wakefield.

Exploring the role of amino acids in directing crystallisation with the Project M Scientists was an opportunity and an honour for the authors. Julia Parker explained; “In our work we see how we can draw novel scientific conclusions regarding the effect of amino acids on the structure of calcite and vaterite calcium carbonate polymorphs. This ability to explore a wide parameter space in sample conditions, whilst providing continued educational and scientific engagement benefits for the students and teachers involved, can we hope in future be applied to other materials synthesis investigations.”

Project M enabled schools to carry out real research and do an experiment that had never been done before, in their own school laboratory. It was the first ‘citizen science’ project run by Diamond, which transported Diamond science to schools and enabled the production of a considerable set of results, which has now resulted in this successful publication in CrystEngComm.

Here’s a link to and a citation for the paper, Note: the Project M Scientists are listed as authors,

Project M: investigating the effect of additives on calcium carbonate crystallisation through a school citizen science program by Claire A. Murray, Project M Scientists, Laura Holland, Rebecca O’Brien, Alice Richards, Annabelle R. Baker, Mark Basham, David Bond, Leigh D. Connor, Sarah J. Day, Jacob Filik, Stuart Fisher, Peter Holloway, Karl Levik, Ronaldo Mercado, Jonathan Potter, Chiu C. Tang, Stephen P. Thompson, and Julia E. Parker. CrystEngComm, 2024, 26, 753-763 DOI: https://doi.org/10.1039/D3CE01173A First published: 29 Jan 2024

This paper is open access.

Detect lung cancer early by inhaling a nanosensor

The technology described in a January 5, 2024 news item on Nanowerk has not been tried in human clinical trials but early pre-clinical trial testing offers promise,

Using a new technology developed at MIT, diagnosing lung cancer could become as easy as inhaling nanoparticle sensors and then taking a urine test that reveals whether a tumor is present.

Key Takeaways

*This non-invasive approach may serve as an alternative or supplement to traditional CT scans, particularly beneficial in areas with limited access to advanced medical equipment.

*The technology focuses on detecting cancer-linked proteins in the lungs, with results obtainable through a simple paper test strip.

*Designed for early-stage lung cancer detection, the method has shown promise in animal models and may soon advance to human clinical trials.

*This innovation holds potential for significantly improving lung cancer screening and early detection, especially in low-resource settings.

A January 5, 2024 Massachusetts Institute of Technology (MIT) news release (also on EurekAlert), which originated the news item, goes on to provide some technical details,

The new diagnostic is based on nanosensors that can be delivered by an inhaler or a nebulizer. If the sensors encounter cancer-linked proteins in the lungs, they produce a signal that accumulates in the urine, where it can be detected with a simple paper test strip.

This approach could potentially replace or supplement the current gold standard for diagnosing lung cancer, low-dose computed tomography (CT). It could have an especially significant impact in low- and middle-income countries that don’t have widespread availability of CT scanners, the researchers say.

“Around the world, cancer is going to become more and more prevalent in low- and middle-income countries. The epidemiology of lung cancer globally is that it’s driven by pollution and smoking, so we know that those are settings where accessibility to this kind of technology could have a big impact,” says Sangeeta Bhatia, the John and Dorothy Wilson Professor of Health Sciences and Technology and of Electrical Engineering and Computer Science at MIT, and a member of MIT’s Koch Institute for Integrative Cancer Research and the Institute for Medical Engineering and Science.

Bhatia is the senior author of the paper, which appears today [January 5, 2024] in Science Advances. Qian Zhong, an MIT research scientist, and Edward Tan, a former MIT postdoc, are the lead authors of the study.

Inhalable particles

To help diagnose lung cancer as early as possible, the U.S. Preventive Services Task Force recommends that heavy smokers over the age of 50 undergo annual CT scans. However, not everyone in this target group receives these scans, and the high false-positive rate of the scans can lead to unnecessary, invasive tests.

Bhatia has spent the last decade developing nanosensors for use in diagnosing cancer and other diseases, and in this study, she and her colleagues explored the possibility of using them as a more accessible alternative to CT screening for lung cancer.

These sensors consist of polymer nanoparticles coated with a reporter, such as a DNA barcode, that is cleaved from the particle when the sensor encounters enzymes called proteases, which are often overactive in tumors. Those reporters eventually accumulate in the urine and are excreted from the body.

Previous versions of the sensors, which targeted other cancer sites such as the liver and ovaries, were designed to be given intravenously. For lung cancer diagnosis, the researchers wanted to create a version that could be inhaled, which could make it easier to deploy in lower resource settings.

“When we developed this technology, our goal was to provide a method that can detect cancer with high specificity and sensitivity, and also lower the threshold for accessibility, so that hopefully we can improve the resource disparity and inequity in early detection of lung cancer,” Zhong says.

To achieve that, the researchers created two formulations of their particles: a solution that can be aerosolized and delivered with a nebulizer, and a dry powder that can be delivered using an inhaler.

Once the particles reach the lungs, they are absorbed into the tissue, where they encounter any proteases that may be present. Human cells can express hundreds of different proteases, and some of them are overactive in tumors, where they help cancer cells to escape their original locations by cutting through proteins of the extracellular matrix. These cancerous proteases cleave DNA barcodes from the sensors, allowing the barcodes to circulate in the bloodstream until they are excreted in the urine.

In the earlier versions of this technology, the researchers used mass spectrometry to analyze the urine sample and detect DNA barcodes. However, mass spectrometry requires equipment that might not be available in low-resource areas, so for this version, the researchers created a lateral flow assay, which allows the barcodes to be detected using a paper test strip.

The researchers designed the strip to detect up to four different DNA barcodes, each of which indicates the presence of a different protease. No pre-treatment or processing of the urine sample is required, and the results can be read about 20 minutes after the sample is obtained.

“We were really pushing this assay to be point-of-care available in a low-resource setting, so the idea was to not do any sample processing, not do any amplification, just to be able to put the sample right on the paper and read it out in 20 minutes,” Bhatia says.

Accurate diagnosis

The researchers tested their diagnostic system in mice that are genetically engineered to develop lung tumors similar to those seen in humans. The sensors were administered 7.5 weeks after the tumors started to form, a time point that would likely correlate with stage 1 or 2 cancer in humans.

In their first set of experiments in the mice, the researchers measured the levels of 20 different sensors designed to detect different proteases. Using a machine learning algorithm to analyze those results, the researchers identified a combination of just four sensors that was predicted to give accurate diagnostic results. They then tested that combination in the mouse model and found that it could accurately detect early-stage lung tumors.
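The news release doesn’t name the machine learning method, but the general workflow – measure many protease sensors, then search for a small panel that still separates tumour-bearing from healthy samples – might look something like this scikit-learn sketch with simulated data (my illustration only),

```python
# Illustrative sketch: pick a small panel of sensors out of 20 that still
# separates tumour-bearing from healthy samples. Simulated data only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_samples, n_sensors = 60, 20
X = rng.normal(size=(n_samples, n_sensors))   # simulated urinary barcode signals
y = rng.integers(0, 2, size=n_samples)        # 1 = tumour present
X[y == 1, :4] += 1.5                          # pretend four sensors carry real signal

# Recursive feature elimination down to a four-sensor panel
selector = RFE(LogisticRegression(max_iter=1000), n_features_to_select=4)
selector.fit(X, y)
panel = np.where(selector.support_)[0]

score = cross_val_score(LogisticRegression(max_iter=1000),
                        X[:, panel], y, cv=5).mean()
print(f"selected sensors: {panel}, cross-validated accuracy: {score:.2f}")
```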

For use in humans, it’s possible that more sensors might be needed to make an accurate diagnosis, but that could be achieved by using multiple paper strips, each of which detects four different DNA barcodes, the researchers say.

The researchers now plan to analyze human biopsy samples to see if the sensor panels they are using would also work to detect human cancers. In the longer term, they hope to perform clinical trials in human patients. A company called Sunbird Bio has already run phase 1 trials on a similar sensor developed by Bhatia’s lab, for use in diagnosing liver cancer and a form of hepatitis known as nonalcoholic steatohepatitis (NASH).

In parts of the world where there is limited access to CT scanning, this technology could offer a dramatic improvement in lung cancer screening, especially since the results can be obtained during a single visit.

“The idea would be you come in and then you get an answer about whether you need a follow-up test or not, and we could get patients who have early lesions into the system so that they could get curative surgery or lifesaving medicines,” Bhatia says.

Here’s a link to and a citation for the paper,

Inhalable point-of-care urinary diagnostic platform by Qian Zhong, Edward K. W. Tan, Carmen Martin-Alonso, Tiziana Parisi, Liangliang Hao, Jesse D. Kirkpatrick, Tarek Fadel, Heather E. Fleming, Tyler Jacks, and Sangeeta N. Bhatia. Science Advances 5 Jan 2024 Vol 10, Issue 1 DOI: 10.1126/sciadv.adj9591

This paper is open access.

Sunbird Bio (the company mentioned in the news release) can be found here.

Resurrection consent for digital cloning of the dead

It’s a bit disconcerting to think that one might be resurrected, in this case digitally, but Dr Masaki Iwasaki has published a study on attitudes to digital cloning and resurrection consent, which could prove useful when establishing one’s final wishes.

A January 4, 2024 De Gruyter (publisher) press release (repurposed from a January 4, 2024 blog posting on De Gruyter.com) explains the idea and the study,

In a 2014 episode of sci-fi series Black Mirror, a grieving young widow reconnects with her dead husband using an app that trawls his social media history to mimic his online language, humor and personality. It works. She finds solace in the early interactions – but soon wants more.   

Such a scenario is no longer fiction. In 2017, the company Eternime aimed to create an avatar of a dead person using their digital footprint, but this “Skype for the dead” didn’t catch on. The machine-learning and AI algorithms just weren’t ready for it. Neither were we.

Now, in 2024, amid exploding use of Chat GPT-like programs, similar efforts are on the way. But should digital resurrection be allowed at all? And are we prepared for the legal battles over what constitutes consent?

In a study published in the Asian Journal of Law and Economics, Dr Masaki Iwasaki of Harvard Law School and currently an assistant professor at Seoul National University, explores how the deceased’s consent (or otherwise) affects attitudes to digital resurrection.

US adults were presented with scenarios where a woman in her 20s dies in a car accident. A company offers to bring a digital version of her back, but her consent is, at first, ambiguous. What should her friends decide?

Two options – one where the deceased has consented to digital resurrection and another where she hasn’t – were read by participants at random. They then answered questions about the social acceptability of bringing her back on a five-point rating scale, considering other factors such as ethics and privacy concerns.

Results showed that expressed consent shifted acceptability two points higher compared to dissent. “Although I expected societal acceptability for digital resurrection to be higher when consent was expressed, the stark difference in acceptance rates – 58% for consent versus 3% for dissent – was surprising,” says Iwasaki. “This highlights the crucial role of the deceased’s wishes in shaping public opinion on digital resurrection.”
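For anyone curious about the mechanics, the design is a simple between-subjects comparison: each participant saw one scenario and rated it on the five-point scale. A rough sketch of how such responses could be summarized (with invented ratings, not Iwasaki’s data),

```python
# Rough sketch of the between-subjects comparison: each participant saw
# either the "consent" or the "dissent" scenario and rated acceptability
# on a 1-5 scale. The ratings below are invented for illustration.
consent_ratings = [5, 4, 2, 5, 3, 4, 5, 2, 4, 3]
dissent_ratings = [1, 2, 1, 2, 1, 3, 1, 2, 1, 2]

def mean(xs):
    return sum(xs) / len(xs)

def share_acceptable(xs, threshold=4):
    """Share of participants rating the scenario 4 or 5 ('acceptable')."""
    return sum(x >= threshold for x in xs) / len(xs)

print(f"mean shift: {mean(consent_ratings) - mean(dissent_ratings):.1f} points")
print(f"acceptable with consent: {share_acceptable(consent_ratings):.0%}")
print(f"acceptable with dissent: {share_acceptable(dissent_ratings):.0%}")
```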

In fact, 59% of respondents disagreed with their own digital resurrection, and around 40% of respondents did not find any kind of digital resurrection socially acceptable, even with expressed consent. “While the will of the deceased is important in determining the societal acceptability of digital resurrection, other factors such as ethical concerns about life and death, along with general apprehension towards new technology are also significant,” says Iwasaki.  

The results reflect a discrepancy between existing law and public sentiment. People’s general feelings – that the dead’s wishes should be respected – are actually not protected in most countries. The digitally recreated John Lennon in the film Forrest Gump, or animated hologram of Amy Winehouse reveal the ‘rights’ of the dead are easily overridden by those in the land of the living.

So, is your digital destiny something to consider when writing your will? It probably should be but in the current absence of clear legal regulations on the subject, the effectiveness of documenting your wishes in such a way is uncertain. For a start, how such directives are respected varies by legal jurisdiction. “But for those with strong preferences documenting their wishes could be meaningful,” says Iwasaki. “At a minimum, it serves as a clear communication of one’s will to family and associates, and may be considered when legal foundations are better established in the future.”

It’s certainly a conversation worth having now. Many generative AI chatbot services, such as Replika (“The AI companion who cares”) and Project December (“Simulate the dead”), already enable conversations with chatbots replicating real people’s personalities. The service ‘You, Only Virtual’ (YOV) allows users to upload someone’s text messages, emails and voice conversations to create a ‘versona’ chatbot. And, in 2020, Microsoft obtained a patent to create chatbots from text, voice and image data for living people as well as for historical figures and fictional characters, with the option of rendering in 2D or 3D.

Iwasaki says he’ll investigate this and the digital resurrection of celebrities in future research. “It’s necessary first to discuss what rights should be protected, to what extent, then create rules accordingly,” he explains. “My research, building upon prior discussions in the field, argues that the opt-in rule requiring the deceased’s consent for digital resurrection might be one way to protect their rights.”

There is a link to the study in the press release above, but here is a citation, of sorts,

Digital Cloning of the Dead: Exploring the Optimal Default Rule by Masaki Iwasaki. Asian Journal of Law and Economics DOI: https://doi.org/10.1515/ajle-2023-0125 Published Online: 2023-12-27

This paper is open access.

How can ballet performances become more accessible? Put a sensor suit on the dancers*

While this December 20, 2023 news item on phys.org is oriented to Christmas, it applies to much more,

Throughout the festive season, countless individuals delight in the enchantment of ballet spectacles such as “The Nutcracker.” Though the stories of timeless performances are widely known, general audiences often miss the subtle narratives and emotions dancers seek to convey through body movements—and they miss even more when the narratives are not based on well-known stories.

This prompts the question: how can dance performances become more accessible for people who are not specialists? [emphasis mine]

Researchers think they have the answer, which involves putting dancers in sensor suits.

Putting dancers into sensor suits would not have been my first answer to that question.

A December 20, 2023 Loughborough University (UK) press release, which originated the news item, describes the international research project, the Kinesemiotic Body, and its sensor suits, Note: A link has been removed,

Loughborough University academics are working with the English National Ballet and the University of Bremen [Germany] to develop software that will allow people to understand the deeper meanings of performances by watching annotated CGI [computer-generated imagery] videos of different dances.

Leading this endeavour is former professional ballerina Dr Arianna Maiorani, an expert in ‘Kinesemiotics’ – the study of meaning conveyed through movement – and the creator of the ‘Functional Grammar of Dance’ (FGD), a model that deciphers meaning from dance movements.

Dr Maiorani believes the FGD – which is informed by linguistics and semiotics (the study of sign-based communication) theories – can help create visualisations of ‘projections’ happening during dance performances to help people understand what the dance means.

“Projections are like speech bubbles made by movement”, explains Dr Maiorani, “They are used by dancers to convey messages and involve extending body parts towards significant areas within the performance space.

“For example, a dancer is moving towards a lake, painted on the backdrop of a stage. They extend an arm forward towards the lake and a leg backwards towards a stage prop representing a shed. The extended arm means they are going to the lake, while the leg means they are coming from the shed.

“Using the Functional Grammar of Dance, we can annotate dances – filling the projection speech bubbles with meaning that people can understand without having background knowledge of dance.”
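To make the idea of a projection annotation concrete, here’s my own much-simplified illustration of what such an annotation could look like as data; it is not the actual Functional Grammar of Dance notation,

```python
# A toy representation of a "projection": a body part extended towards a
# meaningful area of the stage, annotated with the message it conveys.
# This is an illustration only, not the Functional Grammar of Dance itself.
from dataclasses import dataclass

@dataclass
class Projection:
    body_part: str   # e.g. "right arm"
    direction: str   # e.g. "forward"
    target: str      # e.g. "lake (backdrop)"
    meaning: str     # the "speech bubble" shown to the audience

annotations = [
    Projection("right arm", "forward", "lake (backdrop)", "I am going to the lake"),
    Projection("left leg", "backward", "shed (stage prop)", "I have come from the shed"),
]

for p in annotations:
    print(f'{p.body_part} -> {p.target}: "{p.meaning}"')
```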

Dr Maiorani and a team of computer science and technology experts – including Loughborough’s Professor Massimiliano Zecca, Dr Russell Lock, and Dr Chun Liu – have been creating CGI videos of English National Ballet dancers to use with the FGD.

This involved getting dancers – including First Soloist Junor Souza and First Artist Rebecca Blenkinsop – to perform individual movements and phrases while wearing sensors on their head, torso, and limbs.

Using the FGD, they decoded the conveyed meanings behind different movements and annotated the CGI videos accordingly.

The researchers are now investigating how these videos can facilitate engagement for audiences with varying levels of dance familiarity, aiming to eventually transform this research into software for the general public.

Of the ultimate goal for the research, Dr Maiorani said: “We hope that our work will improve our understanding of how we all communicate with our body movement, and that this will bring more people closer to the art of ballet.”

The Loughborough team worked with experts from the University of Bremen including Professor John Bateman and Ms Dayana Markhabayeva, and experts from English National Ballet. The research was funded by the AHRC-DFG and supported by the LU Institute of Advanced Studies.

They are also looking at how the FGD can be used in performance and circus studies, as well as analysing character movements within video games to determine any gender biases.

You can find the Kinesemiotic Body here, where you’ll find this academic project description, Note: Links have been removed,

The Kinesemiotic Body is a joint research project funded by Deutsche Forschungsgemeinschaft (DFG) and the Arts & Humanities Research Council (AHRC) in cooperation with the English National Ballet (ENB). The project brings together an interdisciplinary group of researchers with the aim of evaluating whether a description of dance discourse informed by multimodal discourse analysis and visualised through enriched videos can capture the way dance communicates through a flow of choreographed sequences in space, and whether this description can support the interpretative process of nonexpert audiences. The theoretical framework of the research project is based on an extended dynamic theory called segmented discourse representation theory (SDRT) and on the Functional Grammar of Dance Movement created by Project Investigator Maiorani. The project’s long-term goal is to develop an interdisciplinary area of research focusing on movement-based communication that can extend beyond the study of dance to other movement-based forms of communication and performance and foster the creation of partnerships between academia and the institutions that host and promote such disciplines.

It’s been a while since I’ve had a piece that touches on multimodal discourse.

*March 20, 2024 1630: Head changed from “How can ballet performances become more accessible? Put on a sensor suit on the dancers*” to “How can ballet performances become more accessible? Put a sensor suit on the dancers”

Brainlike transistor and human intelligence

This brainlike transistor (not a memristor) is important because it functions at room temperature as opposed to others, which require cryogenic temperatures.

A December 20, 2023 Northwestern University news release (received via email; also on EurekAlert) fills in the details,

  • Researchers develop transistor that simultaneously processes and stores information like the human brain
  • Transistor goes beyond categorization tasks to perform associative learning
  • Transistor identified similar patterns, even when given imperfect input
  • Previous similar devices could only operate at cryogenic temperatures; new transistor operates at room temperature, making it more practical

EVANSTON, Ill. — Taking inspiration from the human brain, researchers have developed a new synaptic transistor capable of higher-level thinking.

Designed by researchers at Northwestern University, Boston College and the Massachusetts Institute of Technology (MIT), the device simultaneously processes and stores information just like the human brain. In new experiments, the researchers demonstrated that the transistor goes beyond simple machine-learning tasks to categorize data and is capable of performing associative learning.

Although previous studies have leveraged similar strategies to develop brain-like computing devices, those transistors cannot function outside cryogenic temperatures. The new device, by contrast, is stable at room temperatures. It also operates at fast speeds, consumes very little energy and retains stored information even when power is removed, making it ideal for real-world applications.

The study was published today (Dec. 20 [2023]) in the journal Nature.

“The brain has a fundamentally different architecture than a digital computer,” said Northwestern’s Mark C. Hersam, who co-led the research. “In a digital computer, data move back and forth between a microprocessor and memory, which consumes a lot of energy and creates a bottleneck when attempting to perform multiple tasks at the same time. On the other hand, in the brain, memory and information processing are co-located and fully integrated, resulting in orders of magnitude higher energy efficiency. Our synaptic transistor similarly achieves concurrent memory and information processing functionality to more faithfully mimic the brain.”

Hersam is the Walter P. Murphy Professor of Materials Science and Engineering at Northwestern’s McCormick School of Engineering. He also is chair of the department of materials science and engineering, director of the Materials Research Science and Engineering Center and member of the International Institute for Nanotechnology. Hersam co-led the research with Qiong Ma of Boston College and Pablo Jarillo-Herrero of MIT.

Recent advances in artificial intelligence (AI) have motivated researchers to develop computers that operate more like the human brain. Conventional, digital computing systems have separate processing and storage units, causing data-intensive tasks to devour large amounts of energy. With smart devices continuously collecting vast quantities of data, researchers are scrambling to uncover new ways to process it all without consuming an increasing amount of power. Currently, the memory resistor, or “memristor,” is the most well-developed technology that can perform combined processing and memory function. But memristors still suffer from energy costly switching.

“For several decades, the paradigm in electronics has been to build everything out of transistors and use the same silicon architecture,” Hersam said. “Significant progress has been made by simply packing more and more transistors into integrated circuits. You cannot deny the success of that strategy, but it comes at the cost of high power consumption, especially in the current era of big data where digital computing is on track to overwhelm the grid. We have to rethink computing hardware, especially for AI and machine-learning tasks.”

To rethink this paradigm, Hersam and his team explored new advances in the physics of moiré patterns, a type of geometrical design that arises when two patterns are layered on top of one another. When two-dimensional materials are stacked, new properties emerge that do not exist in one layer alone. And when those layers are twisted to form a moiré pattern, unprecedented tunability of electronic properties becomes possible.

For the new device, the researchers combined two different types of atomically thin materials: bilayer graphene and hexagonal boron nitride. When stacked and purposefully twisted, the materials formed a moiré pattern. By rotating one layer relative to the other, the researchers could achieve different electronic properties in each graphene layer even though they are separated by only atomic-scale dimensions. With the right choice of twist, researchers harnessed moiré physics for neuromorphic functionality at room temperature.

“With twist as a new design parameter, the number of permutations is vast,” Hersam said. “Graphene and hexagonal boron nitride are very similar structurally but just different enough that you get exceptionally strong moiré effects.”

To test the transistor, Hersam and his team trained it to recognize similar — but not identical — patterns. Just earlier this month, Hersam introduced a new nanoelectronic device capable of analyzing and categorizing data in an energy-efficient manner, but his new synaptic transistor takes machine learning and AI one leap further.

“If AI is meant to mimic human thought, one of the lowest-level tasks would be to classify data, which is simply sorting into bins,” Hersam said. “Our goal is to advance AI technology in the direction of higher-level thinking. Real-world conditions are often more complicated than current AI algorithms can handle, so we tested our new devices under more complicated conditions to verify their advanced capabilities.”

First the researchers showed the device one pattern: 000 (three zeros in a row). Then, they asked the AI to identify similar patterns, such as 111 or 101. “If we trained it to detect 000 and then gave it 111 and 101, it knows 111 is more similar to 000 than 101,” Hersam explained. “000 and 111 are not exactly the same, but both are three digits in a row. Recognizing that similarity is a higher-level form of cognition known as associative learning.”
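A small aside for anyone puzzling over that example: 111 shares no matching digits with 000, so the judged similarity is about structure (three identical digits in a row) rather than digit-by-digit agreement. Here’s a tiny sketch of how such a structural comparison could be expressed; it’s my own illustration, not the device’s actual behaviour,

```python
# Structural (associative) similarity: compare whether neighbouring digits
# repeat, rather than comparing the digits themselves. "000" and "111"
# share the same repeat structure; "101" does not. Illustration only.
def repeat_structure(pattern):
    """For each adjacent pair of digits, record whether the two are equal."""
    return [a == b for a, b in zip(pattern, pattern[1:])]

def structural_similarity(p, q):
    """Fraction of positions where the repeat structures agree."""
    s, t = repeat_structure(p), repeat_structure(q)
    return sum(a == b for a, b in zip(s, t)) / len(s)

print(structural_similarity("000", "111"))  # 1.0 -- same structure
print(structural_similarity("000", "101"))  # 0.0 -- different structure
```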

In experiments, the new synaptic transistor successfully recognized similar patterns, displaying its associative memory. Even when the researchers threw curveballs — like giving it incomplete patterns — it still successfully demonstrated associative learning.

“Current AI can be easy to confuse, which can cause major problems in certain contexts,” Hersam said. “Imagine if you are using a self-driving vehicle, and the weather conditions deteriorate. The vehicle might not be able to interpret the more complicated sensor data as well as a human driver could. But even when we gave our transistor imperfect input, it could still identify the correct response.”

The study, “Moiré synaptic transistor with room-temperature neuromorphic functionality,” was primarily supported by the National Science Foundation.

Here’s a link to and a citation for the paper,

Moiré synaptic transistor with room-temperature neuromorphic functionality by Xiaodong Yan, Zhiren Zheng, Vinod K. Sangwan, Justin H. Qian, Xueqiao Wang, Stephanie E. Liu, Kenji Watanabe, Takashi Taniguchi, Su-Yang Xu, Pablo Jarillo-Herrero, Qiong Ma & Mark C. Hersam. Nature volume 624, pages 551–556 (2023) DOI: https://doi.org/10.1038/s41586-023-06791-1 Published online: 20 December 2023 Issue Date: 21 December 2023

This paper is behind a paywall.

Striking similarity between memory processing of artificial intelligence (AI) models and hippocampus of the human brain

A December 18, 2023 news item on ScienceDaily shifted my focus from hardware to software when considering memory in brainlike (neuromorphic) computing,

An interdisciplinary team consisting of researchers from the Center for Cognition and Sociality and the Data Science Group within the Institute for Basic Science (IBS) [Korea] revealed a striking similarity between the memory processing of artificial intelligence (AI) models and the hippocampus of the human brain. This new finding provides a novel perspective on memory consolidation, which is a process that transforms short-term memories into long-term ones, in AI systems.

A November 28 (?), 2023 IBS press release (also on EurekAlert but published December 18, 2023), which originated the news item, describes how the team went about its research,

In the race towards developing Artificial General Intelligence (AGI), with influential entities like OpenAI and Google DeepMind leading the way, understanding and replicating human-like intelligence has become an important research interest. Central to these technological advancements is the Transformer model [Figure 1], whose fundamental principles are now being explored in new depth.

The key to powerful AI systems is grasping how they learn and remember information. The team applied principles of human brain learning, specifically concentrating on memory consolidation through the NMDA receptor in the hippocampus, to AI models.

The NMDA receptor is like a smart door in your brain that facilitates learning and memory formation. When a brain chemical called glutamate is present, the nerve cell undergoes excitation. On the other hand, a magnesium ion acts as a small gatekeeper blocking the door. Only when this ionic gatekeeper steps aside, substances are allowed to flow into the cell. This is the process that allows the brain to create and keep memories, and the gatekeeper’s (the magnesium ion) role in the whole process is quite specific.

The team made a fascinating discovery: the Transformer model seems to use a gatekeeping process similar to the brain’s NMDA receptor [see Figure 1]. This revelation led the researchers to investigate if the Transformer’s memory consolidation can be controlled by a mechanism similar to the NMDA receptor’s gating process.

In the animal brain, a low magnesium level is known to weaken memory function. The researchers found that long-term memory in Transformer can be improved by mimicking the NMDA receptor. Just like in the brain, where changing magnesium levels affect memory strength, tweaking the Transformer’s parameters to reflect the gating action of the NMDA receptor led to enhanced memory in the AI model. This breakthrough finding suggests that how AI models learn can be explained with established knowledge in neuroscience.
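The press release doesn’t give the exact functional form the team used, but the gist – a feed-forward nonlinearity whose gate is controlled by a magnesium-like parameter – can be sketched roughly as follows. This is my assumption of a sigmoid-gated form; the paper’s parameterization may differ,

```python
# Rough sketch of an NMDA-receptor-inspired nonlinearity for a Transformer
# feed-forward layer: the input passes through a gate whose openness is
# controlled by a magnesium-like parameter alpha. The exact form used in
# the paper may differ; this only illustrates the idea.
import numpy as np

def nmda_like(x, alpha=1.0, beta=1.0):
    """x * sigmoid(beta * x - alpha): small or negative inputs are blocked
    (the 'magnesium gate'); strong inputs pass through almost linearly."""
    return x / (1.0 + np.exp(-(beta * x - alpha)))

x = np.linspace(-4, 4, 9)
for alpha in (0.5, 2.0):          # lower alpha ~ weaker magnesium block
    print(alpha, np.round(nmda_like(x, alpha=alpha), 2))
```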

C. Justin LEE, who is a neuroscientist director at the institute, said, “This research makes a crucial step in advancing AI and neuroscience. It allows us to delve deeper into the brain’s operating principles and develop more advanced AI systems based on these insights.”

CHA Meeyoung, who is a data scientist in the team and at KAIST [Korea Advanced Institute of Science and Technology], notes, “The human brain is remarkable in how it operates with minimal energy, unlike the large AI models that need immense resources. Our work opens up new possibilities for low-cost, high-performance AI systems that learn and remember information like humans.”

What sets this study apart is its initiative to incorporate brain-inspired nonlinearity into an AI construct, signifying a significant advancement in simulating human-like memory consolidation. The convergence of human cognitive mechanisms and AI design not only holds promise for creating low-cost, high-performance AI systems but also provides valuable insights into the workings of the brain through AI models.

Fig. 1: (a) Diagram illustrating the ion channel activity in post-synaptic neurons. AMPA receptors are involved in the activation of post-synaptic neurons, while NMDA receptors are blocked by magnesium ions (Mg²⁺) but induce synaptic plasticity through the influx of calcium ions (Ca²⁺) when the post-synaptic neuron is sufficiently activated. (b) Flow diagram representing the computational process within the Transformer AI model. Information is processed sequentially through stages such as feed-forward layers, layer normalization, and self-attention layers. The graph depicting the current-voltage relationship of the NMDA receptors is very similar to the nonlinearity of the feed-forward layer. The input-output graph, based on the concentration of magnesium (α), shows the changes in the nonlinearity of the NMDA receptors. Courtesy: IBS

This research was presented at the 37th Conference on Neural Information Processing Systems (NeurIPS 2023) before being published in the proceedings. I found a PDF of the presentation and an early online copy of the paper before locating the paper in the published proceedings.

PDF of presentation: Transformer as a hippocampal memory consolidation model based on NMDAR-inspired nonlinearity

PDF copy of paper:

Transformer as a hippocampal memory consolidation model based on NMDAR-inspired nonlinearity by Dong-Kyum Kim, Jea Kwon, Meeyoung Cha, C. Justin Lee.

This paper was made available on OpenReview.net:

OpenReview is a platform for open peer review, open publishing, open access, open discussion, open recommendations, open directory, open API and open source.

It’s not clear to me if this paper is finalized or not and I don’t know if its presence on OpenReview constitutes publication.

Finally, the paper published in the proceedings,

Transformer as a hippocampal memory consolidation model based on NMDAR-inspired nonlinearity by Dong Kyum Kim, Jea Kwon, Meeyoung Cha, C. Justin Lee. Part of Advances in Neural Information Processing Systems 36 (NeurIPS 2023) Main Conference Track

This link will take you to the abstract, access the paper by clicking on the Paper tab.