
Robot radiologists (artificially intelligent doctors)

Mutaz Musa, a physician at New York Presbyterian Hospital/Weill Cornell (Department of Emergency Medicine) and software developer in New York City, has penned an eye-opening opinion piece about artificial intelligence (or robots if you prefer) and the field of radiology. From a June 25, 2018 opinion piece for The Scientist (Note: Links have been removed),

Although artificial intelligence has raised fears of job loss for many, we doctors have thus far enjoyed a smug sense of security. There are signs, however, that the first wave of AI-driven redundancies among doctors is fast approaching. And radiologists seem to be first on the chopping block.

Andrew Ng, founder of online learning platform Coursera and former chief scientist of “China’s Google,” Baidu, recently announced the development of CheXNet, a convolutional neural net capable of recognizing pneumonia and other thoracic pathologies on chest X-rays better than human radiologists. Earlier this year, a Hungarian group developed a similar system for detecting and classifying features of breast cancer in mammograms. In 2017, Adelaide University researchers published details of a bot capable of matching human radiologist performance in detecting hip fractures. And, of course, Google achieved superhuman proficiency in detecting diabetic retinopathy in fundus photographs, a task outside the scope of most radiologists.

Beyond single, two-dimensional radiographs, a team at Oxford University developed a system for detecting spinal disease from MRI data with a performance equivalent to a human radiologist. Meanwhile, researchers at the University of California, Los Angeles, reported detecting pathology on head CT scans with an error rate more than 20 times lower than a human radiologist.

Although these particular projects are still in the research phase and far from perfect—for instance, often pitting their machines against a limited number of radiologists—the pace of progress alone is telling.

Others have already taken their algorithms out of the lab and into the marketplace. Enlitic, founded by Aussie serial entrepreneur and University of San Francisco researcher Jeremy Howard, is a Bay-Area startup that offers automated X-ray and chest CAT scan interpretation services. Enlitic’s systems putatively can judge the malignancy of nodules up to 50 percent more accurately than a panel of radiologists and identify fractures so small they’d typically be missed by the human eye. One of Enlitic’s largest investors, Capitol Health, owns a network of diagnostic imaging centers throughout Australia, anticipating the broad rollout of this technology. Another Bay-Area startup, Arterys, offers cloud-based medical imaging diagnostics. Arterys’s services extend beyond plain films to cardiac MRIs and CAT scans of the chest and abdomen. And there are many others.

Musa has offered a compelling argument with lots of links to supporting evidence.

[downloaded from https://www.the-scientist.com/news-opinion/opinion–rise-of-the-robot-radiologists-64356]
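For the technically curious, the CheXNet paper describes a DenseNet-121 convolutional network fine-tuned to score 14 thoracic pathologies (pneumonia among them) on each chest film. Here’s a minimal PyTorch sketch of that kind of multi-label classifier; it illustrates the general recipe, not the Stanford team’s actual code,

```python
# Sketch of a CheXNet-style chest X-ray classifier: a DenseNet-121
# backbone with its 1000-class ImageNet head swapped for a 14-way
# multi-label head, one output per thoracic pathology.
import torch
import torch.nn as nn
from torchvision import models

NUM_PATHOLOGIES = 14  # pneumonia is one of the 14 ChestX-ray14 labels

class ChestXRayClassifier(nn.Module):
    def __init__(self, num_labels: int = NUM_PATHOLOGIES):
        super().__init__()
        # In practice you would start from ImageNet weights; random
        # initialization keeps this sketch runnable offline.
        self.backbone = models.densenet121(weights=None)
        in_features = self.backbone.classifier.in_features
        self.backbone.classifier = nn.Linear(in_features, num_labels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Sigmoid rather than softmax: each pathology is scored
        # independently, since one film can show several findings.
        return torch.sigmoid(self.backbone(x))

model = ChestXRayClassifier()
dummy_film = torch.randn(1, 3, 224, 224)  # one RGB-converted chest film
probabilities = model(dummy_film)          # shape: (1, 14)
```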

And the evidence keeps mounting; I just stumbled across this June 30, 2018 news item on Xinhuanet.com,

An artificial intelligence (AI) system scored 2:0 against elite human physicians Saturday in two rounds of competitions in diagnosing brain tumors and predicting hematoma expansion in Beijing.

The BioMind AI system, developed by the Artificial Intelligence Research Centre for Neurological Disorders at the Beijing Tiantan Hospital and a research team from the Capital Medical University, made correct diagnoses in 87 percent of 225 cases in about 15 minutes, while a team of 15 senior doctors only achieved 66-percent accuracy.

The AI also gave correct predictions in 83 percent of brain hematoma expansion cases, outperforming the 63-percent accuracy among a group of physicians from renowned hospitals across the country.

The outcomes for human physicians were quite normal and even better than the average accuracy in ordinary hospitals, said Gao Peiyi, head of the radiology department at Tiantan Hospital, a leading institution on neurology and neurosurgery.

To train the AI, developers fed it tens of thousands of images of nervous system-related diseases that the Tiantan Hospital has archived over the past 10 years, making it capable of diagnosing common neurological diseases such as meningioma and glioma with an accuracy rate of over 90 percent, comparable to that of a senior doctor.

All the cases were real and contributed by the hospital, but never used as training material for the AI, according to the organizer.

Wang Yongjun, executive vice president of the Tiantan Hospital, said that he personally did not care very much about who won, because the contest was never intended to pit humans against technology but to help doctors learn and improve [emphasis mine] through interactions with technology.

“I hope through this competition, doctors can experience the power of artificial intelligence. This is especially so for some doctors who are skeptical about artificial intelligence. I hope they can further understand AI and eliminate their fears toward it,” said Wang.

Dr. Lin Yi, who participated and lost in the second round, said that she welcomes AI, as it is not a threat but a “friend.” [emphasis mine]

AI will not only reduce the workload but also push doctors to keep learning and improve their skills, said Lin.

Bian Xiuwu, an academician with the Chinese Academy of Sciences and a member of the competition’s jury, said there has never been an absolute standard correct answer in diagnosing developing diseases, and the AI would only serve as an assistant to doctors in giving preliminary results. [emphasis mine]

Dr. Paul Parizel, former president of the European Society of Radiology and another member of the jury, also agreed that AI will not replace doctors but will instead function much as GPS does for drivers. [emphasis mine]

Dr. Gauden Galea, representative of the World Health Organization in China, said AI is an exciting tool for healthcare but still in the primitive stages.

Based on the size of its population and the huge volume of accessible digital medical data, China has a unique advantage in developing medical AI, according to Galea.

China has introduced a series of plans for developing AI applications in recent years.

In 2017, the State Council issued a development plan on the new generation of Artificial Intelligence and the Ministry of Industry and Information Technology also issued the “Three-Year Action Plan for Promoting the Development of a New Generation of Artificial Intelligence (2018-2020).”

The Action Plan proposed developing medical image-assisted diagnostic systems to support medicine in various fields.

I note the reference to cars and global positioning systems (GPS) and their role as ‘helpers’; it seems no one at the ‘AI and radiology’ competition has heard of driverless cars. Here’s Musa on those reassuring comments about how the technology won’t replace experts but rather augment their skills,

To be sure, these services frame themselves as “support products” that “make doctors faster,” rather than replacements that make doctors redundant. This language may reflect a reserved view of the technology, though it likely also represents a marketing strategy keen to avoid threatening or antagonizing incumbents. After all, many of the customers themselves, for now, are radiologists.

Radiology isn’t the only area where experts might find themselves displaced.

Eye experts

It seems artificial intelligence (AI) systems have made inroads into the diagnosis of eye diseases. The work got the ‘Fast Company’ treatment (exciting new tech, learn all about it), as can be seen further down in this posting. First, here’s a more restrained announcement, from an August 14, 2018 news item on phys.org (Note: A link has been removed),

An artificial intelligence (AI) system that can recommend the correct referral decision for more than 50 eye diseases as accurately as experts has been developed by Moorfields Eye Hospital NHS Foundation Trust, DeepMind Health and UCL [University College London].

The breakthrough research, published online by Nature Medicine, describes how machine-learning technology has been successfully trained on thousands of historic de-personalised eye scans to identify features of eye disease and recommend how patients should be referred for care.

Researchers hope the technology could one day transform the way professionals carry out eye tests, allowing them to spot conditions earlier and prioritise patients with the most serious eye diseases before irreversible damage sets in.

An August 13, 2018 UCL press release, which originated the news item, describes the research and the reasons behind it in more detail,

More than 285 million people worldwide live with some form of sight loss, including more than two million people in the UK. Eye diseases remain one of the biggest causes of sight loss, and many can be prevented with early detection and treatment.

Dr Pearse Keane, NIHR Clinician Scientist at the UCL Institute of Ophthalmology and consultant ophthalmologist at Moorfields Eye Hospital NHS Foundation Trust said: “The number of eye scans we’re performing is growing at a pace much faster than human experts are able to interpret them. There is a risk that this may cause delays in the diagnosis and treatment of sight-threatening diseases, which can be devastating for patients.”

“The AI technology we’re developing is designed to prioritise patients who need to be seen and treated urgently by a doctor or eye care professional. If we can diagnose and treat eye conditions early, it gives us the best chance of saving people’s sight. With further research it could lead to greater consistency and quality of care for patients with eye problems in the future.”

The study, launched in 2016, brought together leading NHS eye health professionals and scientists from UCL and the National Institute for Health Research (NIHR) with some of the UK’s top technologists at DeepMind to investigate whether AI technology could help improve the care of patients with sight-threatening diseases, such as age-related macular degeneration and diabetic eye disease.

Using two types of neural network – mathematical systems for identifying patterns in images or data – the AI system quickly learnt to identify 10 features of eye disease from highly complex optical coherence tomography (OCT) scans. The system was then able to recommend a referral decision based on the most urgent conditions detected.

To establish whether the AI system was making correct referrals, clinicians also viewed the same OCT scans and made their own referral decisions. The study concluded that AI was able to make the right referral recommendation more than 94% of the time, matching the performance of expert clinicians.

The AI has been developed with two unique features which maximise its potential use in eye care. Firstly, the system can provide information that helps explain to eye care professionals how it arrives at its recommendations. This information includes visuals of the features of eye disease it has identified on the OCT scan and the level of confidence the system has in its recommendations, in the form of a percentage. This functionality is crucial in helping clinicians scrutinise the technology’s recommendations and check its accuracy before deciding the type of care and treatment a patient receives.

Secondly, the AI system can be easily applied to different types of eye scanner, not just the specific model on which it was trained. This could significantly increase the number of people who benefit from this technology and future-proof it, so it can still be used even as OCT scanners are upgraded or replaced over time.

The next step is for the research to go through clinical trials to explore how this technology might improve patient care in practice, and regulatory approval before it can be used in hospitals and other clinical settings.

If clinical trials are successful in demonstrating that the technology can be used safely and effectively, Moorfields will be able to use an eventual, regulatory-approved product for free, across all 30 of their UK hospitals and community clinics, for an initial period of five years.

The work that has gone into this project will also help accelerate wider NHS research for many years to come. For example, DeepMind has invested significant resources to clean, curate and label Moorfields’ de-identified research dataset to create one of the most advanced eye research databases in the world.

Moorfields owns this database as a non-commercial public asset, which is already forming the basis of nine separate medical research studies. In addition, Moorfields can also use DeepMind’s trained AI model for future non-commercial research efforts, which could help advance medical research even further.

Mustafa Suleyman, Co-founder and Head of Applied AI at DeepMind Health, said: “We set up DeepMind Health because we believe artificial intelligence can help solve some of society’s biggest health challenges, like avoidable sight loss, which affects millions of people across the globe. These incredibly exciting results take us one step closer to that goal and could, in time, transform the diagnosis, treatment and management of patients with sight-threatening eye conditions, not just at Moorfields, but around the world.”

Professor Sir Peng Tee Khaw, director of the NIHR Biomedical Research Centre at Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology said: “The results of this pioneering research with DeepMind are very exciting and demonstrate the potential sight-saving impact AI could have for patients. I am in no doubt that AI has a vital role to play in the future of healthcare, particularly when it comes to training and helping medical professionals so that patients benefit from vital treatment earlier than might previously have been possible. This shows the transformative research that can be carried out in the UK combining world-leading industry and NIHR/NHS hospital/university partnerships.”

Matt Hancock, Health and Social Care Secretary, said: “This is hugely exciting and exactly the type of technology which will benefit the NHS in the long term and improve patient care – that’s why we fund over a billion pounds a year in health research as part of our long term plan for the NHS.”
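The press release doesn’t show any of the machinery, so here is a minimal, hypothetical PyTorch sketch of the two-network design it describes: a segmentation network that turns the raw OCT volume into a device-independent tissue map, and a second network that turns that map into a referral recommendation with a confidence percentage. The layer sizes, the 15 tissue classes, and the four referral categories below are my placeholders, not the published model,

```python
# Hypothetical two-stage OCT pipeline: segment tissue, then classify
# the tissue map into a referral decision with a confidence score.
import torch
import torch.nn as nn

N_TISSUE_TYPES = 15  # placeholder count of tissue/pathology classes
REFERRALS = ["urgent", "semi-urgent", "routine", "observation only"]

class SegmentationNet(nn.Module):
    """Stand-in for the 3D segmentation network (stage one)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Conv3d(1, N_TISSUE_TYPES, kernel_size=3, padding=1)

    def forward(self, oct_volume):
        # Per-voxel tissue probabilities: (batch, classes, D, H, W)
        return torch.softmax(self.net(oct_volume), dim=1)

class ReferralNet(nn.Module):
    """Maps the device-independent tissue map to a referral (stage two)."""
    def __init__(self):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool3d(1)
        self.head = nn.Linear(N_TISSUE_TYPES, len(REFERRALS))

    def forward(self, tissue_map):
        features = self.pool(tissue_map).flatten(1)
        return torch.softmax(self.head(features), dim=1)

seg, ref = SegmentationNet(), ReferralNet()
scan = torch.randn(1, 1, 16, 64, 64)  # toy OCT volume
decision = ref(seg(scan))             # referral probabilities
confidence, idx = decision.max(dim=1)
print(f"{REFERRALS[idx.item()]} ({confidence.item():.0%} confidence)")
```

As I understand it, that intermediate tissue map is what makes the two “unique features” in the press release possible: clinicians can inspect it directly, and adapting to a new scanner model only requires retraining the segmentation stage.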

Here’s a link to and a citation for the study,

Clinically applicable deep learning for diagnosis and referral in retinal disease by Jeffrey De Fauw, Joseph R. Ledsam, Bernardino Romera-Paredes, Stanislav Nikolov, Nenad Tomasev, Sam Blackwell, Harry Askham, Xavier Glorot, Brendan O’Donoghue, Daniel Visentin, George van den Driessche, Balaji Lakshminarayanan, Clemens Meyer, Faith Mackinder, Simon Bouton, Kareem Ayoub, Reena Chopra, Dominic King, Alan Karthikesalingam, Cían O. Hughes, Rosalind Raine, Julian Hughes, Dawn A. Sim, Catherine Egan, Adnan Tufail, Hugh Montgomery, Demis Hassabis, Geraint Rees, Trevor Back, Peng T. Khaw, Mustafa Suleyman, Julien Cornebise, Pearse A. Keane, & Olaf Ronneberger. Nature Medicine (2018) DOI: https://doi.org/10.1038/s41591-018-0107-6 Published 13 August 2018

This paper is behind a paywall.

And now, Melissa Locker’s August 15, 2018 article for Fast Company (Note: Links have been removed),

In a paper published in Nature Medicine on Monday, Google’s DeepMind subsidiary, UCL, and researchers at Moorfields Eye Hospital showed off their new AI system. The researchers used deep learning to create algorithm-driven software that can identify common patterns in data culled from dozens of common eye diseases from 3D scans. The result is an AI that can identify more than 50 diseases with incredible accuracy and can then refer patients to a specialist. Even more important, though, is that the AI can explain why a diagnosis was made, indicating which part of the scan prompted the outcome. It’s an important step in both medicine and in making AIs slightly more human.

The editor or writer has even highlighted the sentence about the system’s accuracy—not just good but incredible!

I will be publishing something soon [my August 21, 2018 posting] which highlights some of the questions one might want to ask about AI and medicine before diving headfirst into this brave new world of medicine.

Liquid biopsy chip that uses carbon nanotubes in place of microfluidics

They’re calling this a breakthrough technology in a Dec. 15, 2016 news item on ScienceDaily,

A chip developed by mechanical engineers at Worcester Polytechnic Institute (WPI) [Massachusetts, US] can trap and identify metastatic cancer cells in a small amount of blood drawn from a cancer patient. The breakthrough technology uses a simple mechanical method that has been shown to be more effective in trapping cancer cells than the microfluidic approach employed in many existing devices.

The WPI device uses antibodies attached to an array of carbon nanotubes at the bottom of a tiny well. Cancer cells settle to the bottom of the well, where they selectively bind to the antibodies based on their surface markers (unlike other devices, the chip can also trap tiny structures called exosomes produced by cancer cells). This “liquid biopsy,” described in a recent issue of the journal Nanotechnology, could become the basis of a simple lab test that could quickly detect early signs of metastasis and help physicians select treatments targeted at the specific cancer cells identified.

A Dec. 15, 2016 WPI press release (also on EurekAlert), which originated the news item, explains the breakthrough in more detail (Note: Links have been removed),

Metastasis is the process by which a cancer can spread from one organ to other parts of the body, typically by entering the bloodstream. Different types of tumors show a preference for specific organs and tissues; circulating breast cancer cells, for example, are likely to take root in bones, lungs, and the brain. The prognosis for metastatic cancer (also called stage IV cancer) is generally poor, so a technique that could detect these circulating tumor cells before they have a chance to form new colonies of tumors at distant sites could greatly increase a patient’s survival odds.

“The focus on capturing circulating tumor cells is quite new,” said Balaji Panchapakesan, associate professor of mechanical engineering at WPI and director of the Small Systems Laboratory. “It is a very difficult challenge, not unlike looking for a needle in a haystack. There are billions of red blood cells, tens of thousands of white blood cells, and, perhaps, only a small number of tumor cells floating among them. We’ve shown how those cells can be captured with high precision.”

The device developed by Panchapakesan’s team includes an array of tiny elements, each about a tenth of an inch (3 millimeters) across. Each element has a well, at the bottom of which are antibodies attached to carbon nanotubes. Each well holds a specific antibody that will bind selectively to one type of cancer cell, based on genetic markers on its surface. By seeding elements with an assortment of antibodies, the device could be set up to capture several different cancer cell types using a single blood sample. In the lab, the researchers were able to fill a total of 170 wells using just under 0.03 fluid ounces (0.85 milliliter) of blood. Even with that small sample, they captured between one and a thousand cells per device, with a capture efficiency of between 62 and 100 percent.

In a paper published in the journal Nanotechnology [“Static micro-array isolation, dynamic time series classification, capture and enumeration of spiked breast cancer cells in blood: the nanotube–CTC chip”], Panchapakesan’s team, which includes postdoctoral researcher Farhad Khosravi, the paper’s lead author, and researchers at the University of Louisville and Thomas Jefferson University, describe a study in which antibodies specific for two markers of metastatic breast cancer, EpCam and Her2, were attached to the carbon nanotubes in the chip. When a blood sample that had been “spiked” with cells expressing those markers was placed on the chip, the device was shown to reliably capture only the marked cells.

In addition to capturing tumor cells, Panchapakesan says the chip will also latch on to tiny structures called exosomes, which are produced by cancers [sic] cells and carry the same markers. “These highly elusive 3-nanometer structures are too small to be captured with other types of liquid biopsy devices, such as microfluidics, due to shear forces that can potentially destroy them,” he noted. “Our chip is currently the only device that can potentially capture circulating tumor cells and exosomes directly on the chip, which should increase its ability to detect metastasis. This can be important because emerging evidence suggests that tiny proteins excreted with exosomes can drive reactions that may become major barriers to effective cancer drug delivery and treatment.”

Panchapakesan said the chip developed by his team has additional advantages over other liquid biopsy devices, most of which use microfluidics to capture cancer cells. In addition to being able to capture circulating tumor cells far more efficiently than microfluidic chips (in which cells must latch onto anchored antibodies as they pass by in a stream of moving liquid), the WPI device is also highly effective in separating cancer cells from the other cells and material in the blood through differential settling.

While the initial tests with the chip have focused on breast cancer, Panchapakesan says the device could be set up to detect a wide range of tumor types, and plans are already in the works for development of an advanced device as well as testing for other cancer types, including lung and pancreas cancer. He says he envisions a day when a device like his could be employed not only for regular follow ups for patients who have had cancer, but in routine cancer screening.

“Imagine going to the doctor for your yearly physical,” he said. “You have blood drawn and that one blood sample can be tested for a comprehensive array of cancer cell markers. Cancers would be caught at their earliest stage and other stages of development, and doctors would have the necessary protein or genetic information from these captured cells to customize your treatment based on the specific markers for your cancer. This would really be a way to put your health in your own hands.”

“White blood cells, in particular, are a problem, because they are quite numerous in blood and they can be mistaken for cancer cells,” he said. “Our device uses what is called a passive leukocyte depletion strategy. Because of density differences, the cancer cells tend to settle to the bottom of the wells (and this only happens in a narrow window), where they encounter the antibodies. The remainder of the blood contents stays at the top of the wells and can simply be washed away.”
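The numbers in the release invite a little arithmetic: 0.85 milliliter spread across 170 wells works out to roughly 5 microliters of blood per well, and capture efficiency is simply cells captured over cells spiked into the sample. A quick Python sanity check (the 81-of-100 example is mine, chosen to fall inside the reported 62 to 100 percent range),

```python
# Back-of-the-envelope numbers from the WPI press release.
TOTAL_BLOOD_ML = 0.85  # "just under 0.03 fluid ounces"
N_WELLS = 170

per_well_ul = TOTAL_BLOOD_ML / N_WELLS * 1000
print(f"Blood per well: {per_well_ul:.0f} µL")  # -> 5 µL

def capture_efficiency(captured: int, spiked: int) -> float:
    """Fraction of spiked tumor cells the chip actually traps."""
    return captured / spiked

# Hypothetical run within the reported 62-100 percent range:
print(f"Capture efficiency: {capture_efficiency(81, 100):.0%}")  # -> 81%
```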

Here’s a link to and a citation for the paper,

Static micro-array isolation, dynamic time series classification, capture and enumeration of spiked breast cancer cells in blood: the nanotube–CTC chip by Farhad Khosravi, Patrick J Trainor, Christopher Lambert, Goetz Kloecker, Eric Wickstrom, Shesh N Rai, and Balaji Panchapakesan. Nanotechnology, Volume 27, Number 44 DOI http://dx.doi.org/10.1088/0957-4484/27/44/44LT03 Published 29 September 2016

© 2016 IOP Publishing Ltd

This paper is open access.

Device detects molecules associated with neurodegenerative diseases

It’s nice to get notice of research in South America, a region from which I rarely stumble across news releases. Brazilian researchers have developed a device that could help diagnose neurodegenerative diseases such as Alzheimer’s and Parkinson’s, as well as some cancers, according to a May 20, 2016 news item on Nanotechnology Now,

A biosensor developed by researchers at the National Nanotechnology Laboratory (LNNano) in Campinas, São Paulo State, Brazil, has been proven capable of detecting molecules associated with neurodegenerative diseases and some types of cancer.

The device is basically a single-layer organic nanometer-scale transistor on a glass slide. It contains the reduced form of the peptide glutathione (GSH), which reacts in a specific way when it comes into contact with the enzyme glutathione S-transferase (GST), linked to Parkinson’s, Alzheimer’s and breast cancer, among other diseases. The GSH-GST reaction is detected by the transistor, which can be used for diagnostic purposes.

The project focuses on the development of point-of-care devices by researchers in a range of knowledge areas, using functional materials to produce simple sensors and microfluidic systems for rapid diagnosis.

A May 19, 2016 Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP) press release, which originated the news item, provides more detail,

“Platforms like this one can be deployed to diagnose complex diseases quickly, safely and relatively cheaply, using nanometer-scale systems to identify molecules of interest in the material analyzed,” explained Carlos Cesar Bof Bufon, Head of LNNano’s Functional Devices & Systems Lab (DSF) and a member of the research team for the project, whose principal investigator is Lauro Kubota, a professor at the University of Campinas’s Chemistry Institute (IQ-UNICAMP).

In addition to portability and low cost, the advantages of the nanometric biosensor include its sensitivity in detecting molecules, according to Bufon.

“This is the first time organic transistor technology has been used in detecting the pair GSH-GST, which is important in diagnosing degenerative diseases, for example,” he explained. “The device can detect such molecules even when they’re present at very low levels in the examined material, thanks to its nanometric sensitivity.” A nanometer (nm) is one billionth of a meter (10⁻⁹ meter), or one millionth of a millimeter.

The system can be adapted to detect other substances, such as molecules linked to different diseases and elements present in contaminated material, among other applications. This requires replacing the molecules in the sensor with others that react with the chemicals targeted by the test, which are known as analytes.

The team is working on paper-based biosensors to lower the cost even further and to improve portability and facilitate fabrication as well as disposal.

The challenge is that paper is an insulator in its usual form. Bufon has developed a technique to make paper conductive and capable of transporting sensing data by impregnating cellulose fibers with polymers that have conductive properties.

The technique is based on in situ synthesis of conductive polymers. For the polymers not to remain trapped on the surface of the paper, they have to be synthesized inside and between the pores of the cellulose fibers. This is done by gas-phase chemical polymerization: a liquid oxidant is infiltrated into the paper, which is then exposed to monomers in the gas phase. A monomer is a molecule of low molecular weight capable of reacting with identical or different molecules of low molecular weight to form a polymer.

The monomers evaporate under the paper and penetrate the pores of the fibers at the submicrometer scale. Inside the pores, they blend with the oxidant and begin the polymerization process right there, impregnating the entire material.

The polymerized paper acquires the conductive properties of the polymers. This conductivity can be adjusted by manipulating the element embedded in the cellulose fibers, depending on the application for which the paper is designed. Thus, the device can be electrically conductive, allowing current to flow without significant losses, or semiconductive, interacting with specific molecules and functioning as a physical, chemical or electrochemical sensor.
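The press release stops short of describing the readout, but the basic logic of transistor-based sensing is easy to sketch: the GSH-GST reaction shifts the transistor’s drain current, and the analyte is flagged when that shift clears a calibrated threshold. The Python snippet below is purely hypothetical; the current values, units, and threshold are invented for illustration,

```python
# Hypothetical readout logic for a transistor biosensor: flag the
# analyte (here, GST) when the drain-current shift between a baseline
# measurement and a sample measurement exceeds a calibrated threshold.
from statistics import mean

def detect_analyte(baseline_nA, sample_nA, threshold_nA=5.0):
    """Return True if the mean current shift exceeds the threshold."""
    shift = abs(mean(sample_nA) - mean(baseline_nA))
    return shift > threshold_nA

baseline = [102.1, 101.8, 102.4]  # drain current before the sample (nA)
with_gst = [110.9, 111.3, 110.6]  # drain current with GST present (nA)
print(detect_analyte(baseline, with_gst))  # -> True
```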

There’s no mention of a published paper.