Tag Archives: medical imaging

Robot radiologists (artificially intelligent doctors)

Mutaz Musa, a physician at New York Presbyterian Hospital/Weill Cornell (Department of Emergency Medicine) and software developer in New York City, has penned an eye-opening opinion piece about artificial intelligence (or robots if you prefer) and the field of radiology. From a June 25, 2018 opinion piece for The Scientist (Note: Links have been removed),

Although artificial intelligence has raised fears of job loss for many, we doctors have thus far enjoyed a smug sense of security. There are signs, however, that the first wave of AI-driven redundancies among doctors is fast approaching. And radiologists seem to be first on the chopping block.

Andrew Ng, founder of online learning platform Coursera and former CTO of “China’s Google,” Baidu, recently announced the development of CheXNet, a convolutional neural net capable of recognizing pneumonia and other thoracic pathologies on chest X-rays better than human radiologists. Earlier this year, a Hungarian group developed a similar system for detecting and classifying features of breast cancer in mammograms. In 2017, Adelaide University researchers published details of a bot capable of matching human radiologist performance in detecting hip fractures. And, of course, Google achieved superhuman proficiency in detecting diabetic retinopathy in fundus photographs, a task outside the scope of most radiologists.

Beyond single, two-dimensional radiographs, a team at Oxford University developed a system for detecting spinal disease from MRI data with a performance equivalent to a human radiologist. Meanwhile, researchers at the University of California, Los Angeles, reported detecting pathology on head CT scans with an error rate more than 20 times lower than a human radiologist.

Although these particular projects are still in the research phase and far from perfect—for instance, often pitting their machines against a limited number of radiologists—the pace of progress alone is telling.

Others have already taken their algorithms out of the lab and into the marketplace. Enlitic, founded by Aussie serial entrepreneur and University of San Francisco researcher Jeremy Howard, is a Bay-Area startup that offers automated X-ray and chest CAT scan interpretation services. Enlitic’s systems putatively can judge the malignancy of nodules up to 50 percent more accurately than a panel of radiologists and identify fractures so small they’d typically be missed by the human eye. One of Enlitic’s largest investors, Capitol Health, owns a network of diagnostic imaging centers throughout Australia, anticipating the broad rollout of this technology. Another Bay-Area startup, Arterys, offers cloud-based medical imaging diagnostics. Arterys’s services extend beyond plain films to cardiac MRIs and CAT scans of the chest and abdomen. And there are many others.
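The convolutional approach behind systems like CheXNet can be sketched in a few lines. The toy below uses random, untrained weights purely to illustrate the convolve-rectify-pool-score pipeline that such classifiers share; CheXNet itself is a 121-layer DenseNet trained on more than 100,000 labelled chest radiographs, so treat this as a structural sketch, not a working diagnostic.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution: slide the kernel over the image."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def pneumonia_score(xray, kernel, weight, bias):
    """Tiny one-layer 'network': convolve, rectify, pool, squash to (0, 1)."""
    feat = np.maximum(conv2d(xray, kernel), 0.0)  # ReLU feature map
    pooled = feat.mean()                          # global average pooling
    return sigmoid(weight * pooled + bias)        # probability-like score

rng = np.random.default_rng(0)
xray = rng.random((32, 32))           # stand-in for a chest radiograph
kernel = rng.standard_normal((3, 3))  # untrained 3x3 filter
score = pneumonia_score(xray, kernel, weight=1.0, bias=0.0)
print(round(float(score), 3))  # some value strictly between 0 and 1
```

Real systems stack dozens of such convolutional layers and learn the filter weights from labelled examples; the architecture above only shows where the "recognition" happens.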

Musa has offered a compelling argument with lots of links to supporting evidence.

[downloaded from https://www.the-scientist.com/news-opinion/opinion–rise-of-the-robot-radiologists-64356]

And the evidence keeps mounting; I just stumbled across this June 30, 2018 news item on Xinhuanet.com,

An artificial intelligence (AI) system scored 2:0 against elite human physicians Saturday in two rounds of competitions in diagnosing brain tumors and predicting hematoma expansion in Beijing.

The BioMind AI system, developed by the Artificial Intelligence Research Centre for Neurological Disorders at the Beijing Tiantan Hospital and a research team from the Capital Medical University, made correct diagnoses in 87 percent of 225 cases in about 15 minutes, while a team of 15 senior doctors only achieved 66-percent accuracy.

The AI also gave correct predictions in 83 percent of brain hematoma expansion cases, outperforming the 63-percent accuracy among a group of physicians from renowned hospitals across the country.

The outcomes for human physicians were quite normal and even better than the average accuracy in ordinary hospitals, said Gao Peiyi, head of the radiology department at Tiantan Hospital, a leading institution on neurology and neurosurgery.

To train the AI, developers fed it tens of thousands of images of nervous system-related diseases that the Tiantan Hospital has archived over the past 10 years, making it capable of diagnosing common neurological diseases such as meningioma and glioma with an accuracy rate of over 90 percent, comparable to that of a senior doctor.

All the cases were real and contributed by the hospital, but never used as training material for the AI, according to the organizer.

Wang Yongjun, executive vice president of the Tiantan Hospital, said that he personally did not care very much about who won, because the contest was never intended to pit humans against technology but to help doctors learn and improve [emphasis mine] through interactions with technology.

“I hope through this competition, doctors can experience the power of artificial intelligence. This is especially so for some doctors who are skeptical about artificial intelligence. I hope they can further understand AI and eliminate their fears toward it,” said Wang.

Dr. Lin Yi who participated and lost in the second round, said that she welcomes AI, as it is not a threat but a “friend.” [emphasis mine]

AI will not only reduce the workload but also push doctors to keep learning and improve their skills, said Lin.

Bian Xiuwu, an academician with the Chinese Academy of Science and a member of the competition’s jury, said there has never been an absolute standard correct answer in diagnosing developing diseases, and the AI would only serve as an assistant to doctors in giving preliminary results. [emphasis mine]

Dr. Paul Parizel, former president of the European Society of Radiology and another member of the jury, also agreed that AI will not replace doctors, but will instead function similar to how GPS does for drivers. [emphasis mine]

Dr. Gauden Galea, representative of the World Health Organization in China, said AI is an exciting tool for healthcare but still in the primitive stages.

Based on the size of its population and the huge volume of accessible digital medical data, China has a unique advantage in developing medical AI, according to Galea.

China has introduced a series of plans in developing AI applications in recent years.

In 2017, the State Council issued a development plan on the new generation of Artificial Intelligence and the Ministry of Industry and Information Technology also issued the “Three-Year Action Plan for Promoting the Development of a New Generation of Artificial Intelligence (2018-2020).”

The Action Plan proposed developing medical image-assisted diagnostic systems to support medicine in various fields.

I note the reference to cars and global positioning systems (GPS) and their role as ‘helpers’; it seems no one at the ‘AI and radiology’ competition has heard of driverless cars. Here’s Musa on those reassuring comments about how the technology won’t replace experts but rather augment their skills,

To be sure, these services frame themselves as “support products” that “make doctors faster,” rather than replacements that make doctors redundant. This language may reflect a reserved view of the technology, though it likely also represents a marketing strategy keen to avoid threatening or antagonizing incumbents. After all, many of the customers themselves, for now, are radiologists.

Radiology isn’t the only area where experts might find themselves displaced.

Eye experts

It seems inroads have been made by artificial intelligence systems (AI) into the diagnosis of eye diseases. It got the ‘Fast Company’ treatment (exciting new tech, learn all about it) as can be seen further down in this posting. First, here’s a more restrained announcement, from an August 14, 2018 news item on phys.org (Note: A link has been removed),

An artificial intelligence (AI) system that can recommend the correct referral decision for more than 50 eye diseases as accurately as experts has been developed by Moorfields Eye Hospital NHS Foundation Trust, DeepMind Health and UCL [University College London].

The breakthrough research, published online by Nature Medicine, describes how machine-learning technology has been successfully trained on thousands of historic de-personalised eye scans to identify features of eye disease and recommend how patients should be referred for care.

Researchers hope the technology could one day transform the way professionals carry out eye tests, allowing them to spot conditions earlier and prioritise patients with the most serious eye diseases before irreversible damage sets in.

An August 13, 2018 UCL press release, which originated the news item, describes the research and the reasons behind it in more detail,

More than 285 million people worldwide live with some form of sight loss, including more than two million people in the UK. Eye diseases remain one of the biggest causes of sight loss, and many can be prevented with early detection and treatment.

Dr Pearse Keane, NIHR Clinician Scientist at the UCL Institute of Ophthalmology and consultant ophthalmologist at Moorfields Eye Hospital NHS Foundation Trust said: “The number of eye scans we’re performing is growing at a pace much faster than human experts are able to interpret them. There is a risk that this may cause delays in the diagnosis and treatment of sight-threatening diseases, which can be devastating for patients.”

“The AI technology we’re developing is designed to prioritise patients who need to be seen and treated urgently by a doctor or eye care professional. If we can diagnose and treat eye conditions early, it gives us the best chance of saving people’s sight. With further research it could lead to greater consistency and quality of care for patients with eye problems in the future.”

The study, launched in 2016, brought together leading NHS eye health professionals and scientists from UCL and the National Institute for Health Research (NIHR) with some of the UK’s top technologists at DeepMind to investigate whether AI technology could help improve the care of patients with sight-threatening diseases, such as age-related macular degeneration and diabetic eye disease.

Using two types of neural network – mathematical systems for identifying patterns in images or data – the AI system quickly learnt to identify 10 features of eye disease from highly complex optical coherence tomography (OCT) scans. The system was then able to recommend a referral decision based on the most urgent conditions detected.

To establish whether the AI system was making correct referrals, clinicians also viewed the same OCT scans and made their own referral decisions. The study concluded that AI was able to make the right referral recommendation more than 94% of the time, matching the performance of expert clinicians.

The AI has been developed with two unique features which maximise its potential use in eye care. Firstly, the system can provide information that helps explain to eye care professionals how it arrives at its recommendations. This information includes visuals of the features of eye disease it has identified on the OCT scan and the level of confidence the system has in its recommendations, in the form of a percentage. This functionality is crucial in helping clinicians scrutinise the technology’s recommendations and check its accuracy before deciding the type of care and treatment a patient receives.

Secondly, the AI system can be easily applied to different types of eye scanner, not just the specific model on which it was trained. This could significantly increase the number of people who benefit from this technology and future-proof it, so it can still be used even as OCT scanners are upgraded or replaced over time.
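The referral logic the press release describes (detect features of disease, then recommend based on the most urgent condition found, reporting a confidence percentage) can be sketched as follows. The feature names, urgency ranking, and detection threshold here are hypothetical stand-ins, not the actual categories or mechanics of the Nature Medicine system.

```python
# Hypothetical urgency ranking: higher number = more urgent referral.
URGENCY = {"urgent": 3, "semi-urgent": 2, "routine": 1, "observation": 0}

# Hypothetical mapping from detected feature to referral category.
REFERRAL_FOR = {
    "choroidal neovascularisation": "urgent",
    "macular oedema": "semi-urgent",
    "drusen": "routine",
    "normal": "observation",
}

def recommend_referral(feature_probs, threshold=0.5):
    """Pick the most urgent referral among features detected above the
    threshold, and report the model's confidence as a percentage."""
    detected = {f: p for f, p in feature_probs.items() if p >= threshold}
    if not detected:
        # Nothing detected above threshold: default to observation.
        return "observation", 100.0
    # Prefer the most urgent category; break ties by higher probability.
    feature = max(detected,
                  key=lambda f: (URGENCY[REFERRAL_FOR[f]], detected[f]))
    return REFERRAL_FOR[feature], round(detected[feature] * 100, 1)

decision, confidence = recommend_referral(
    {"choroidal neovascularisation": 0.91, "drusen": 0.72, "normal": 0.05}
)
print(decision, confidence)  # urgent 91.0
```

Exposing the per-feature probabilities alongside the final recommendation is what lets clinicians scrutinise the system's reasoning, as described above.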

The next step is for the research to go through clinical trials to explore how this technology might improve patient care in practice, and regulatory approval before it can be used in hospitals and other clinical settings.

If clinical trials are successful in demonstrating that the technology can be used safely and effectively, Moorfields will be able to use an eventual, regulatory-approved product for free, across all 30 of their UK hospitals and community clinics, for an initial period of five years.

The work that has gone into this project will also help accelerate wider NHS research for many years to come. For example, DeepMind has invested significant resources to clean, curate and label Moorfields’ de-identified research dataset to create one of the most advanced eye research databases in the world.

Moorfields owns this database as a non-commercial public asset, which is already forming the basis of nine separate medical research studies. In addition, Moorfields can also use DeepMind’s trained AI model for future non-commercial research efforts, which could help advance medical research even further.

Mustafa Suleyman, Co-founder and Head of Applied AI at DeepMind Health, said: “We set up DeepMind Health because we believe artificial intelligence can help solve some of society’s biggest health challenges, like avoidable sight loss, which affects millions of people across the globe. These incredibly exciting results take us one step closer to that goal and could, in time, transform the diagnosis, treatment and management of patients with sight threatening eye conditions, not just at Moorfields, but around the world.”

Professor Sir Peng Tee Khaw, director of the NIHR Biomedical Research Centre at Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology said: “The results of this pioneering research with DeepMind are very exciting and demonstrate the potential sight-saving impact AI could have for patients. I am in no doubt that AI has a vital role to play in the future of healthcare, particularly when it comes to training and helping medical professionals so that patients benefit from vital treatment earlier than might previously have been possible. This shows the transformative research that can be carried out in the UK combining world-leading industry and NIHR/NHS hospital/university partnerships.”

Matt Hancock, Health and Social Care Secretary, said: “This is hugely exciting and exactly the type of technology which will benefit the NHS in the long term and improve patient care – that’s why we fund over a billion pounds a year in health research as part of our long term plan for the NHS.”

Here’s a link to and a citation for the study,

Clinically applicable deep learning for diagnosis and referral in retinal disease by Jeffrey De Fauw, Joseph R. Ledsam, Bernardino Romera-Paredes, Stanislav Nikolov, Nenad Tomasev, Sam Blackwell, Harry Askham, Xavier Glorot, Brendan O’Donoghue, Daniel Visentin, George van den Driessche, Balaji Lakshminarayanan, Clemens Meyer, Faith Mackinder, Simon Bouton, Kareem Ayoub, Reena Chopra, Dominic King, Alan Karthikesalingam, Cían O. Hughes, Rosalind Raine, Julian Hughes, Dawn A. Sim, Catherine Egan, Adnan Tufail, Hugh Montgomery, Demis Hassabis, Geraint Rees, Trevor Back, Peng T. Khaw, Mustafa Suleyman, Julien Cornebise, Pearse A. Keane, & Olaf Ronneberger. Nature Medicine (2018) DOI: https://doi.org/10.1038/s41591-018-0107-6 Published 13 August 2018

This paper is behind a paywall.

And now, Melissa Locker’s August 15, 2018 article for Fast Company (Note: Links have been removed),

In a paper published in Nature Medicine on Monday, Google’s DeepMind subsidiary, UCL, and researchers at Moorfields Eye Hospital showed off their new AI system. The researchers used deep learning to create algorithm-driven software that can identify common patterns in data culled from dozens of common eye diseases from 3D scans. The result is an AI that can identify more than 50 diseases with incredible accuracy and can then refer patients to a specialist. Even more important, though, is that the AI can explain why a diagnosis was made, indicating which part of the scan prompted the outcome. It’s an important step in both medicine and in making AIs slightly more human.

The editor or writer has even highlighted the sentence about the system’s accuracy—not just good but incredible!

I will be publishing something soon [my August 21, 2018 posting] which highlights some of the questions one might want to ask about AI and medicine before diving headfirst into this brave new world of medicine.

‘Smart’ windows from Australia

My obsession with smart windows has been lying dormant until now. This February 25, 2018 RMIT University (Australia) press release on EurekAlert has reawakened it,

Researchers from RMIT University in Melbourne Australia have developed a new ultra-thin coating that responds to heat and cold, opening the door to “smart windows”.

The self-modifying coating, which is a thousand times thinner than a human hair, works by automatically letting in more heat when it’s cold and blocking the sun’s rays when it’s hot.

Smart windows have the ability to naturally regulate temperatures inside a building, leading to major environmental benefits and significant financial savings.

Lead investigator Associate Professor Madhu Bhaskaran said the breakthrough will help meet future energy needs and create temperature-responsive buildings.

“We are making it possible to manufacture smart windows that block heat during summer and retain heat inside when the weather cools,” Bhaskaran said.

“We lose most of our energy in buildings through windows. This makes maintaining buildings at a certain temperature a very wasteful and unavoidable process.

“Our technology will potentially cut the rising costs of air-conditioning and heating, as well as dramatically reduce the carbon footprint of buildings of all sizes.

“Solutions to our energy crisis do not come only from using renewables; smarter technology that eliminates energy waste is absolutely vital.”

Smart glass windows are about 70 per cent more energy efficient during summer and 45 per cent more efficient in the winter compared to standard dual-pane glass.

New York’s Empire State Building reported energy savings of US$2.4 million and cut carbon emissions by 4,000 metric tonnes after installing smart glass windows. This was using a less effective form of technology.

“The Empire State Building used glass that still required some energy to operate,” Bhaskaran said. “Our coating doesn’t require energy and responds directly to changes in temperature.”

Co-researcher and PhD student Mohammad Taha said that while the coating reacts to temperature it can also be overridden with a simple switch.

“This switch is similar to a dimmer and can be used to control the level of transparency on the window and therefore the intensity of lighting in a room,” Taha said. “This means users have total freedom to operate the smart windows on-demand.”

Windows aren’t the only clear winners when it comes to the new coating. The technology can also be used to control non-harmful radiation that can penetrate plastics and fabrics. This could be applied to medical imaging and security scans.

Bhaskaran said that the team was looking to roll the technology out as soon as possible.

“The materials and technology are readily scalable to large area surfaces, with the underlying technology filed as a patent in Australia and the US,” she said.

The research has been carried out at RMIT University’s state-of-the-art Micro Nano Research Facility with colleagues at the University of Adelaide and supported by the Australian Research Council.

How the coating works

The self-regulating coating is created using a material called vanadium dioxide. The coating is 50-150 nanometres in thickness.

At 67 degrees Celsius, vanadium dioxide transforms from being an insulator into a metal, allowing the coating to turn into a versatile optoelectronic material controlled by and sensitive to light.

The coating stays transparent and clear to the human eye but goes opaque to infra-red solar radiation, which humans cannot see and is what causes sun-induced heating.

Until now, it has been impossible to use vanadium dioxide on surfaces of various sizes because the placement of the coating requires the creation of specialised layers, or platforms.

The RMIT researchers have developed a way to create and deposit the ultra-thin coating without the need for these special platforms – meaning it can be directly applied to surfaces like glass windows.
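The switching behaviour described above is simple to model: below the 67 °C transition the film admits infra-red, above it the metallic phase blocks it, and the manual 'dimmer' override described by Taha takes precedence. The transmittance values in this sketch are illustrative placeholders, not measurements from the paper.

```python
T_TRANSITION_C = 67.0  # vanadium dioxide insulator-to-metal transition

def ir_transmittance(temp_c, override=None):
    """Fraction of infra-red solar radiation the window lets through.
    Below the transition VO2 is in its insulating phase (IR passes,
    warming the room); above it the film turns metallic and blocks IR.
    `override` (0.0-1.0) models the manual 'dimmer' switch.
    The 0.85 / 0.15 values are illustrative, not measured."""
    if override is not None:
        return max(0.0, min(1.0, override))
    return 0.85 if temp_c < T_TRANSITION_C else 0.15

print(ir_transmittance(20.0))                # cold day: IR admitted
print(ir_transmittance(70.0))                # hot surface: IR blocked
print(ir_transmittance(70.0, override=0.5))  # user-set level
```

The key design point is that the default path needs no power at all; energy is only involved when the user overrides the coating's passive response.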

Here’s a link to and a citation for the paper,

Insulator–metal transition in substrate-independent VO2 thin film for phase-change device by Mohammad Taha, Sumeet Walia, Taimur Ahmed, Daniel Headland, Withawat Withayachumnankul, Sharath Sriram, & Madhu Bhaskaran. Scientific Reports, volume 7, Article number: 17899 (2017) doi:10.1038/s41598-017-17937-3 Published online: 20 December 2017

This paper is open access.

For anyone interested in more ‘smart’ windows, you can try that search term or ‘electrochromic’, ‘photochromic’, and ‘thermochromic’, as well.

Drink your spinach juice—illuminate your guts

Contrast agents used for magnetic resonance imaging, x-ray imaging, ultrasounds, and other imaging technologies are not always kind to the humans ingesting them. So, scientists at the University at Buffalo (also known as the State University of New York at Buffalo) have developed a veggie juice that does the job according to a July 11, 2016 news item on Nanowerk (Note: A link has been removed),

The pigment that gives spinach and other plants their verdant color may improve doctors’ ability to examine the human gastrointestinal tract.

That’s according to a study, published in the journal Advanced Materials (“Surfactant-Stripped Frozen Pheophytin Micelles for Multimodal Gut Imaging”), which describes how chlorophyll-based nanoparticles suspended in liquid are an effective imaging agent for the gut.

The University of Buffalo has provided an illustration of the work,

A new UB-led study suggests that chlorophyll-based nanoparticles are an effective imaging agent for the gut. The medical imaging drink, developed to diagnose and treat gastrointestinal illnesses, is made of concentrated chlorophyll, the pigment that makes spinach green. Photo illustration credit: University at Buffalo.


A July 11, 2016 University at Buffalo (UB) news release (also on EurekAlert) by Cory Nealon, which originated the news item, expands on the theme,

“Our work suggests that this spinach-like, nanoparticle juice can help doctors get a better look at what’s happening inside the stomach, intestines and other areas of the GI tract,” says Jonathan Lovell, PhD, assistant professor in the Department of Biomedical Engineering, a joint program between UB’s School of Engineering and Applied Sciences and the Jacobs School of Medicine and Biomedical Sciences at UB, and the study’s corresponding author.

To examine the gastrointestinal tract, doctors typically use X-rays, magnetic resonance imaging or ultrasounds, but these techniques are limited with respect to safety, accessibility and lack of adequate contrast, respectively.

Doctors also perform endoscopies, in which a tiny camera attached to a thin tube is inserted into the patient’s body. While effective, this procedure is challenging to perform in the small intestine, and it can cause infections, tears and pose other risks.

The new study, which builds upon Lovell’s previous medical imaging research, is a collaboration between researchers at UB and the University of Wisconsin-Madison. It focuses on Chlorophyll a, a pigment found in spinach and other green vegetables that is essential to photosynthesis.

In the laboratory, researchers removed magnesium from Chlorophyll a, a process which alters the pigment’s chemical structure to form another edible compound called pheophytin. Pheophytin plays an important role in photosynthesis, acting as a gatekeeper that allows electrons from sunlight to enter plants.

Next, they dissolved pheophytin in a solution of soapy substances known as surfactants. The researchers were then able to remove nearly all of the surfactants, leaving nearly pure pheophytin nanoparticles.

The drink, when tested in mice, provided imaging of the gut in three modes: photoacoustic imaging, fluorescence imaging and positron emission tomography (PET). (For PET, the researchers added Copper-64 to the drink, an isotope of the metal that, in small amounts, is harmless to the human body.)

Additional studies are needed, but the drink has commercial potential because it:

· Works in different imaging techniques.
· Moves stably through the gut.
· And is naturally consumed in the human diet already.

In lab tests, mice excreted 100 percent of the drink in photoacoustic and fluorescence imaging, and nearly 93 percent after the PET test.

“The veggie juice allows for techniques that are not commonly used today by doctors for imaging the gut like photoacoustic, PET, and fluorescence,” Lovell says. “And part of the appeal is the safety of the juice.”

Here’s a link to and a citation for the paper,

Surfactant-Stripped Frozen Pheophytin Micelles for Multimodal Gut Imaging by Yumiao Zhang, Depeng Wang, Shreya Goel, Boyang Sun, Upendra Chitgupi, Jumin Geng, Haiyan Sun, Todd E. Barnhart, Weibo Cai, Jun Xia, and Jonathan F. Lovell. Advanced Materials DOI: 10.1002/adma.201602373 Version of Record online: 11 JUL 2016

© 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim

This paper is behind a paywall.

Gold nanorods and mucus

Mucus can kill. Most of us are lucky enough to produce mucus appropriate for our bodies’ needs but people who have cystic fibrosis and other kinds of lung disease suffer greatly from mucus that is too thick to pass easily through the body. An Oct. 9, 2014 Optical Society of America (OSA) news release (also on EurekAlert) ‘shines’ a light on the topic of mucus and viscosity,

Some people might consider mucus an icky bodily secretion best left wrapped in a tissue, but to a group of researchers from the University of North Carolina at Chapel Hill, snot is an endlessly fascinating subject. The team has developed a way to use gold nanoparticles and light to measure the stickiness of the slimy substance that lines our airways.  The new method could help doctors better monitor and treat lung diseases such as cystic fibrosis and chronic obstructive pulmonary disease.

“People who are suffering from certain lung diseases have thickened mucus,” explained Amy Oldenburg, a physicist at the University of North Carolina at Chapel Hill whose research focuses on biomedical imaging systems. “In healthy adults, hair-like cell appendages called cilia line the airways and pull mucus out of the lungs and into the throat. But if the mucus is too viscous it can become trapped in the lungs, making breathing more difficult and also failing to remove pathogens that can cause chronic infections.”

Doctors can prescribe mucus-thinning drugs, but have no good way to monitor how the drugs affect the viscosity of mucus at various spots inside the body. This is where Oldenburg and her colleagues’ work may help.

The researchers placed coated gold nanorods on the surface of mucus samples and then tracked the rods’ diffusion into the mucus by illuminating the samples with laser light and analyzing the way the light bounced off the nanoparticles. The slower the nanorods diffused, the thicker the mucus. The team found this imaging method worked even when the mucus was sliding over a layer of cells—an important finding since mucus inside the human body is usually in motion.
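The physics linking the tracked diffusion to "stickiness" is the Stokes-Einstein relation: the slower a particle of known size diffuses, the higher the fluid's viscosity. The sketch below treats the nanorod as an equivalent sphere, which is a simplification (true rods need orientation-dependent drag corrections), and the diffusion coefficients are illustrative order-of-magnitude values, not data from this study.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def viscosity_from_diffusion(D, radius_m, temp_k=310.0):
    """Stokes-Einstein estimate: eta = kT / (6 * pi * r * D).
    Treats the nanorod as an equivalent sphere of the given radius,
    at body temperature by default (310 K)."""
    return K_B * temp_k / (6.0 * math.pi * radius_m * D)

# A ~25 nm-radius particle in water at body temperature diffuses at
# roughly 1.2e-11 m^2/s; in thick mucus, diffusion slows dramatically.
eta_thin = viscosity_from_diffusion(D=1.2e-11, radius_m=25e-9)
eta_thick = viscosity_from_diffusion(D=1.2e-13, radius_m=25e-9)
print(f"{eta_thin:.2e} Pa.s vs {eta_thick:.2e} Pa.s")
```

A 100-fold drop in the measured diffusion rate maps directly to a 100-fold rise in estimated viscosity, which is why tracking how the nanorods slow down serves as a viscosity readout.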

“The ability to monitor how well mucus-thinning treatments are working in real-time may allow us to determine better treatments and tailor them for the individual,” said Oldenburg.

It will likely take five to 10 more years before the team’s mucus measuring method is tested on human patients, Oldenburg said. Gold is non-toxic, but for safety reasons the researchers would want to ensure that the gold nanorods would eventually be cleared from a patient’s system.

“This is a great example of interdisciplinary work in which optical scientists can meet a specific need in the clinic,” said Nozomi Nishimura, of Cornell University … . “As these types of optical technologies continue to make their way into medical practice, it will both expand the market for the technology as well as improve patient care.”

The team is also working on several lines of ongoing study that will some day help bring their monitoring device to the clinic. They are developing delivery methods for the gold nanorods, studying how their imaging system might be adapted to enter a patient’s airways, and further investigating how mucus flow properties differ throughout the body.

This work is being presented at The Optical Society’s (OSA) 98th Annual Meeting, Frontiers in Optics, being held Oct. 19-23 [2014] in Tucson, Arizona, USA.

Presentation FTu5F.2, “Imaging Gold Nanorod Diffusion in Mucus Using Polarization Sensitive OCT,” takes place Tuesday, Oct. 21 at 4:15 p.m. MST [Mountain Standard Time] in the Tucson Ballroom, Salon A at the JW Marriott Tucson Starr Pass Resort.

People with cystic fibrosis tend to have short lives (from the US National Library of Medicine MedLine Plus webpage on cystic fibrosis),

Most children with cystic fibrosis stay in good health until they reach adulthood. They are able to take part in most activities and attend school. Many young adults with cystic fibrosis finish college or find jobs.

Lung disease eventually worsens to the point where the person is disabled. Today, the average life span for people with CF who live to adulthood is about 37 years.

Death is most often caused by lung complications.

I hope this work proves helpful.

Nanotechnology for better treatment of eye conditions and a perspective on superhuman sight

There are three ‘eye’-related items in this piece, two of them concerning animal eyes and one concerning a camera-eye or the possibility of superhuman sight.

Earlier this week researchers at the University of Reading (UK) announced they have achieved a better understanding of how nanoparticles might be able to bypass some of the eye’s natural barriers in the hopes of making eye drops more effective in an Oct. 7, 2014 news item on Nanowerk,

Sufferers of eye disorders have new hope after researchers at the University of Reading discovered a potential way of making eye drops more effective.

Typically less than 5% of the medicine dose applied as drops actually penetrates the eye – the majority of the dose will be washed off the cornea by tear fluid and lost.

The team, led by Professor Vitaliy Khutoryanskiy, has developed novel nanoparticles that could attach to the cornea and resist the wash out effect for an extended period of time. If these nanoparticles are loaded with a drug, their longer attachment to the cornea will ensure more medicine penetrates the eye and improves drop treatment.

An Oct. 6, 2014 University of Reading press release, which originated the news item, provides more information about the hoped for impact of this work while providing few details about the research (Note: A link has been removed),

The research could also pave the way for new treatments of currently incurable eye-disorders such as Age-related Macular Degeneration (AMD) – the leading cause of visual impairment with around 500,000 sufferers in the UK.

There is currently no cure for this condition but experts believe the progression of AMD could be slowed considerably using injections of medicines into the eye. However, eye-drops with drug-loaded nanoparticles could be a potentially more effective and desirable course of treatment.

Professor Vitaliy Khutoryanskiy, from the University of Reading’s School of Pharmacy, said: “Treating eye disorders is a challenging task. Our corneas allow us to see and serve as a barrier that protects our eyes from microbial and chemical intervention. Unfortunately this barrier hinders the effectiveness of eye drops. Many medicines administered to the eye are inefficient as they often cannot penetrate the cornea barrier. Only the very small molecules in eye drops can penetrate healthy cornea.

“Many recent breakthroughs to treat eye conditions involve the use of drugs incorporated into nano-containers; their role being to promote drug penetration into the eye. However the factors affecting this penetration remain poorly understood. Our research also showed that penetration of small drug molecules could be improved by adding enhancers such as cyclodextrins. This means eye drops have the potential to be a more effective, and a more comfortable, future treatment for disorders such as AMD.”

The finding is one of a number of important discoveries highlighted in a paper published today in the journal Molecular Pharmaceutics. The researchers revealed fascinating insights into how the structure of the cornea prevents various small and large molecules, as well as nanoparticles, from entering into the eye. They also examined how damage to the eye could allow these materials to enter the body.

Professor Khutoryanskiy continued: “There is increasing concern about the safety of environmental contaminants, pollutants and nanoparticles and their potential impacts on human health. We tested nanoparticles whose sizes ranged between 21–69 nm, similar to the size of viruses such as polio, or similar to airborne particles originating from the building industry, and found that they could not penetrate healthy and intact cornea irrespective of their chemical nature.

“However if the top layer of the cornea is damaged, either after surgical operation or accidentally, then the eye’s natural defence may be compromised and it becomes susceptible to viral attack which could result in eye infections.

“The results show that our eyes are well-equipped to defend us against potential airborne threats that exist in a fast-developing industrialised world. However we need to be aware of the potential complications that may arise if the cornea is damaged, and not treated quickly and effectively.”

Here’s a link to and a citation for the paper,

On the Barrier Properties of the Cornea: A Microscopy Study of the Penetration of Fluorescently Labeled Nanoparticles, Polymers, and Sodium Fluorescein by Ellina A. Mun, Peter W. J. Morrison, Adrian C. Williams, and Vitaliy V. Khutoryanskiy. Mol. Pharmaceutics, 2014, 11 (10), pp 3556–3564. DOI: 10.1021/mp500332m. Publication Date (Web): August 28, 2014.

Copyright © 2014 American Chemical Society

There’s a little more information to be had in the paper’s abstract, which, as these things go, is relatively accessible,

Overcoming the natural defensive barrier functions of the eye remains one of the greatest challenges of ocular drug delivery. Cornea is a chemical and mechanical barrier preventing the passage of any foreign bodies including drugs into the eye, but the factors limiting penetration of permeants and nanoparticulate drug delivery systems through the cornea are still not fully understood. In this study, we investigate these barrier properties of the cornea using thiolated and PEGylated (750 and 5000 Da) nanoparticles, sodium fluorescein, and two linear polymers (dextran and polyethylene glycol). Experiments used intact bovine cornea in addition to bovine cornea de-epithelialized or tissues pretreated with cyclodextrin. It was shown that corneal epithelium is the major barrier for permeation; pretreatment of the cornea with β-cyclodextrin provides higher permeation of low molecular weight compounds, such as sodium fluorescein, but does not enhance penetration of nanoparticles and larger molecules. Studying penetration of thiolated and PEGylated (750 and 5000 Da) nanoparticles into the de-epithelialized ocular tissue revealed that interactions between corneal surface and thiol groups of nanoparticles were more significant determinants of penetration than particle size (for the sizes used here). PEGylation with polyethylene glycol of a higher molecular weight (5000 Da) allows penetration of nanoparticles into the stroma, which proceeds gradually, after an initial 1 h lag phase.

The paper is behind a paywall. Neither the abstract nor the press release mentions how the bovine (ox, cow, or buffalo) eyes were obtained, but I gather these body parts are often harvested from animals that have already been slaughtered for food.

This next item also concerns research about eye drops but this time the work comes from the University of Waterloo (Ontario, Canada). From an Oct. 8, 2014 news item on Azonano,

For the millions of sufferers of dry eye syndrome, their only recourse to easing the painful condition is to use drug-laced eye drops three times a day. Now, researchers from the University of Waterloo have developed a topical solution containing nanoparticles that will combat dry eye syndrome with only one application a week.

An Oct. 8, 2014 University of Waterloo news release (also on EurekAlert), which originated the news item, describes the results of the work without providing much detail about the nanoparticles used to deliver the treatment via eye drops,

The eye drops progressively deliver the right amount of drug-infused nanoparticles to the surface of the eyeball over a period of five days before the body absorbs them.  One weekly dose replaces 15 or more to treat the pain and irritation of dry eyes.

The nanoparticles, about 1/1000th the width of a human hair, stick harmlessly to the eye’s surface and use only five per cent of the drug normally required.

“You can’t tell the difference between these nanoparticle eye drops and water,” said Shengyan (Sandy) Liu, a PhD candidate at Waterloo’s Faculty of Engineering, who led the team of researchers from the Department of Chemical Engineering and the Centre for Contact Lens Research. “There’s no irritation to the eye.”

Dry eye syndrome is more common among people over the age of 50 and may eventually lead to eye damage. More than six per cent of people in the U.S. have it. Currently, patients must apply the medicine three times a day because of the eye’s ability to self-cleanse—a process that washes away 95 per cent of the drug.
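As a back-of-envelope check, the dosing figures quoted in the news release do hang together: three applications a day over the five-day delivery window works out to the fifteen doses that one weekly application is said to replace. A quick sketch (all numbers come from the release; the “dose-unit” is an arbitrary unit of my own for illustration):

```python
# Back-of-envelope check of the dosing figures quoted in the news release.
conventional_doses_per_day = 3   # drops currently applied three times daily
delivery_period_days = 5         # nanoparticle drops release drug over five days

# Doses a single nanoparticle application covers during its delivery window
doses_replaced = conventional_doses_per_day * delivery_period_days
print(doses_replaced)  # 15, matching "one weekly dose replaces 15 or more"

# The release also says the formulation uses only 5% of the drug normally required
drug_fraction = 0.05
conventional_weekly_drug = conventional_doses_per_day * 7   # 21 dose-units/week
nanoparticle_weekly_drug = conventional_weekly_drug * drug_fraction
print(round(nanoparticle_weekly_drug, 2))  # ~1 dose-unit per week
```

Nothing deep here, just a sanity check that the “15 or more” claim follows directly from the stated schedule.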

“I knew that if we focused on infusing biocompatible nanoparticles with Cyclosporine A, the drug in the eye drops, and making them stick to the eyeball without irritation for longer periods of time, it would also save patients time and reduce the possibility of toxic exposure due to excessive use of eye drops,” said Liu.

The research team is now focusing on preparing the nanoparticle eye drops for clinical trials with the hope that this nanoparticle therapy could reach the shelves of drugstores within five years.

Here’s a link to and a citation for the paper,

Phenylboronic acid modified mucoadhesive nanoparticle drug carriers facilitate weekly treatment of experimentally induced dry eye syndrome by Shengyan Liu, Chu Ning Chang, Mohit S. Verma, Denise Hileeto, Alex Muntz, Ulrike Stahl, Jill Woods, Lyndon W. Jones, and Frank X. Gu. Nano Research (October 2014) DOI: 10.1007/s12274-014-0547-3

This paper is behind a paywall. There is a partial preview available for free. As per the paper’s abstract, research was performed on healthy rabbit eyes.

The last ‘sight’ item I’m featuring here comes from the Massachusetts Institute of Technology (MIT) and does not appear to have been occasioned by the publication of a research paper or some other event. From an Oct. 7, 2014 news item on Azonano,

All through his childhood, Ramesh Raskar wished fervently for eyes in the back of his head. “I had the notion that the world did not exist if I wasn’t looking at it, so I would constantly turn around to see if it was there behind me.” Although this head-spinning habit faded during his teen years, Raskar never lost the desire to possess the widest possible field of vision.

Today, as director of the Camera Culture research group and associate professor of Media Arts and Sciences at the MIT Media Lab, Raskar is realizing his childhood fantasy, and then some. His inventions include a nanocamera that operates at the speed of light and do-it-yourself tools for medical imaging. His scientific mission? “I want to create not just a new kind of vision, but superhuman vision,” Raskar says.

An Oct. 6, 2014 MIT news release, which originated the news item, provides more information about Raskar and his research,

He avoids research projects launched with a goal in mind, “because then you only come up with the same solutions as everyone else.” Discoveries tend to cascade from one area into another. For instance, Raskar’s novel computational methods for reducing motion blur in photography suggested new techniques for analyzing how light propagates. “We do matchmaking; what we do here can be used over there,” says Raskar.

Inspired by the famous microflash photograph of a bullet piercing an apple, created in 1964 by MIT professor and inventor Harold “Doc” Edgerton, Raskar realized, “I can do Edgerton millions of times faster.” This led to one of the Camera Culture group’s breakthrough inventions, femtophotography, a process for recording light in flight.

Manipulating photons into a packet resembling Edgerton’s bullet, Raskar and his team were able to “shoot” ultrashort laser pulses through a Coke bottle. Using a special camera to capture the action of these pulses at half a trillion frames per second with two-trillionths of a second exposure times, they captured moving images of light, complete with wave-like shadows lapping at the exterior of the bottle.
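The figures quoted above imply a striking physical scale: at half a trillion frames per second with two-picosecond exposures, light itself moves less than a millimetre between frames, which is why the wave-like shadows are resolvable at all. A quick sketch of that arithmetic (the speed of light is the only number I’ve added):

```python
# Rough scale check of the femtophotography figures quoted above.
c = 3.0e8            # speed of light in m/s (approximate)
frame_rate = 0.5e12  # "half a trillion frames per second"
exposure = 2.0e-12   # "two-trillionths of a second exposure times"

# Distance a light pulse travels during one exposure, and between frames
per_exposure_mm = c * exposure * 1000
per_frame_mm = c / frame_rate * 1000
print(round(per_exposure_mm, 3), round(per_frame_mm, 3))  # 0.6 0.6 (mm)
```

In other words, each frame catches the pulse after roughly 0.6 mm of travel, fine enough to watch light crawl across a Coke bottle.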

Femtophotography opened up additional avenues of inquiry, as Raskar pondered what other features of the world superfast imaging processes might reveal. He was particularly intrigued by scattered light, the kind in evidence when fog creates the visual equivalent of “noise.”

In one experiment, Raskar’s team concealed an object behind a wall, out of camera view. By firing super-short laser bursts onto a surface nearby, and taking millions of exposures of light bouncing like a pinball around the scene, the group rendered a picture of the hidden object. They had effectively created a camera that peers around corners, an invention that might someday help emergency responders safely investigate a dangerous environment.

Raskar’s objective of “making the invisible visible” extends as well to the human body. The Camera Culture group has developed a technique for taking pictures of the eye using cellphone attachments, spawning inexpensive, patient-managed vision and disease diagnostics. Conventional photography has evolved from time-consuming film development to instantaneous digital snaps, and Raskar believes “the same thing will happen to medical imaging.” His research group intends “to break all the rules and be at the forefront. I think we’ll get there in the next few years,” he says.

Ultimately, Raskar predicts, imaging will serve as a catalyst of transformation in all dimensions of human life — change that can’t come soon enough for him. “I hate ordinary cameras,” he says. “They record only what I see. I want a camera that gives me a superhuman perspective.”

Following the link to the MIT news release will lead you to more information about Raskar and his work. You can also see and hear Raskar talk about his femtophotography in a 2012 TEDGlobal talk here.

Nanojuice in your gut

A July 7, 2014 news item on Azonano features a new technique that could help doctors better diagnose problems in the intestines (guts),

Located deep in the human gut, the small intestine is not easy to examine. X-rays, MRIs and ultrasound images provide snapshots but each suffers limitations. Help is on the way.

University at Buffalo [State University of New York] researchers are developing a new imaging technique involving nanoparticles suspended in liquid to form “nanojuice” that patients would drink. Upon reaching the small intestine, doctors would strike the nanoparticles with a harmless laser light, providing an unparalleled, non-invasive, real-time view of the organ.

A July 5, 2014 University at Buffalo news release (also on EurekAlert) by Cory Nealon, which originated the news item, describes some of the challenges associated with medical imaging of small intestines,

“Conventional imaging methods show the organ and blockages, but this method allows you to see how the small intestine operates in real time,” said corresponding author Jonathan Lovell, PhD, UB assistant professor of biomedical engineering. “Better imaging will improve our understanding of these diseases and allow doctors to more effectively care for people suffering from them.”

The average human small intestine is roughly 23 feet long and 1 inch thick. Sandwiched between the stomach and large intestine, it is where much of the digestion and absorption of food takes place. It is also where symptoms of irritable bowel syndrome, celiac disease, Crohn’s disease and other gastrointestinal illnesses occur.

To assess the organ, doctors typically require patients to drink a thick, chalky liquid called barium. Doctors then use X-rays, magnetic resonance imaging and ultrasounds to assess the organ, but these techniques are limited with respect to safety, accessibility and lack of adequate contrast, respectively.

Also, none are highly effective at providing real-time imaging of movement such as peristalsis, which is the contraction of muscles that propels food through the small intestine. Dysfunction of these movements may be linked to the previously mentioned illnesses, as well as side effects of thyroid disorders, diabetes and Parkinson’s disease.

The news release goes on to describe how the researchers manipulated dyes that are usually unsuitable for the purpose of imaging an organ in the body,

Lovell and a team of researchers worked with a family of dyes called naphthalocyanines. These small molecules absorb large portions of light in the near-infrared spectrum, which is the ideal range for biological contrast agents.

They are unsuitable for the human body, however, because they don’t disperse in liquid and they can be absorbed from the intestine into the blood stream.

To address these problems, the researchers formed nanoparticles called “nanonaps” that contain the colorful dye molecules and added the abilities to disperse in liquid and move safely through the intestine.

In laboratory experiments performed with mice, the researchers administered the nanojuice orally. They then used photoacoustic tomography (PAT), in which pulsed laser light generates pressure waves that, when measured, provide a real-time and more nuanced view of the small intestine.

The researchers plan to continue to refine the technique for human trials, and move into other areas of the gastrointestinal tract.

Here’s an image of the nanojuice in the guts of a mouse,

The combination of “nanojuice” and photoacoustic tomography illuminates the intestine of a mouse. (Credit: Jonathan Lovell)

This is an international collaboration both from a research perspective and a funding perspective (from the news release),

Additional authors of the study come from UB’s Department of Chemical and Biological Engineering, Pohang University of Science and Technology in Korea, Roswell Park Cancer Institute in Buffalo, the University of Wisconsin-Madison, and McMaster University in Canada.

The research was supported by grants from the National Institutes of Health, the Department of Defense and the Korean Ministry of Science, ICT and Future Planning.

Here’s a link to and a citation for the paper,

Non-invasive multimodal functional imaging of the intestine with frozen micellar naphthalocyanines by Yumiao Zhang, Mansik Jeon, Laurie J. Rich, Hao Hong, Jumin Geng, Yin Zhang, Sixiang Shi, Todd E. Barnhart, Paschalis Alexandridis, Jan D. Huizinga, Mukund Seshadri, Weibo Cai, Chulhong Kim, & Jonathan F. Lovell. Nature Nanotechnology (2014). DOI: 10.1038/nnano.2014.130. Published online 06 July 2014.

This paper is behind a paywall.