
Artificial intelligence (AI) brings together International Telecommunication Union (ITU) and World Health Organization (WHO) and AI outperforms animal testing

Following on my May 11, 2018 posting about the International Telecommunication Union (ITU) and the 2018 AI for Good Global Summit in mid-May, there’s an announcement. My other bit of AI news concerns animal testing.

Leveraging the power of AI for health

A July 24, 2018 ITU press release (a shorter version was received via email) announces a joint initiative focused on improving health,

Two United Nations specialized agencies are joining forces to expand the use of artificial intelligence (AI) in the health sector to a global scale, and to leverage the power of AI to advance health for all worldwide. The International Telecommunication Union (ITU) and the World Health Organization (WHO) will work together through the newly established ITU Focus Group on AI for Health to develop an international “AI for health” standards framework and to identify use cases of AI in the health sector that can be scaled-up for global impact. The group is open to all interested parties.

“AI could help patients to assess their symptoms, enable medical professionals in underserved areas to focus on critical cases, and save great numbers of lives in emergencies by delivering medical diagnoses to hospitals before patients arrive to be treated,” said ITU Secretary-General Houlin Zhao. “ITU and WHO plan to ensure that such capabilities are available worldwide for the benefit of everyone, everywhere.”

The demand for such a platform was first identified by participants of the second AI for Good Global Summit held in Geneva, 15-17 May 2018. During the summit, AI and the health sector were recognized as a very promising combination, and it was announced that AI-powered technologies such as skin disease recognition and diagnostic applications based on symptom questions could be deployed on six billion smartphones by 2021.

The ITU Focus Group on AI for Health is coordinated through ITU’s Telecommunications Standardization Sector – which works with ITU’s 193 Member States and more than 800 industry and academic members to establish global standards for emerging ICT innovations. It will lead an intensive two-year analysis of international standardization opportunities towards delivery of a benchmarking framework of international standards and recommendations by ITU and WHO for the use of AI in the health sector.

“I believe the subject of AI for health is both important and useful for advancing health for all,” said WHO Director-General Tedros Adhanom Ghebreyesus.

The ITU Focus Group on AI for Health will also engage researchers, engineers, practitioners, entrepreneurs and policy makers to develop guidance documents for national administrations, to steer the creation of policies that ensure the safe, appropriate use of AI in the health sector.

“1.3 billion people have a mobile phone and we can use this technology to provide AI-powered health data analytics to people with limited or no access to medical care. AI can enhance health by improving medical diagnostics and associated health intervention decisions on a global scale,” said Thomas Wiegand, ITU Focus Group on AI for Health Chairman, and Executive Director of the Fraunhofer Heinrich Hertz Institute, as well as professor at TU Berlin.

He added, “The health sector is in many countries among the largest economic sectors or one of the fastest-growing, signalling a particularly timely need for international standardization of the convergence of AI and health.”

Data analytics are certain to form a large part of the ITU focus group’s work. AI systems are proving increasingly adept at interpreting laboratory results and medical imagery and extracting diagnostically relevant information from text or complex sensor streams.

As part of this, the ITU Focus Group for AI for Health will also produce an assessment framework to standardize the evaluation and validation of AI algorithms — including the identification of structured and normalized data to train AI algorithms. It will develop open benchmarks with the aim of these becoming international standards.

The ITU Focus Group for AI for Health will report to the ITU standardization expert group for multimedia, Study Group 16.

I got curious about Study Group 16 (from the Study Group 16 at a glance webpage),

Study Group 16 leads ITU’s standardization work on multimedia coding, systems and applications, including the coordination of related studies across the various ITU-T SGs. It is also the lead study group on ubiquitous and Internet of Things (IoT) applications; telecommunication/ICT accessibility for persons with disabilities; intelligent transport system (ITS) communications; e-health; and Internet Protocol television (IPTV).

Multimedia is at the core of the most recent advances in information and communication technologies (ICTs) – especially when we consider that most innovation today is agnostic of the transport and network layers, focusing rather on the higher OSI model layers.

SG16 is active in all aspects of multimedia standardization, including terminals, architecture, protocols, security, mobility, interworking and quality of service (QoS). It focuses its studies on telepresence and conferencing systems; IPTV; digital signage; speech, audio and visual coding; network signal processing; PSTN modems and interfaces; facsimile terminals; and ICT accessibility.

I wonder which group deals with artificial intelligence and, possibly, robots.

Chemical testing without animals

Thomas Hartung, professor of environmental health and engineering at Johns Hopkins University (US), describes the situation where chemical testing is concerned in his July 25, 2018 essay (written for The Conversation and republished on phys.org),

Most consumers would be dismayed with how little we know about the majority of chemicals. Only 3 percent of industrial chemicals – mostly drugs and pesticides – are comprehensively tested. Most of the 80,000 to 140,000 chemicals in consumer products have not been tested at all or just examined superficially to see what harm they may do locally, at the site of contact and at extremely high doses.

I am a physician and former head of the European Center for the Validation of Alternative Methods of the European Commission (2002-2008), and I am dedicated to finding faster, cheaper and more accurate methods of testing the safety of chemicals. To that end, I now lead a new program at Johns Hopkins University to revamp the safety sciences.

As part of this effort, we have now developed a computer method of testing chemicals that could save more than US$1 billion annually and more than 2 million animals. Especially in times where the government is rolling back regulations on the chemical industry, new methods to identify dangerous substances are critical for human and environmental health.

Having written on the topic of alternatives to animal testing on a number of occasions (my December 26, 2014 posting provides an overview of sorts), I was particularly interested to see this in Hartung’s July 25, 2018 essay on The Conversation (Note: Links have been removed),

Following the vision of Toxicology for the 21st Century, a movement led by U.S. agencies to revamp safety testing, important work was carried out by my Ph.D. student Tom Luechtefeld at the Johns Hopkins Center for Alternatives to Animal Testing. Teaming up with Underwriters Laboratories, we have now leveraged an expanded database and machine learning to predict toxic properties. As we report in the journal Toxicological Sciences, we developed a novel algorithm and database for analyzing chemicals and determining their toxicity – what we call read-across structure activity relationship, RASAR.

[Image caption from Hartung’s essay: This graphic reveals a small part of the chemical universe. Each dot represents a different chemical. Chemicals that are close together have similar structures and often properties. Thomas Hartung, CC BY-SA]

To do this, we first created an enormous database with 10 million chemical structures by adding more public databases filled with chemical data, which, if you crunch the numbers, represent 50 trillion pairs of chemicals. A supercomputer then created a map of the chemical universe, in which chemicals are positioned close together if they share many structures in common and far where they don’t. Most of the time, any molecule close to a toxic molecule is also dangerous. Even more likely if many toxic substances are close, harmless substances are far. Any substance can now be analyzed by placing it into this map.

If this sounds simple, it’s not. It requires half a billion mathematical calculations per chemical to see where it fits. The chemical neighborhood focuses on 74 characteristics which are used to predict the properties of a substance. Using the properties of the neighboring chemicals, we can predict whether an untested chemical is hazardous. For example, for predicting whether a chemical will cause eye irritation, our computer program not only uses information from similar chemicals, which were tested on rabbit eyes, but also information for skin irritation. This is because what typically irritates the skin also harms the eye.

How well does the computer identify toxic chemicals?

This method will be used for new untested substances. However, if you do this for chemicals for which you actually have data, and compare prediction with reality, you can test how well this prediction works. We did this for 48,000 chemicals that were well characterized for at least one aspect of toxicity, and we found the toxic substances in 89 percent of cases.

This is clearly more accurate than the corresponding animal tests, which only yield the correct answer 70 percent of the time. The RASAR shall now be formally validated by an interagency committee of 16 U.S. agencies, including the EPA [Environmental Protection Agency] and FDA [Food and Drug Administration], that will challenge our computer program with chemicals for which the outcome is unknown. This is a prerequisite for acceptance and use in many countries and industries.

The potential is enormous: The RASAR approach is in essence based on chemical data that was registered for the 2010 and 2013 REACH [Registration, Evaluation, Authorizations and Restriction of Chemicals] deadlines [in Europe]. If our estimates are correct and chemical producers would have not registered chemicals after 2013, and instead used our RASAR program, we would have saved 2.8 million animals and $490 million in testing costs – and received more reliable data. We have to admit that this is a very theoretical calculation, but it shows how valuable this approach could be for other regulatory programs and safety assessments.

In the future, a chemist could check RASAR before even synthesizing their next chemical to check whether the new structure will have problems. Or a product developer can pick alternatives to toxic substances to use in their products. This is a powerful technology, which is only starting to show all its potential.

It’s been my experience that claims of having led a movement (Toxicology for the 21st Century) are often contested, with many others competing for the title of ‘leader’ or ‘first’. That said, this RASAR approach seems very exciting, especially in light of the skepticism about limiting and/or making animal testing unnecessary noted in my December 26, 2014 posting; that skepticism came from someone I thought knew better.
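Since the essay stays at the level of metaphor, here is a rough sketch, mine and not the paper’s, of what a similarity-based read-across prediction can look like in code. The fingerprints, labels, neighbour count and weighting scheme are all illustrative stand-ins; the published RASAR works with roughly 10 million structures (about 50 trillion chemical pairs) and supervised learning built on top of the similarity features.

```python
# Toy "read-across" sketch: predict a chemical's toxicity from its nearest
# structural neighbours. Illustrative only; not the published RASAR code.
import numpy as np

rng = np.random.default_rng(0)

n_chemicals, n_bits = 5_000, 74          # 74 structural features, as in the essay
library = rng.integers(0, 2, size=(n_chemicals, n_bits)).astype(bool)
is_toxic = rng.integers(0, 2, size=n_chemicals).astype(bool)   # known outcomes (random here)

def tanimoto(query, fingerprints):
    """Tanimoto (Jaccard) similarity between one fingerprint and a whole library."""
    intersection = np.logical_and(query, fingerprints).sum(axis=1)
    union = np.logical_or(query, fingerprints).sum(axis=1)
    return intersection / np.maximum(union, 1)

def read_across(query, fingerprints, labels, k=10):
    """Similarity-weighted vote of the k most similar chemicals with known outcomes."""
    similarities = tanimoto(query, fingerprints)
    nearest = np.argsort(similarities)[-k:]      # indices of the k most similar chemicals
    weights = similarities[nearest] + 1e-9       # tiny epsilon avoids an all-zero weight vector
    score = np.average(labels[nearest], weights=weights)
    return score > 0.5, score

new_chemical = rng.integers(0, 2, size=n_bits).astype(bool)
toxic, score = read_across(new_chemical, library, is_toxic)
print(f"predicted toxic: {toxic} (weighted neighbour score {score:.2f})")
```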

Here’s a link to and a citation for the paper mentioned in Hartung’s essay,

Machine learning of toxicological big data enables read-across structure activity relationships (RASAR) outperforming animal test reproducibility by Thomas Luechtefeld, Dan Marsh, Craig Rowlands, Thomas Hartung. Toxicological Sciences, kfy152, https://doi.org/10.1093/toxsci/kfy152 Published: 11 July 2018

This paper is open access.

Xenotransplantation—organs for transplantation in human patients—it’s a business and a science

The last time (June 18, 2018 post) I mentioned xenotransplantation (transplanting organs from one species into another; see more here), it was in the context of an art/sci (or sciart) event coming to Vancouver (Canada),

[Image: Patricia Piccinini’s Curious Imaginings. Courtesy: Vancouver Biennale; downloaded from http://dailyhive.com/vancouver/vancouver-biennale-unsual-public-art-2018/]

The latest edition of the Vancouver Biennale was featured in a June 6, 2018 news item on the Daily Hive (Vancouver),

Melbourne artist Patricia Piccinini’s Curious Imaginings is expected to be one of the most talked about installations of the exhibit. Her style of “oddly captivating, somewhat grotesque, human-animal hybrid creature” is meant to be shocking and thought-provoking.

Piccinini’s interactive [emphasis mine] experience will “challenge us to explore the social impacts of emerging biotechnology and our ethical limits in an age where genetic engineering and digital technologies are already pushing the boundaries of humanity.”

Piccinini’s work will be displayed in the 105-year-old Patricia Hotel in Vancouver’s Strathcona neighbourhood. The 90-day ticketed exhibition [emphasis mine] is scheduled to open this September [2018].

(The show opens on Sept. 14, 2018.)

At the time, I had yet to stumble across Ingfei Chen’s thoughtful dive into the topic in her May 9, 2018 article for Slate.com,

In the United States, the clock is ticking for more than 114,700 adults and children waiting for a donated kidney or other lifesaving organ, and each day, nearly 20 of them die. Researchers are devising a new way to grow human organs inside other animals, but the method raises potentially thorny ethical issues. Other conceivable futuristic techniques sound like dystopian science fiction. As we envision an era of regenerative medicine decades from now, how far is society willing to go to solve the organ shortage crisis?

I found myself pondering this question after a discussion about the promises of stem cell technologies veered from the intriguing into the bizarre. I was interviewing bioengineer Zev Gartner, co-director and research coordinator of the Center for Cellular Construction at the University of California, San Francisco, about so-called organoids, tiny clumps of organlike tissue that can self-assemble from human stem cells in a Petri dish. These tissue bits are lending new insights into how our organs form and diseases take root. Some researchers even hope they can nurture organoids into full-size human kidneys, pancreases, and other organs for transplantation.

Certain organoid experiments have recently set off alarm bells, but when I asked Gartner about it, his radar for moral concerns was focused elsewhere. For him, the “really, really thought-provoking” scenarios involve other emerging stem cell–based techniques for engineering replacement organs for people, he told me. “Like blastocyst complementation,” he said.

Never heard of it? Neither had I. Turns out it’s a powerful new genetic engineering trick that researchers hope to use for growing human organs inside pigs or sheep—organs that could be genetically personalized for transplant patients, in theory avoiding immune-system rejection problems. The science still has many years to go, but if it pans out, it could be one solution to the organ shortage crisis. However, the prospect of creating hybrid animals with human parts and killing them to harvest organs has already raised a slew of ethical questions. In 2015, the National Institutes of Health placed a moratorium on federal funding of this nascent research area while it evaluated and discussed the issues.

As Gartner sees it, the debate over blastocyst complementation research—work that he finds promising—is just one of many conversations that society needs to have about the ethical and social costs and benefits of future technologies for making lifesaving transplant organs. “There’s all these weird ways that we could go about doing this,” he said, with a spectrum of imaginable approaches that includes organoids, interspecies organ farming, and building organs from scratch using 3D bioprinters. But even if it turns out we can produce human organs in these novel ways, the bigger issue, in each technological instance, may be whether we should.

Gartner crystallized things with a downright creepy example: “We know that the best bioreactor for tissues and organs for humans are human beings,” he said. Hypothetically, “the best way to get you a new heart would be to clone you, grow up a copy of yourself, and take the heart out.” [emphasis mine] Scientists could probably produce a cloned person with the technologies we already have, if money and ethics were of no concern. “But we don’t want to go there, right?” he added in the next breath. “The ethics involved in doing it are not compatible with who we want to be as a society.”

This sounds like Gartner may have been reading some science fiction, specifically, Lois McMaster Bujold and her Barrayar series where she often explored the ethics and possibilities of bioengineering. At this point, some of her work seems eerily prescient.

As for Chen’s article, I strongly encourage you to read it in its entirety if you have the time.

Medicine, healing, and big money

At about the same time, there was a May 31, 2018 news item on phys.org offering a perspective from some of the leaders in the science and the business (Note: Links have been removed),

Over the past few years, researchers led by George Church have made important strides toward engineering the genomes of pigs to make their cells compatible with the human body. So many think that it’s possible that, with the help of CRISPR technology, a healthy heart for a patient in desperate need might one day come from a pig.

“It’s relatively feasible to change one gene in a pig, but to change many dozens—which is quite clear is the minimum here—benefits from CRISPR,” an acronym for clustered regularly interspaced short palindromic repeats, said Church, the Robert Winthrop Professor of Genetics at Harvard Medical School (HMS) and a core faculty member of Harvard’s Wyss Institute for Biologically Inspired Engineering. Xenotransplantation is “one of few” big challenges (along with gene drives and de-extinction, he said) “that really requires the ‘oomph’ of CRISPR.”

To facilitate the development of safe and effective cells, tissues, and organs for future medical transplantation into human patients, Harvard’s Office of Technology Development has granted a technology license to the Cambridge biotech startup eGenesis.

Co-founded by Church and former HMS doctoral student Luhan Yang in 2015, eGenesis announced last year that it had raised $38 million to advance its research and development work. At least eight former members of the Church lab—interns, doctoral students, postdocs, and visiting researchers—have continued their scientific careers as employees there.

“The Church Lab is well known for its relentless pursuit of scientific achievements so ambitious they seem improbable—and, indeed, [for] its track record of success,” said Isaac Kohlberg, Harvard’s chief technology development officer and senior associate provost. “George deserves recognition too for his ability to inspire passion and cultivate a strong entrepreneurial drive among his talented research team.”

The license from Harvard OTD covers a powerful set of genome-engineering technologies developed at HMS and the Wyss Institute, including access to foundational intellectual property relating to the Church Lab’s 2012 breakthrough use of CRISPR, led by Yang and Prashant Mali, to edit the genome of human cells. Subsequent innovations that enabled efficient and accurate editing of numerous genes simultaneously are also included. The license is exclusive to eGenesis but limited to the field of xenotransplantation.

A May 30, 2018 Harvard University news release by Caroline Petty, which originated the news item, explores some of the issues associated with incubating human organs in other species,

The prospect of using living, nonhuman organs, and concerns over the infectiousness of pathogens either present in the tissues or possibly formed in combination with human genetic material, have prompted the Food and Drug Administration to issue detailed guidance on xenotransplantation research and development since the mid-1990s. In pigs, a primary concern has been that porcine endogenous retroviruses (PERVs), strands of potentially pathogenic DNA in the animals’ genomes, might infect human patients and eventually cause disease. [emphases mine]

That’s where the Church lab’s CRISPR expertise has enabled significant advances. In 2015, the lab published important results in the journal Science, successfully demonstrating the use of genome engineering to eliminate all 62 PERVs in porcine cells. Science later called it “the most widespread CRISPR editing feat to date.”

In 2017, with collaborators at Harvard, other universities, and eGenesis, Church and Yang went further. Publishing again in Science, they first confirmed earlier researchers’ fears: Porcine cells can, in fact, transmit PERVs into human cells, and those human cells can pass them on to other, unexposed human cells. (It is still unknown under what circumstances those PERVs might cause disease.) In the same paper, they corrected the problem, announcing the embryogenesis and birth of 37 PERV-free pigs. [Note: My July 17, 2018 post features research which suggests CRISPR-Cas9 gene editing may cause greater genetic damage than had been thought.]

“Taken together, those innovations were stunning,” said Vivian Berlin, director of business development in OTD, who manages the commercialization strategy for much of Harvard’s intellectual property in the life sciences. “That was the foundation they needed, to convince both the scientific community and the investment community that xenotransplantation might become a reality.”

“After hundreds of tests, this was a critical milestone for eGenesis — and the entire field — and represented a key step toward safe organ transplantation from pigs,” said Julie Sunderland, interim CEO of eGenesis. “Building on this study, we hope to continue to advance the science and potential of making xenotransplantation a safe and routine medical procedure.”

Genetic engineering may undercut human diseases, but also could help restore extinct species, researcher says. [Shades of the Jurassic Park movies!]

It’s not, however, the end of the story: An immunological challenge remains, which eGenesis will need to address. The potential for a patient’s body to outright reject transplanted tissue has stymied many previous attempts at xenotransplantation. Church said numerous genetic changes must be achieved to make porcine organs fully compatible with human patients. Among these are edits to several immune functions, coagulation functions, complements, and sugars, as well as the PERVs.

“Trying the straight transplant failed almost immediately, within hours, because there’s a huge mismatch in the carbohydrates on the surface of the cells, in particular alpha-1-3-galactose, and so that was a showstopper,” Church explained. “When you delete that gene, which you can do with conventional methods, you still get pretty fast rejection, because there are a lot of other aspects that are incompatible. You have to take care of each of them, and not all of them are just about removing things — some of them you have to humanize. There’s a great deal of subtlety involved so that you get normal pig embryogenesis but not rejection.

“Putting it all together into one package is challenging,” he concluded.

In short, it’s the next big challenge for CRISPR.

Not unexpectedly, there is no mention of the CRISPR patent fight between Harvard/MIT’s (Massachusetts Institute of Technology) Broad Institute and the University of California at Berkeley (UC Berkeley). My March 15, 2017 posting featured an outcome where the Broad Institute won the first round of the fight. As I recall, it was a decision based on the principles associated with King Solomon, i.e., the US Patent Office divided the baby and UC Berkeley got the less important part. As you might expect, the decision has been appealed. In an April 30, 2018 piece, Scientific American reprinted an article about the latest round in the fight, written by Sharon Begley for STAT (Note: Links have been removed),

All You Need to Know for Round 2 of the CRISPR Patent Fight

It’s baaaaack, that reputation-shredding, stock-moving fight to the death over key CRISPR patents. On Monday morning in Washington, D.C., the U.S. Court of Appeals for the Federal Circuit will hear oral arguments in University of California v. Broad Institute. Questions?

How did we get here? The patent office ruled in February 2017 that the Broad’s 2014 CRISPR patent on using CRISPR-Cas9 to edit genomes, based on discoveries by Feng Zhang, did not “interfere” with a patent application by UC based on the work of UC Berkeley’s Jennifer Doudna. In plain English, that meant the Broad’s patent, on using CRISPR-Cas9 to edit genomes in eukaryotic cells (all animals and plants, but not bacteria), was different from UC’s, which described Doudna’s experiments using CRISPR-Cas9 to edit DNA in a test tube—and it was therefore valid. The Patent Trial and Appeal Board concluded that when Zhang got CRISPR-Cas9 to work in human and mouse cells in 2012, it was not an obvious extension of Doudna’s earlier research, and that he had no “reasonable expectation of success.” UC appealed, and here we are.

For anyone who may not realize what the stakes are for these institutions, Linda Williams in a March 16, 1999 article for the LA Times had this to say about universities, patents, and money,

The University of Florida made about $2 million last year in royalties on a patent for Gatorade Thirst Quencher, a sports drink that generates some $500 million to $600 million a year in revenue for Quaker Oats Co.

The payments place the university among the top five in the nation in income from patent royalties.

Oh, but if some people on the Gainesville, Fla., campus could just turn back the clock. “If we had done Gatorade right, we would be getting $5 or $6 million (a year),” laments Donald Price, director of the university’s office of corporate programs. “It is a classic example of how not to handle a patent idea,” he added.

Gatorade was developed in 1965 when many universities were ill equipped to judge the commercial potential of ideas emerging from their research labs. Officials blew the university’s chance to control the Gatorade royalties when they declined to develop a professor’s idea.

The Gatorade story does not stop there and, even though it’s almost 20 years old, this article stands the test of time. I strongly encourage you to read it if the business end of patents and academia interest you or if you would like to develop more insight into the Broad Institute/UC Berkeley situation.

Getting back to the science, there is that pesky matter of diseases crossing over from one species to another. While Harvard and eGenesis claim a victory in this area, it seems more work needs to be done.

Infections from pigs

An August 29, 2018 University of Alabama at Birmingham news release (also on EurekAlert) by Jeff Hansen describes the latest chapter in the quest to provide more organs for transplantation,

A shortage of organs for transplantation — including kidneys and hearts — means that many patients die while still on waiting lists. So, research at the University of Alabama at Birmingham and other sites has turned to pig organs as an alternative. [emphasis mine]

Using gene-editing, researchers have modified such organs to prevent rejection, and research with primates shows the modified pig organs are well-tolerated.

An added step is needed to ensure the safety of these inter-species transplants — sensitive, quantitative assays for viruses and other infectious microorganisms in donor pigs that potentially could gain access to humans during transplantation.

The U.S. Food and Drug Administration requires such testing, prior to implantation, of tissues used for xenotransplantation from animals to humans. It is possible — though very unlikely — that an infectious agent in transplanted tissues could become an emerging infectious disease in humans.

In a paper published in Xenotransplantation, Mark Prichard, Ph.D., and colleagues at UAB have described the development and testing of 30 quantitative assays for pig infectious agents. These assays had sensitivities similar to clinical lab assays for viral loads in human patients. After validation, the UAB team also used the assays on nine sows and 22 piglets delivered from the sows through caesarian section.

“Going forward, ensuring the safety of these organs is of paramount importance,” Prichard said. “The use of highly sensitive techniques to detect potential pathogens will help to minimize adverse events in xenotransplantation.”

“The assays hold promise as part of the screening program to identify suitable donor animals, validate and release transplantable organs for research purposes, and monitor transplant recipients,” said Prichard, a professor in the UAB Department of Pediatrics and director of the Department of Pediatrics Molecular Diagnostics Laboratory.

The UAB researchers developed quantitative polymerase chain reaction, or qPCR, assays for 28 viruses sometimes found in pigs and two groups of mycoplasmas. They established reproducibility, sensitivity, specificity and lower limit of detection for each assay. All but three showed features of good quantitative assays, and the lower limit of detection values ranged between one and 16 copies of the viral or bacterial genetic material.

Also, the pig virus assays did not give false positives for some closely related human viruses.

As a start to understanding the infectious disease load in normal healthy animals and ensuring the safety of pig tissues used in xenotransplantation research, the researchers then screened blood, nasal swab and stool specimens from nine adult sows and 22 of their piglets delivered by caesarian section.

Mycoplasma species and two distinct herpesviruses were the most commonly detected microorganisms. Yet 14 piglets that were delivered from three sows infected with either or both herpesviruses were not infected with the herpesviruses, showing that transmission of these viruses from sow to the caesarian-delivery piglet was inefficient.

Prichard says the assays promise to enhance the safety of pig tissues for xenotransplantation, and they will also aid evaluation of human specimens after xenotransplantation.

The UAB researchers say they subsequently have evaluated more than 300 additional specimens, and that resulted in the detection of most of the targets. “The detection of these targets in pig specimens provides reassurance that the analytical methods are functioning as designed,” said Prichard, “and there is no a priori reason some targets might be more difficult to detect than others with the methods described here.”
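The release mentions sensitivity, specificity and limits of detection without defining them; for anyone who wants the concrete version, here is a minimal sketch, not the UAB lab’s code and with made-up numbers, of how the first two figures are calculated from test results on samples whose true status is already known.

```python
# Minimal illustration of assay sensitivity and specificity (hypothetical counts).
def sensitivity_specificity(true_pos, false_neg, true_neg, false_pos):
    """Fraction of infected samples detected, and of clean samples correctly cleared."""
    sensitivity = true_pos / (true_pos + false_neg)
    specificity = true_neg / (true_neg + false_pos)
    return sensitivity, specificity

# Hypothetical validation run for one assay: 98 of 100 spiked samples detected,
# 199 of 200 uninfected samples correctly reported negative.
sens, spec = sensitivity_specificity(true_pos=98, false_neg=2, true_neg=199, false_pos=1)
print(f"sensitivity {sens:.1%}, specificity {spec:.1%}")
```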

As is my custom, here’s a link to and a citation for the paper,

Xenotransplantation panel for the detection of infectious agents in pigs by Caroll B. Hartline, Ra’Shun L. Conner, Scott H. James, Jennifer Potter, Edward Gray, Jose Estrada, Mathew Tector, A. Joseph Tector, Mark N. Prichard. Xenotransplantation, Volume 25, Issue 4, July/August 2018, e12427. DOI: https://doi.org/10.1111/xen.12427 First published: 18 August 2018

This paper is open access.

All this leads to questions about chimeras. If a pig is incubating organs with human cells, it’s a chimera, but that also means the human receiving the organ becomes a chimera too. (For an example, see my Dec. 22, 2013 posting where there’s mention of a woman who received a trachea from a pig. Scroll down about 30% of the way.)

What is it to be human?

A question much beloved of philosophers and others, it seems particularly timely given xenotransplantation and other developments such as neuroprosthetics (cyborgs) and neuromorphic computing (brainlike computing).

As I’ve noted before, although not recently, popular culture offers a discourse on these issues. Take a look at the superhero movies and the way in which enhanced humans and aliens are presented. For example, X-Men comics and movies present mutants (humans with enhanced abilities) as despised and rejected. Video games (not really my thing) also offer insight into these issues; the Deus Ex series, for example, has a cyborg as its hero.

Other than in popular culture and the ‘bleeding edge’ arts community, I can’t recall any public discussion of these matters arising from the extraordinary set of technologies which are being deployed or prepared for deployment in the foreseeable future.

(If you’re in Vancouver (Canada) from September 14 – December 15, 2018, you may want to check out Piccinini’s work. Also, from my Sept. 6, 2018 posting: “NCSU [North Carolina State University] Libraries, NC State’s Genetic Engineering and Society (GES) Center, and the Gregg Museum of Art & Design have issued a public call for art for the upcoming exhibition Art’s Work in the Age of Biotechnology: Shaping our Genetic Futures.” Deadline: Oct. 1, 2018.)

At a guess, there will be pushback from people who have no interest in debating what it is to be human as they already know, and will find these developments, when they learn about them, to be horrifying and unnatural.

Being smart about using artificial intelligence in the field of medicine

Since my August 20, 2018 post featured an opinion piece about the possibly imminent replacement of radiologists with artificial intelligence systems and the latest research about employing them for diagnosing eye diseases, it seems like a good time to examine some of the mythology embedded in the discussion about AI and medicine.

Imperfections in medical AI systems

An August 15, 2018 article for Slate.com by W. Nicholson Price II (who teaches at the University of Michigan School of Law; in addition to his law degree he has a PhD in Biological Sciences from Columbia University) begins with the peppy, optimistic view before veering into more critical territory (Note: Links have been removed),

For millions of people suffering from diabetes, new technology enabled by artificial intelligence promises to make management much easier. Medtronic’s Guardian Connect system promises to alert users 10 to 60 minutes before they hit high or low blood sugar level thresholds, thanks to IBM Watson, “the same supercomputer technology that can predict global weather patterns.” Startup Beta Bionics goes even further: In May, it received Food and Drug Administration approval to start clinical trials on what it calls a “bionic pancreas system” powered by artificial intelligence, capable of “automatically and autonomously managing blood sugar levels 24/7.”

An artificial pancreas powered by artificial intelligence represents a huge step forward for the treatment of diabetes—but getting it right will be hard. Artificial intelligence (also known in various iterations as deep learning and machine learning) promises to automatically learn from patterns in medical data to help us do everything from managing diabetes to finding tumors in an MRI to predicting how long patients will live. But the artificial intelligence techniques involved are typically opaque. We often don’t know how the algorithm makes the eventual decision. And they may change and learn from new data—indeed, that’s a big part of the promise. But when the technology is complicated, opaque, changing, and absolutely vital to the health of a patient, how do we make sure it works as promised?

Price describes how a ‘closed loop’ artificial pancreas with AI would automate insulin levels for diabetic patients, flaws in the automated system, and how companies like to maintain a competitive advantage (Note: Links have been removed),

[…] a “closed loop” artificial pancreas, where software handles the whole issue, receiving and interpreting signals from the monitor, deciding when and how much insulin is needed, and directing the insulin pump to provide the right amount. The first closed-loop system was approved in late 2016. The system should take as much of the issue off the mind of the patient as possible (though, of course, that has limits). Running a closed-loop artificial pancreas is challenging. The way people respond to changing levels of carbohydrates is complicated, as is their response to insulin; it’s hard to model accurately. Making it even more complicated, each individual’s body reacts a little differently.

Here’s where artificial intelligence comes into play. Rather than trying explicitly to figure out the exact model for how bodies react to insulin and to carbohydrates, machine learning methods, given a lot of data, can find patterns and make predictions. And existing continuous glucose monitors (and insulin pumps) are excellent at generating a lot of data. The idea is to train artificial intelligence algorithms on vast amounts of data from diabetic patients, and to use the resulting trained algorithms to run a closed-loop artificial pancreas. Even more exciting, because the system will keep measuring blood glucose, it can learn from the new data and each patient’s artificial pancreas can customize itself over time as it acquires new data from that patient’s particular reactions.

Here’s the tough question: How will we know how well the system works? Diabetes software doesn’t exactly have the best track record when it comes to accuracy. A 2015 study found that among smartphone apps for calculating insulin doses, two-thirds of the apps risked giving incorrect results, often substantially so. … And companies like to keep their algorithms proprietary for a competitive advantage, which makes it hard to know how they work and what flaws might have gone unnoticed in the development process.

There’s more,

These issues aren’t unique to diabetes care—other A.I. algorithms will also be complicated, opaque, and maybe kept secret by their developers. The potential for problems multiplies when an algorithm is learning from data from an entire hospital, or hospital system, or the collected data from an entire state or nation, not just a single patient. …

The [US Food and Drug Administration] FDA is working on this problem. The head of the agency has expressed his enthusiasm for bringing A.I. safely into medical practice, and the agency has a new Digital Health Innovation Action Plan to try to tackle some of these issues. But they’re not easy, and one thing making it harder is a general desire to keep the algorithmic sauce secret. The example of IBM Watson for Oncology has given the field a bit of a recent black eye—it turns out that the company knew the algorithm gave poor recommendations for cancer treatment but kept that secret for more than a year. …

While Price focuses on problems with algorithms and with developers and their business interests, he also hints at some of the body’s complexities.
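To make the closed-loop idea concrete, here is a deliberately naive sketch of the loop Price describes: read the recent glucose history, forecast where it is heading, and dose insulin toward a target. This is not any vendor’s algorithm; the constants and the one-line forecast are placeholders.

```python
# Toy closed-loop controller: sensor history in, suggested insulin dose out.
# Constants and model are illustrative, not clinical values.
TARGET_GLUCOSE = 110      # mg/dL
INSULIN_SENSITIVITY = 50  # mg/dL drop per unit of insulin (varies widely by person)

def predict_glucose(history):
    """Naive forecast: extrapolate the most recent trend one step ahead."""
    trend = history[-1] - history[-2]
    return history[-1] + trend

def suggested_dose(history):
    """Dose only when the forecast is above target; never suggest a negative dose."""
    excess = max(predict_glucose(history) - TARGET_GLUCOSE, 0)
    return excess / INSULIN_SENSITIVITY

readings = [120, 135, 152]   # last three sensor readings, mg/dL
print(f"suggested dose: {suggested_dose(readings):.2f} units")
```

The machine-learning systems Price writes about essentially swap the naive forecast for a model trained on the patient’s own data, which is exactly where the opacity he worries about comes in.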

Can AI systems be like people?

Susan Baxter, a medical writer with over 20 years’ experience, a PhD in health economics, and the author of countless magazine articles and several books, offers a more person-centered approach to the discussion in her July 6, 2018 posting on susanbaxter.com,

The fascination with AI continues to irk, given that every second thing I read seems to be extolling the magic of AI and medicine and how It Will Change Everything. Which it will not, trust me. The essential issue of illness remains perennial and revolves around an individual for whom no amount of technology will solve anything without human contact. …

But in this world, or so we are told by AI proponents, radiologists will soon be obsolete. [my August 20, 2018 post] The adaptational learning capacities of AI mean that reading a scan or x-ray will soon be more ably done by machines than humans. The presupposition here is that we, the original programmers of this artificial intelligence, understand the vagaries of real life (and real disease) so wonderfully that we can deconstruct these much as we do the game of chess (where, let’s face it, Big Blue ate our lunch) and that analyzing a two-dimensional image of a three-dimensional body, already problematic, can be reduced to a series of algorithms.

Attempting to extrapolate what some “shadow” on a scan might mean in a flesh and blood human isn’t really quite the same as bishop to knight seven. Never mind the false positive/negatives that are considered an acceptable risk or the very real human misery they create.

Moravec called it

It’s called Moravec’s paradox, the inability of humans to realize just how complex basic physical tasks are – and the corresponding inability of AI to mimic it. As you walk across the room, carrying a glass of water, talking to your spouse/friend/cat/child; place the glass on the counter and open the dishwasher door with your foot as you open a jar of pickles at the same time, take a moment to consider just how many concurrent tasks you are doing and just how enormous the computational power these ostensibly simple moves would require.

Researchers in Singapore taught industrial robots to assemble an Ikea chair. Essentially, screw in the legs. A person could probably do this in a minute. Maybe two. The preprogrammed robots took nearly half an hour. And I suspect programming those robots took considerably longer than that.

Ironically, even Elon Musk, who has had major production problems with the Tesla cars rolling out of his high tech factory, has conceded (in a tweet) that “Humans are underrated.”

I wouldn’t necessarily go that far given the political shenanigans of Trump & Co. but in the grand scheme of things I tend to agree. …

Is AI going the way of gene therapy?

Susan draws a parallel between the AI and medicine discussion with the discussion about genetics and medicine (Note: Links have been removed),

On a somewhat similar note – given the extent to which genetics discourse has that same linear, mechanistic  tone [as AI and medicine] – it turns out all this fine talk of using genetics to determine health risk and whatnot is based on nothing more than clever marketing, since a lot of companies are making a lot of money off our belief in DNA. Truth is half the time we don’t even know what a gene is never mind what it actually does;  geneticists still can’t agree on how many genes there are in a human genome, as this article in Nature points out.

Along the same lines, I was most amused to read about something called the Super Seniors Study, research following a group of individuals in their 80’s, 90’s and 100’s who seem to be doing really well. Launched in 2002 and headed by Angela Brooks Wilson, a geneticist at the BC [British Columbia] Cancer Agency and SFU [Simon Fraser University] Chair of biomedical physiology and kinesiology, this longitudinal work is examining possible factors involved in healthy ageing.

Turns out genes had nothing to do with it, the title of the Globe and Mail article notwithstanding. (“Could the DNA of these super seniors hold the secret to healthy aging?” The answer, a resounding “no”, well hidden at the very [end], the part most people wouldn’t even get to.) All of these individuals who were racing about exercising and working part time and living the kind of life that makes one tired just reading about it all had the same “multiple (genetic) factors linked to a high probability of disease”. You know, the gene markers they tell us are “linked” to cancer, heart disease, etc., etc. But these super seniors had all those markers but none of the diseases, demonstrating (pretty strongly) that the so-called genetic links to disease are a load of bunkum. Which (she said modestly) I have been saying for more years than I care to remember. You’re welcome.

The fundamental error in this type of linear thinking is in allowing our metaphors (genes are the “blueprint” of life) and propensity towards social ideas of determinism to overtake common sense. Biological and physiological systems are not static; they respond to and change to life in its entirety, whether it’s diet and nutrition to toxic or traumatic insults. Immunity alters, endocrinology changes – even how we think and feel affects the efficiency and effectiveness of physiology. Which explains why as we age we become increasingly dissimilar.

If you have the time, I encourage you to read Susan’s comments in their entirety.

Scientific certainties

Following on with genetics, gene therapy dreams, and the complexity of biology, the June 19, 2018 Nature article by Cassandra Willyard (mentioned in Susan’s posting) highlights an aspect of scientific research not often mentioned in public,

One of the earliest attempts to estimate the number of genes in the human genome involved tipsy geneticists, a bar in Cold Spring Harbor, New York, and pure guesswork.

That was in 2000, when a draft human genome sequence was still in the works; geneticists were running a sweepstake on how many genes humans have, and wagers ranged from tens of thousands to hundreds of thousands. Almost two decades later, scientists armed with real data still can’t agree on the number — a knowledge gap that they say hampers efforts to spot disease-related mutations.

In 2000, with the genomics community abuzz over the question of how many human genes would be found, Ewan Birney launched the GeneSweep contest. Birney, now co-director of the European Bioinformatics Institute (EBI) in Hinxton, UK, took the first bets at a bar during an annual genetics meeting, and the contest eventually attracted more than 1,000 entries and a US$3,000 jackpot. Bets on the number of genes ranged from more than 312,000 to just under 26,000, with an average of around 40,000. These days, the span of estimates has shrunk — with most now between 19,000 and 22,000 — but there is still disagreement (See ‘Gene Tally’).

… the inconsistencies in the number of genes from database to database are problematic for researchers, Pruitt says. “People want one answer,” she [Kim Pruitt, a genome researcher at the US National Center for Biotechnology Information (NCBI) in Bethesda, Maryland] adds, “but biology is complex.”

I wanted to note that scientists do make guesses and not just with genetics. For example, Gina Mallet’s 2005 book ‘Last Chance to Eat: The Fate of Taste in a Fast Food World’ recounts the story of how good and bad levels of cholesterol were established—the experts made some guesses based on their experience. That said, Willyard’s article details the continuing effort to nail down the number of genes almost 20 years after the human genome project was completed and delves into the problems the scientists have uncovered.

Final comments

In addition to opaque processes, with developers/entrepreneurs wanting to maintain their secrets for competitive advantage, and in addition to our own poor understanding of the human body (how many genes are there anyway?), there are some major gaps (reflected in AI) in our understanding of various diseases. Angela Lashbrook’s August 16, 2018 article for The Atlantic highlights some issues with skin cancer and the shade of your skin (Note: Links have been removed),

… While fair-skinned people are at the highest risk for contracting skin cancer, the mortality rate for African Americans is considerably higher: Their five-year survival rate is 73 percent, compared with 90 percent for white Americans, according to the American Academy of Dermatology.

As the rates of melanoma for all Americans continue a 30-year climb, dermatologists have begun exploring new technologies to try to reverse this deadly trend—including artificial intelligence. There’s been a growing hope in the field that using machine-learning algorithms to diagnose skin cancers and other skin issues could make for more efficient doctor visits and increased, reliable diagnoses. The earliest results are promising—but also potentially dangerous for darker-skinned patients.

… Avery Smith, … a software engineer in Baltimore, Maryland, co-authored a paper in JAMA [Journal of the American Medical Association] Dermatology that warns of the potential racial disparities that could come from relying on machine learning for skin-cancer screenings. Smith’s co-author, Adewole Adamson of the University of Texas at Austin, has conducted multiple studies on demographic imbalances in dermatology. “African Americans have the highest mortality rate [for skin cancer], and doctors aren’t trained on that particular skin type,” Smith told me over the phone. “When I came across the machine-learning software, one of the first things I thought was how it will perform on black people.”

Recently, a study that tested machine-learning software in dermatology, conducted by a group of researchers primarily out of Germany, found that “deep-learning convolutional neural networks,” or CNN, detected potentially cancerous skin lesions better than the 58 dermatologists included in the study group. The data used for the study come from the International Skin Imaging Collaboration, or ISIC, an open-source repository of skin images to be used by machine-learning algorithms. Given the rise in melanoma cases in the United States, a machine-learning algorithm that assists dermatologists in diagnosing skin cancer earlier could conceivably save thousands of lives each year.

… Chief among the prohibitive issues, according to Smith and Adamson, is that the data the CNN relies on come from primarily fair-skinned populations in the United States, Australia, and Europe. If the algorithm is basing most of its knowledge on how skin lesions appear on fair skin, then theoretically, lesions on patients of color are less likely to be diagnosed. “If you don’t teach the algorithm with a diverse set of images, then that algorithm won’t work out in the public that is diverse,” says Adamson. “So there’s risk, then, for people with skin of color to fall through the cracks.”

As Adamson and Smith’s paper points out, racial disparities in artificial intelligence and machine learning are not a new issue. Algorithms have mistaken images of black people for gorillas, misunderstood Asians to be blinking when they weren’t, and “judged” only white people to be attractive. An even more dangerous issue, according to the paper, is that decades of clinical research have focused primarily on people with light skin, leaving out marginalized communities whose symptoms may present differently.

The reasons for this exclusion are complex. According to Andrew Alexis, a dermatologist at Mount Sinai, in New York City, and the director of the Skin of Color Center, compounding factors include a lack of medical professionals from marginalized communities, inadequate information about those communities, and socioeconomic barriers to participating in research. “In the absence of a diverse study population that reflects that of the U.S. population, potential safety or efficacy considerations could be missed,” he says.

Adamson agrees, elaborating that with inadequate data, machine learning could misdiagnose people of color with nonexistent skin cancers—or miss them entirely. But he understands why the field of dermatology would surge ahead without demographically complete data. “Part of the problem is that people are in such a rush. This happens with any new tech, whether it’s a new drug or test. Folks see how it can be useful and they go full steam ahead without thinking of potential clinical consequences. …

Improving machine-learning algorithms is far from the only method to ensure that people with darker skin tones are protected against the sun and receive diagnoses earlier, when many cancers are more survivable. According to the Skin Cancer Foundation, 63 percent of African Americans don’t wear sunscreen; both they and many dermatologists are more likely to delay diagnosis and treatment because of the belief that dark skin is adequate protection from the sun’s harmful rays. And due to racial disparities in access to health care in America, African Americans are less likely to get treatment in time.
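Back on the algorithmic side of Lashbrook’s piece, one practical response to the problem Adamson and Smith describe is to stop reporting a single overall accuracy and break a model’s performance out by skin type. The records and labels in the sketch below are made up; it only shows the shape of the check.

```python
# Per-subgroup accuracy check for a (hypothetical) lesion classifier.
from collections import defaultdict

records = [
    # (Fitzpatrick skin type, true label, model prediction) -- invented examples
    ("I-II", "melanoma", "melanoma"),
    ("I-II", "benign",   "benign"),
    ("I-II", "melanoma", "melanoma"),
    ("V-VI", "melanoma", "benign"),    # a missed lesion on darker skin
    ("V-VI", "benign",   "benign"),
]

correct, total = defaultdict(int), defaultdict(int)
for skin_type, truth, prediction in records:
    total[skin_type] += 1
    correct[skin_type] += (truth == prediction)

for skin_type in sorted(total):
    accuracy = correct[skin_type] / total[skin_type]
    print(f"skin type {skin_type}: accuracy {accuracy:.0%} on {total[skin_type]} cases")
```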

Happy endings

I’ll add one thing to Price’s article, Susan’s posting, and Lashbrook’s article about the issues with AI, certainty, gene therapy, and medicine: the desire for a happy ending prefaced with an easy solution. If the easy solution isn’t possible, accommodations will be made, but that happy ending is a must. All disease will disappear and there will be peace on earth. (Nod to Susan Baxter and her many discussions with me about disease processes and happy endings.)

The solutions, for the most part, are seen as technological despite the mountain of evidence suggesting that technology reflects our own imperfect understanding of health and disease and therefore provides what is at best an imperfect solution.

Also, we tend to underestimate just how complex humans are not only in terms of disease and health but also with regard to our skills, understanding, and, perhaps not often enough, our ability to respond appropriately in the moment.

There is much to celebrate in what has been accomplished: no more black death, no more smallpox, hip replacements, pacemakers, organ transplants, and much more. Yes, we should try to improve our medicine. But, maybe alongside the celebration we can welcome AI and other technologies with a lot less hype and a lot more skepticism.

Sunscreens: 2018 update

I don’t usually concern myself with SPF numbers on sunscreens as my primary focus has been on the inclusion of nanoscale metal particles (these are still considered safe). However, a recent conversation with a dental hygienist, and coincidentally tripping across a June 19, 2018 posting on the blog shortly after that conversation, has me reassessing my take on SPF numbers (Note: Links have been removed),

So, what’s the deal with SPF? A recent interview of Dr Steven Q Wang, M.D., chair of The Skin Cancer Foundation Photobiology Committee, finally will give us some clarity. Apparently, the SPF number, be it 15, 30, or 50, refers to the amount of UVB protection that that sunscreen provides. Rather than comparing the SPFs to each other, like we all do at the store, SPF is a reflection of the length of time it would take for the Sun’s UVB radiation to redden your skin (used exactly as directed), versus if you didn’t apply any sunscreen at all. In ideal situations (in lab settings), if you wore SPF 30, it would take 30 times longer for you to get a sunburn than if you didn’t wear any sunscreen.

What’s more, SPF 30 is not nearly half the strength of SPF 50. Rather, SPF 30 allows 3% of UVB rays to hit your skin, and SPF 50 allows about 2% of UVB rays to hit your skin. Now before you say that that is just one measly percent, it actually is much more. According to Dr Steven Q. Wang, SPF 30 allows around 1.5 times more UV radiation onto your skin than SPF 50. That’s an actual 150% difference [according to Wang’s article “… SPF 30 is allowing 50 percent more UV radiation onto your skin.”] in protection.

The author of the ‘eponymous’ blog offers a good overview of the topic in a friendly, informative fashion, albeit I found the ‘percentage’ to be a bit confusing. (S)he also provides a link to a previous posting about the ingredients in sunscreens (I do have one point of disagreement regarding oxybenzone) as well as links to Dr. Steven Q. Wang’s May 24, 2018 Ask the Expert article about sunscreens and SPF numbers on skincancer.org. You can find the percentage under the ‘What Does the SPF Number Mean?’ subsection, in the second paragraph.
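For anyone else who found the percentages confusing, here is the arithmetic as I understand it, using the standard rule of thumb that an SPF-N product, applied exactly as directed, transmits roughly 1/N of the UVB that reaches it.

```python
# SPF arithmetic: fraction of UVB transmitted is roughly 1/SPF.
for spf in (15, 30, 50):
    print(f"SPF {spf}: about {1 / spf:.1%} of UVB gets through")

# SPF 30 lets through ~3.3% and SPF 50 lets through ~2.0%, so SPF 30 admits
# (1/30) / (1/50) ≈ 1.7 times the UVB of SPF 50 -- i.e. on the order of
# 50-70 percent more radiation, which matches Dr. Wang's "50 percent more",
# not a 150 percent difference.
print(f"SPF 30 admits {(1 / 30) / (1 / 50):.2f}x the UVB of SPF 50")
```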

Ingredients: metallic nanoparticles and oxybenzone

The use of metallic nanoparticles (usually zinc oxide and/or titanium dioxide) in sunscreens was loathed by civil society groups, in particular Friends of the Earth (FOE), who campaigned relentlessly against their use. The nadir for FOE was in February 2012 when the Australian government published a survey showing that 13% of respondents were not using any sunscreen due to their fear of nanoparticles. For those who don’t know, Australia has the highest rate of skin cancer in the world. (You can read about the debacle in my Feb. 9, 2012 posting.)

At the time, the only civil society group which supported the use of metallic nanoparticles in sunscreens was the Environmental Working Group (EWG).  After an examination of the research they, to their own surprise, came out in favour (grudgingly) of metallic nanoparticles. (The EWG were more concerned about the use of oxybenzone in sunscreens.)

Over time, the EWG’s perspective has been adopted by other groups to the point where sunscreens with metallic nanoparticles are commonplace in ‘natural’ or ‘organic’ sunscreens.

As for oxybenzone, a May 23, 2018 posting about sunscreen ingredients notes this (Note: Links have been removed),

Oxybenzone – Chemical sunscreen, protects from UV damage. Oxybenzone belongs to the chemical family of benzophenones, which are persistent (difficult to get rid of), bioaccumulative (build up in your body over time), and toxic, or PBT [or: persistent, bioaccumulative and toxic substances (PBTs)]. It is a possible carcinogen (cancer-causing agent) and endocrine disruptor, although this is debatable. It could also cause developmental and reproductive toxicity, organ system toxicity, and irritation, and it is potentially toxic to the environment.

It seems that the tide is turning against the use of oxybenzones (from a July 3, 2018 article by Adam Bluestein for Fast Company; Note: Links have been removed),

On July 3 [2018], Hawaii’s Governor, David Ige, will sign into law the first statewide ban on the sale of sunscreens containing chemicals that scientists say are damaging the Earth’s coral reefs. Passed by state legislators on May 1 [2018], the bill targets two chemicals, oxybenzone and octinoxate, which are found in thousands of sunscreens and other skincare products. Studies published over the past 10 years have found that these UV-filtering chemicals–called benzophenones–are highly toxic to juvenile corals and other marine life and contribute to the fatal bleaching of coral reefs (along with global warming and runoff pollutants from land). (A 2008 study by European researchers estimated that 4,000 to 6,000 tons of sunblock accumulates in coral reefs every year.) Also, though both substances are FDA-approved for use in sunscreens, the nonprofit Environmental Working Group notes numerous studies linking oxybenzone to hormone disruption and cell damage that may lead to skin cancer. In its 2018 annual sunscreen guide, the EWG found oxybenzone in two-thirds of the 650 products it reviewed.

The Hawaii ban won’t take effect until January 2021, but it’s already causing a wave of disruption that’s affecting sunscreen manufacturers, retailers, and the medical community.

For starters, several other municipalities have already or could soon join Hawaii’s effort. In May [2018], the Caribbean island of Bonaire announced a ban on chemical sunscreens, and nonprofits such as the Sierra Club and Surfrider Foundation, along with dive industry and certain resort groups, are urging legislation to stop sunscreen pollution in California, Colorado, Florida, and the U.S. Virgin Islands. Marine nature reserves in Mexico already prohibit oxybenzone-containing sunscreens, and the U.S. National Park Service website for South Florida, Hawaii, U.S. Virgin Islands, and American Samoa recommends the use of “reef safe” sunscreens, which use natural mineral ingredients–zinc oxide or titanium oxide–to protect skin.

Makers of “eco,” “organic,” and “natural” sunscreens that already meet the new standards are seizing on the news from Hawaii to boost their visibility among the islands’ tourists–and to expand their footprint on the shelves of mainland retailers. This past spring, for example, Miami-based Raw Elements partnered with Hawaiian Airlines, Honolulu’s Waikiki Aquarium, the Aqua-Aston hotel group (Hawaii’s largest), and the Sheraton Maui Resort & Spa to get samples of its reef-safe zinc-oxide-based sunscreens to their guests. “These partnerships have had a tremendous impact raising awareness about this issue,” says founder and CEO Brian Guadagno, who notes that inquiries and sales have increased this year.

As Bluestein notes, there are some concerns about this and other potential bans,

“Eliminating the use of sunscreen ingredients considered to be safe and effective by the FDA with a long history of use not only restricts consumer choice, but is also at odds with skin cancer prevention efforts […],” says Bayer, owner of the Coppertone brand, in a statement to Fast Company. Bayer disputes the validity of studies used to support the ban, which were published by scientists from U.S. National Oceanic & Atmospheric Administration, the nonprofit Haereticus Environmental Laboratory, Tel Aviv University, the University of Hawaii, and elsewhere. “Oxybenzone in sunscreen has not been scientifically proven to have an effect on the environment. We take this issue seriously and, along with the industry, have supported additional research to confirm that there is no effect.”

Johnson & Johnson, which markets Neutrogena sunscreens, is taking a similar stance, worrying that “the recent efforts in Hawaii to ban sunscreens that contain oxybenzone may actually adversely affect public health,” according to a company spokesperson. “Science shows that sunscreens are a key factor in preventing skin cancer, and our scientific assessment of the lab studies done to date in Hawaii show the methods were questionable and the data insufficient to draw factual conclusions about any impact on coral reefs.”

Terrified (and rightly so) about anything scaring people away from using sunblock, the American Academy of Dermatology also opposes Hawaii’s ban. Suzanne M. Olbricht, president of the AADA, has issued a statement that the organization “is concerned that the public’s risk of developing skin cancer could increase due to potential new restrictions in Hawaii that impact access to sunscreens with ingredients necessary for broad-spectrum protection, as well as the potential stigma around sunscreen use that could develop as a result of these restrictions.”

The fact is that there are currently a large number of widely available reef-safe products on the market that provide “full spectrum” protection up to SPF50–meaning they protect against both UVB rays that cause sunburns as well as UVA radiation, which causes deeper skin damage. SPFs higher than 50 are largely a marketing gimmick, say advocates of chemical-free products: According to the Environmental Working Group, properly applied SPF 50 sunscreen blocks 98% of UVB rays; SPF 100 blocks 99%. And a sunscreen lotion’s SPF rating has little to do with its ability to shield skin from UVA rays.

I notice that neither Bayer nor Johnson & Johnson nor the American Academy of Dermatology makes any mention of oxybenzone’s possible role as a hormone disruptor.

Given the importance that coral reefs have to the environment we all share, I’m inclined to support the oxybenzone ban based on that alone. Of course, it’s conceivable that metallic nanoparticles may also have a deleterious effect on coral reefs as their use increases. It’s to be hoped that’s not the case but if it is, then I’ll make my decisions accordingly and hope we have a viable alternative.

As for your sunscreen questions and needs, the Environmental Working Group (EWG) has extensive information, including a product guide on this page (scroll down to EWG’s Sunscreen Guide) and a discussion of ‘high’ SPF numbers that I found useful for my decision-making.

Getting chipped

A January 23, 2018 article by John Converse Townsend for Fast Company highlights the author’s experience of ‘getting chipped’ in Wisconsin (US),

I have an RFID, or radio frequency ID, microchip implanted in my hand. Now with a wave, I can unlock doors, fire off texts, login to my computer, and even make credit card payments.

There are others like me: The majority of employees at the Wisconsin tech company Three Square Market (or 32M) have RFID implants, too. Last summer, with the help of Andy “Gonzo” Whitehead, a local body piercer with 17 years of experience, the company hosted a “chipping party” for employees who’d volunteered to test the technology in the workplace.

“We first presented the concept of being chipped to the employees, thinking we might get a few people interested,” CEO [Chief Executive Officer] Todd Westby, who has implants in both hands, told me. “Literally out of the box, we had 40 people out of close to 90 that were here that said, within 10 minutes, ‘I would like to be chipped.’”

Westby’s left hand can get him into the office, make phone calls, and store his living will and driver’s license information, while the chip in his right hand is used for testing new applications. (The CEO’s entire family is chipped, too.) Other employees said they have bitcoin wallets and photos stored on their devices.

The legendary Gonzo Whitehead was waiting for me when I arrived at Three Square Market HQ, located in quiet River Falls, 40 minutes east of Minneapolis. The minutes leading up to the big moment were a bit nervy, after seeing the size of the needle (it’s huge), but the experience was easier than I could have imagined. The RFID chip is the size of a grain of basmati rice, but the pain wasn’t so bad–comparable to a bee sting, and maybe less so. I experienced a bit of bruising afterward (no bleeding), and today the last remaining mark of trauma is a tiny, fading scar between my thumb and index finger. Unless you were looking for it, the chip resting under my skin is invisible.

Truth is, the applications for RFID implants are pretty cool. But right now, they’re also limited. Without a near-field communication (NFC) writer/reader, which powers on a “passive” RFID chip to write and read information to the device’s memory, an implant isn’t of much use. But that’s mostly a hardware issue. As NFC technology becomes available, which is increasingly everywhere thanks to Samsung Pay and Apple Pay and new contactless “tap-and-go” credit cards, the possibilities become limitless. [emphasis mine]
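To make “write and read information to the device’s memory” a little more concrete, here is a small, hypothetical sketch (in Python) of the sort of payload an NFC writer typically stores on a passive tag or implant: an NDEF (NFC Data Exchange Format) “Text” record, as defined by the NFC Forum. The article doesn’t say what chip or software 32M actually uses, so this is a generic illustration rather than their setup; the function name and the example text are mine.

```python
def ndef_text_record(text, lang="en"):
    """Build a single short NDEF 'Text' record (NFC Forum well-known type 'T').

    An NFC writer pushes bytes like these onto a passive tag; a reader parses
    them back out. Short-record form only, so the payload must stay under 256 bytes.
    """
    lang_bytes = lang.encode("ascii")
    # Payload = status byte (language-code length, UTF-8 flag clear) + language code + text
    payload = bytes([len(lang_bytes)]) + lang_bytes + text.encode("utf-8")

    header = 0xD1        # MB, ME and SR flags set; TNF = 0x01 (NFC Forum well-known type)
    type_length = 1      # the record type is the single ASCII byte 'T'
    payload_length = len(payload)

    return bytes([header, type_length, payload_length]) + b"T" + payload

# Example: the kind of small note someone might store on an implanted chip.
record = ndef_text_record("hello from my hand")
print(record.hex())
```

The point of the sketch is simply that a passive implant is, in effect, a small store of bytes plus an antenna; the interesting behaviour (unlocking doors, payments) lives in the readers and back-end systems it talks to.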

Health and privacy?

Townsend does cover a few possible downsides to the ‘limitless possibilities’ offered by RFID’s combined with NFC technology,

From a health perspective, the RFID implants are biologically safe–not so different from birth control implants [emphasis mine]. [US Food and Drug Administration] FDA-sanctioned for use in humans since 2004, the chips neither trigger metal detectors nor disrupt [magnetic resonance imaging] MRIs, and their glass casings hold up to pressure testing, whether that’s being dropped from a rooftop or being run over by a pickup truck.

The privacy side of things is a bit more complicated, but the undeniable reality is that privacy isn’t as prized as we’d like to think [emphasis mine]. It’s already a regular concession to convenience.

“Your information’s for sale every day,” McMullen [Patrick McMullen, president, Three Square Market] says. “Thirty-four billion avenues exist for your information to travel down every single day, whether you’re checking Facebook, checking out at the supermarket, driving your car . . . your information’s everywhere.”

Townsend may not be fully up to date on the subject of birth control implants. I think ‘safeish’ might be a better description in light of this news from almost two years ago (from a March 1, 2016 news item on CBS [Columbia Broadcasting System] News online; Note: Links have been removed),

[US] Federal health regulators plan to warn consumers more strongly about Essure, a contraceptive implant that has drawn thousands of complaints from women reporting chronic pain, bleeding and other health problems.

The Food and Drug Administration announced Monday it would add a boxed warning — its most serious type — to alert doctors and patients to problems reported with the nickel-titanium implant.

But the FDA stopped short of removing the device from the market, a step favored by many women who have petitioned the agency in the last year. Instead, the agency is requiring manufacturer Bayer to conduct studies of the device to further assess its risks in different groups of women.

The FDA is requiring Bayer to conduct a study of 2,000 patients comparing problems like unplanned pregnancy and pelvic pain between patients getting Essure and those receiving traditional “tube tying” surgery. Agency officials said they have reviewed more than 600 reports of women becoming pregnant after receiving Essure. Women are supposed to get a test after three months to make sure Essure is working appropriately, but the agency noted some women do not follow up for the test.

FDA officials acknowledged the proposed study would take years to complete, but said Bayer would be expected to submit interim results by mid-2017.

According to a Sept. 25, 2017 article by Kerri O’Brien for WRIC.com, Bayer had suspended sales of their device in all countries except the US,

Bayer, the manufacturer of Essure, has announced it’s halting sales of Essure in all countries outside of the U.S. In a statement, Bayer told 8News it’s due to a lack of interest in the product outside of the U.S.

“Bayer made a commercial decision this Spring to discontinue the distribution of Essure® outside of the U.S. where there is not as much patient interest in permanent birth control,” the statement read.

The move also comes after the European Union suspended sales of the device. The suspension was prompted by the National Standards Authority of Ireland declining to renew Essure’s CE marking. “CE,” according to the European Commission website, signifies products sold in the EEA that have been assessed to meet “high safety, health, and environmental protection requirements.”

These excerpts are about the Essure birth control implant. Perhaps others are safer? That noted, it does seem that Townsend was a bit dismissive of safety concerns.

As for privacy, he does investigate further to discover this,

As technology evolves and becomes more sophisticated, the methods to break it also evolve and get more sophisticated, says D.C.-based privacy expert Michelle De Mooy. Even so, McMullen believes that our personal information is safer in our hand than in our wallets. He says the smartphone you touch 2,500 times a day does 100 times more reporting of data than does an RFID implant, plus the chip can save you from pickpockets and avoid credit card skimmers altogether.

Well, the first sentence suggests some caution. As for De Mooy, there’s this from her profile page on the Center for Democracy and Technology website (Note: A link has been removed),

Michelle De Mooy is Director of the Privacy & Data Project at the Center for Democracy & Technology. She advocates for data privacy rights and protections in legislation and regulation, works closely with industry and other stakeholders to investigate good data practices and controls, as well as identifying and researching emerging technology that impacts personal privacy. She leads CDT’s health privacy work, chairing the Health Privacy Working Group and focusing on the intersection between individual privacy, health information and technology. Michelle’s current research is focused on ethical and privacy-aware internal research and development in wearables, the application of data analytics to health information found on non-traditional platforms, like social media, and the growing market for genetic data. She has testified before Congress on health policy, spoken about native advertising at the Federal Trade Commission, and written about employee wellness programs for US News & World Report’s “Policy Dose” blog. Michelle is a frequent media contributor, appearing in the New York Times, the Guardian, the Wall Street Journal, Vice, and the Los Angeles Times, as well as on The Today Show, Voice of America, and Government Matters TV programs.

Ethics anyone?

Townsend does raise some ethical issues (Note: A link has been removed),

… Word from CEO Todd Westby is that parents in Wisconsin have been asking whether (and when) they can have their children implanted with GPS-enabled devices (which, incidentally, is the subject of the “Arkangel” episode in the new season of Black Mirror [US television programme]). But that, of course, raises ethical questions: What if a kid refused to be chipped? What if they never knew?

Final comments on implanted RFID chips and bodyhacking

It doesn’t seem that implantable chips have changed much since I first wrote about them in a May 27, 2010 posting titled: Researcher infects self with virus. In that instance, Dr Mark Gasson, a researcher at the University of Reading, introduced a virus into a computer chip implanted in his body.

Of course, since 2010, additional implantable items such as computer chips have been making their way into our bodies, and there doesn’t seem to be much public discussion (other than in popular culture) about the implications.

Presumably, there are policy makers tracking these developments. I have to wonder if the technology gurus will continue to tout these technologies as already here or having made such inroads that we (the public) are presented with a fait accompli with the policy makers following behind.

New iron oxide nanoparticle as an MRI (magnetic resonance imaging) contrast agent

This high-resolution transmission electron micrograph of particles made by the research team shows the particles’ highly uniform size and shape. These are iron oxide particles just 3 nanometers across, coated with a zwitterion layer. Their small size means they can easily be cleared through the kidneys after injection. Courtesy of the researchers

A Feb. 14, 2017 news item on ScienceDaily announces a new MRI (magnetic resonance imaging) contrast agent,

A new, specially coated iron oxide nanoparticle developed by a team at MIT [Massachusetts Institute of Technology] and elsewhere could provide an alternative to conventional gadolinium-based contrast agents used for magnetic resonance imaging (MRI) procedures. In rare cases, the currently used gadolinium agents have been found to produce adverse effects in patients with impaired kidney function.

A Feb. 14, 2017 MIT news release (also on EurekAlert), which originated the news item, provides more technical detail,

The advent of MRI technology, which is used to observe details of specific organs or blood vessels, has been an enormous boon to medical diagnostics over the last few decades. About a third of the 60 million MRI procedures done annually worldwide use contrast-enhancing agents, mostly containing the element gadolinium. While these contrast agents have mostly proven safe over many years of use, some rare but significant side effects have shown up in a very small subset of patients. There may soon be a safer substitute thanks to this new research.

In place of gadolinium-based contrast agents, the researchers have found that they can produce similar MRI contrast with tiny nanoparticles of iron oxide that have been treated with a zwitterion coating. (Zwitterions are molecules that have areas of both positive and negative electrical charges, which cancel out to make them neutral overall.) The findings are being published this week in the Proceedings of the National Academy of Sciences, in a paper by Moungi Bawendi, the Lester Wolfe Professor of Chemistry at MIT; He Wei, an MIT postdoc; Oliver Bruns, an MIT research scientist; Michael Kaul at the University Medical Center Hamburg-Eppendorf in Germany; and 15 others.

Contrast agents, injected into the patient during an MRI procedure and designed to be quickly cleared from the body by the kidneys afterwards, are needed to make fine details of organ structures, blood vessels, and other specific tissues clearly visible in the images. Some agents produce dark areas in the resulting image, while others produce light areas. The primary agents for producing light areas contain gadolinium.

Iron oxide particles have been largely used as negative (dark) contrast agents, but radiologists vastly prefer positive (light) contrast agents such as gadolinium-based agents, as negative contrast can sometimes be difficult to distinguish from certain imaging artifacts and internal bleeding. But while the gadolinium-based agents have become the standard, evidence shows that in some very rare cases they can lead to an untreatable condition called nephrogenic systemic fibrosis, which can be fatal. In addition, evidence now shows that the gadolinium can build up in the brain, and although no effects of this buildup have yet been demonstrated, the FDA is investigating it for potential harm.

“Over the last decade, more and more side effects have come to light” from the gadolinium agents, Bruns says, so that led the research team to search for alternatives. “None of these issues exist for iron oxide,” at least none that have yet been detected, he says.

The key new finding by this team was to combine two existing techniques: making very tiny particles of iron oxide, and attaching certain molecules (called surface ligands) to the outsides of these particles to optimize their characteristics. The iron oxide inorganic core is small enough to produce a pronounced positive contrast in MRI, and the zwitterionic surface ligand, which was recently developed by Wei and coworkers in the Bawendi research group, makes the iron oxide particles water-soluble, compact, and biocompatible.

The combination of a very tiny iron oxide core and an ultrathin ligand shell leads to a total hydrodynamic diameter of 4.7 nanometers, below the 5.5-nanometer renal clearance threshold. This means that the coated iron oxide should quickly clear through the kidneys and not accumulate. This renal clearance property is an important feature where the particles perform comparably to gadolinium-based contrast agents.

Now that initial tests have demonstrated the particles’ effectiveness as contrast agents, Wei and Bruns say the next step will be to do further toxicology testing to show the particles’ safety, and to continue to improve the characteristics of the material. “It’s not perfect. We have more work to do,” Bruns says. But because iron oxide has been used for so long and in so many ways, even as an iron supplement, any negative effects could likely be treated by well-established protocols, the researchers say. If all goes well, the team is considering setting up a startup company to bring the material to production.

For some patients who are currently excluded from getting MRIs because of potential side effects of gadolinium, the new agents “could allow those patients to be eligible again” for the procedure, Bruns says. And, if it does turn out that the accumulation of gadolinium in the brain has negative effects, an overall phase-out of gadolinium for such uses could be needed. “If that turned out to be the case, this could potentially be a complete replacement,” he says.

Ralph Weissleder, a physician at Massachusetts General Hospital who was not involved in this work, says, “The work is of high interest, given the limitations of gadolinium-based contrast agents, which typically have short vascular half-lives and may be contraindicated in renally compromised patients.”

The research team included researchers in MIT’s chemistry, biological engineering, nuclear science and engineering, brain and cognitive sciences, and materials science and engineering departments and its program in Health Sciences and Technology; and at the University Medical Center Hamburg-Eppendorf; Brown University; and the Massachusetts General Hospital. It was supported by the MIT-Harvard NIH Center for Cancer Nanotechnology, the Army Research Office through MIT’s Institute for Soldier Nanotechnologies, the NIH-funded Laser Biomedical Research Center, the MIT Deshpande Center, and the European Union Seventh Framework Program.

Here’s a link to and a citation for the paper,

Exceedingly small iron oxide nanoparticles as positive MRI contrast agents by He Wei, Oliver T. Bruns, Michael G. Kaul, Eric C. Hansen, Mariya Barch, Agata Wiśniowska, Ou Chen, Yue Chen, Nan Li, Satoshi Okada, Jose M. Cordero, Markus Heine, Christian T. Farrar, Daniel M. Montana, Gerhard Adam, Harald Ittrich, Alan Jasanoff, Peter Nielsen, and Moungi G. Bawendi. PNAS February 13, 2017 doi: 10.1073/pnas.1620145114 Published online before print February 13, 2017

This paper is behind a paywall.

AquAdvantage salmon (genetically modified) approved for consumption in Canada

This is an update of the AquAdvantage salmon story covered in my Dec. 4, 2015 post (scroll down about 40% of the way). At the time, the US Food and Drug Administration (FDA) had just given approval for consumption of the fish. There was speculation there would be a long, hard fight over approval in Canada. This does not seem to have been the case, according to a May 10, 2016 news item on phys.org announcing Health Canada’s decision,

Canada’s health ministry on Thursday [May 19, 2016] approved a type of genetically modified salmon as safe to eat, making it the first transgenic animal destined for Canadian dinner tables.

This comes six months after US authorities gave the green light to sell the fish in American grocery stores.

The decisions by Health Canada and the US Food and Drug Administration follow two decades of controversy over the fish, which is an Atlantic salmon injected with genes from Pacific Chinook salmon and a fish known as the ocean pout to make it grow faster.

The resulting fish, called AquAdvantage Salmon, is made by AquaBounty Technologies in Massachusetts, and can reach adult size in 16 to 18 months instead of 30 months for normal Atlantic salmon.

A May 19, 2016 BIOTECanada news release on businesswire provides more detail about one of the salmon’s Canadian connections,

Canadian technology emanating from Memorial University developed the AquAdvantage salmon by introducing a growth hormone gene from Chinook salmon into the genome of Atlantic salmon. This results in a salmon which grows faster and reaches market size quicker, and AquAdvantage salmon is identical to other farmed salmon. The AquAdvantage salmon also received US FDA approval in November 2015. With the growing world population, AquaBounty is one of many biotechnology companies offering safe and sustainable means to enhance the security and supply of food in the world. AquaBounty has improved the productivity of aquaculture through its use of biotechnology and modern breeding techniques that have led to the development of AquAdvantage salmon.

“Importantly, today’s approval is a result of a four year science-based regulatory approval process which involved four federal government departments including Agriculture and AgriFood, Canada Food Inspection Agency, Environment and Climate Change, Fisheries and Oceans and Health which demonstrates the rigour and scope of science based regulatory approvals in Canada. Coupled with the report from the [US] National Academy of Sciences today’s [May 19, 2016] approval clearly demonstrates that genetic engineering of food is not only necessary but also extremely safe,” concluded Casey [Andrew Casey, President and CEO BIOTECanada].

There’s another connection, the salmon hatcheries are based in Prince Edward Island.

While BIOTECanada’s Andrew Casey is crowing about this approval, it should be noted that there was a losing court battle, with British Columbia’s Living Oceans Society and Nova Scotia’s Ecology Action Centre both challenging the federal government’s approval. They may have lost *the* battle but, as the cliché goes, ‘the war is not over yet’. There’s an issue about the lack of labeling, and there’s always the possibility that retailers and/or consumers may decide to boycott the fish.

As for BIOTECanada, there’s this description from the news release,

BIOTECanada is the national industry association with more than 230 members reflecting the diverse nature of Canada’s health, industrial and agricultural biotechnology sectors. In addition to providing significant health benefits for Canadians, the biotechnology industry has quickly become an essential part of the transformation of many traditional cornerstones of the Canadian economy including manufacturing, automotive, energy, aerospace and forestry industries. Biotechnology in all of its applications from health, agriculture and industrial is offering solutions for the collective population.

You can find the BIOTECanada website here.

Personally, I’m a bit ambivalent about it all. I understand the necessity for changing our food production processes but I do think more attention should be paid to consumers’ concerns and that organizations such as BIOTECanada could do a better job of communicating.

*’the’ added on Aug. 4, 2016.

Opioid addiction and nanotechnology in Pennsylvania, US

Combating a drug addiction ‘crisis’ with a nanotechnology-enabled solution is the main topic, although the technology is being implemented for another problem first, according to this May 4, 2016 article by John Luciew for pennlive.com (Note: Links have been removed),

Treating pain is a constant in medicine. It’s part of the human condition, known as the “fifth vital sign” among physicians. Effectively treating pain will continue to play a central role in medicine, despite the societal shock waves brought on by the rapid rise in opioid addiction across America.

The fallout from our nation’s opioid addiction crisis is roiling the medical and pharmaceutical industries, where regulatory action is rapidly reining in opioid painkiller prescriptions with new guidelines and stricter controls.

By harnessing nanotechnology and small-particles physics, Iroko Pharmaceuticals is developing a new class of low-dose prescription painkillers. Company executives say their line of nonsteroidal anti-inflammatory drugs could be the opioid alternative that the medical community has been looking for amid America’s addiction crisis.

The pharmaceutical company is Pennsylvania-based (US) and it isn’t tackling the ‘opioid addiction crisis’ yet. First, there’s this,

Its new line of prescription painkillers are predicated upon a highly patented process of pulverizing drug molecules so they are up to 100 times smaller, which markedly increases their pain-killing effectiveness at dramatically lower doses.

Right now, Iroko is focusing this nanotechnology on creating a full line of low-dose prescription painkillers based upon the class of drugs known as nonsteroidal anti-inflammatories, or NSAIDs. There are six NSAID molecules, the most common being ibuprofen. Iroko is planning nanotechnology versions for all six NSAID molecules, three of which have already received approval from the Food and Drug Administration.

Luciew has done some homework on the technology,

“We solved a chemistry problem by using physics,” explained Iroko Chairman Osagie Imasogie, who founded the company [Iroko Pharmaceuticals] in 2007.

Yet, the company that actually solved the physics problem was iCeutica, founded in Australia and now based in King of Prussia, Pa.

iCeutica owns the patented SoluMatrix fine particle process that pulverizes drug molecules into nano-sized particles, enabling low doses of a drug to be better absorbed by the body, thus providing faster and far more effective pain relief.

Of course, the practice of crushing and grinding drug powders is as old as the pharmacist’s mortar and pestle. But there’s never been a way of pulverizing a drug molecule into nano particles that was scalable for industrial production — not until iCeutica created its SoluMatrix process, that is.

iCeutica provides a description of the technology on its SoluMatrix webpage,

iCeutica’s proprietary SoluMatrix™ Fine Particle Technology fuels new product development and solves problems of bioavailability, variability, side effects and delivery of marketed or development-stage pharmaceuticals.

The SoluMatrix technology is a scaleable and cost-effective manufacturing process that can produce submicron-sized drug particles that are 10 to 200 times smaller than conventional drug particles. The particles generated using this technology, which both grinds the drug particles into a superfine powder and protects those submicron particles from subsequent agglomeration (or clumping together into big particles), comprise a single unit operation and can be manufactured into tablets, capsules and other dosage forms without further processing.

The SoluMatrix technology improves the performance of pharmaceuticals by dramatically changing how the drug dissolves and is absorbed. By making submicron-sized particles of a drug, it is possible to:

Unfortunately there aren’t more details. I’m somewhat puzzled by the submicron measurement; why not state the size using the term nanometre?
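Terminology aside, the reason pulverizing a drug helps is simple geometry: for a fixed mass of drug, the total surface area in contact with digestive fluid scales inversely with particle diameter, and dissolution rate rises with available surface area (the Noyes-Whitney relationship). Here is a minimal sketch in Python; the particle sizes and the density figure are illustrative assumptions, not iCeutica’s actual numbers.

```python
def specific_surface_area(diameter_m, density_kg_m3=1300.0):
    """Surface area per unit mass (m^2/kg) for uniform spherical particles.

    For a sphere, surface area / volume = 6 / diameter, so
    surface area / mass = 6 / (density * diameter).
    The density is a ballpark value for an organic drug solid, for illustration only.
    """
    return 6.0 / (density_kg_m3 * diameter_m)

conventional = 20e-6   # assume a 20 micrometre "conventional" drug particle
submicron = 200e-9     # a 200 nanometre particle, i.e. 100 times smaller

ratio = specific_surface_area(submicron) / specific_surface_area(conventional)
print(f"{ratio:.0f} times more surface area per gram")   # prints: 100 times more surface area per gram
```

On that rough logic, particles 10 to 200 times smaller expose 10 to 200 times more surface per gram of drug, which is presumably how a 10 to 40 milligram dose can stand in for a conventional 200 milligram one.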

Getting back to Iroko: Imasogie, impressed with the SoluMatrix technology, has made a major investment in iCeutica and is chair of iCeutica’s board. His home base company, Iroko, holds exclusive global rights to SoluMatrix.

Luciew’s article describes the current situation in the NSAID market,

Iroko officials acknowledge that NSAID painkillers carry their own health risks, including the potential for stomach ulcers, kidney problems and cardio-vascular ailments, up to and including stroke and heart attack. The fears associated with NSAIDs peaked a decade ago with the Vioxx case, a popular prescription NSAID that was eventually taken off the market due to associated cardiac and other risks.

The latest FDA guidelines for NSAID use call for the lowest effective dose, which precisely describes the nanotechnology-driven low-dose NSAID drugs Iroko is rolling out. What is more, due to the ongoing opioid crisis, both the FDA and the Centers for Disease Control are heavily emphasizing non-opioid alternatives for pain relief, further opening the door for Iroko’s pain products.

Having noted the issues with NSAIDs, Luciew outlines Iroko’s current offerings and explains what makes this technology so attractive,

According to Imasogie, Iroko’s line of low-dose, nanotechnology NSAIDs fits both sets of regulatory safety criteria. The new drugs are the lowest effective dose for NSAIDs, and are a viable pain-killing alternative to opioids, especially when it comes to treating osteoarthritis and other moderate pain.

“No one is going to give an NSAID if you have cancer,” Imasogie says. “But for chronic low back pain, yes.”

Three of Iroko’s six low-dose NSAID offerings have already received FDA approval and are on the market:

  • Zorvolex (diclofenac), approved in October 2013 for the management of mild to moderate acute pain in adults and in August 2014 for the management of osteoarthritis pain.
  • Tivorbex, approved in February 2014 for treatment of mild to moderate acute pain in adults.
  • Vivlodex, approved in October 2015 as another option for treatment of osteoarthritis pain.

Three more of Iroko’s low-dose NSAIDs are awaiting approval.

These nano drugs are effective at doses of 35 to 40 milligrams to as low as 10 milligrams, the company says. That’s compared to other NSAID doses that start at 200 milligrams. As a result, Iroko’s low-dose NSAID drugs are being marketed as providing a prescription alternative to opioids at the precise moment everyone from the White House to the white-coat-clad family physician is searching for one.

If you have the time and interest, I encourage you to read Luciew’s article in its entirety. He covers more market issues and includes an embedded video in his piece.

One last note about Iroko Pharmaceuticals, the company is named after a tree found on the African continent and executives of the company have hinted they are experimenting with SoluMatrix to make low-dose opioids available in the future.

While I have my doubts about the opioid addiction ‘crisis’, I do believe that lower, more effective doses of painkillers, regardless of their drug class, can only benefit patients.

Cornell University researchers breach blood-brain barrier

There are other teams working on ways to breach the blood-brain barrier (my March 26, 2015 post highlights work from a team at the University of Montréal), but this team from Cornell is working with a drug that has already been approved by the US Food and Drug Administration (FDA), according to an April 8, 2016 news item on ScienceDaily,

Cornell researchers have discovered a way to penetrate the blood brain barrier (BBB) that may soon permit delivery of drugs directly into the brain to treat disorders such as Alzheimer’s disease and chemotherapy-resistant cancers.

The BBB is a layer of endothelial cells that selectively allow entry of molecules needed for brain function, such as amino acids, oxygen, glucose and water, while keeping others out.

Cornell researchers report that an FDA-approved drug called Lexiscan activates receptors — called adenosine receptors — that are expressed on these BBB cells.

An April 4, 2016 Cornell University news release by Krishna Ramanujan, which originated the news item, expands on the theme,

“We can open the BBB for a brief window of time, long enough to deliver therapies to the brain, but not too long so as to harm the brain. We hope in the future, this will be used to treat many types of neurological disorders,” said Margaret Bynoe, associate professor in the Department of Microbiology and Immunology in Cornell’s College of Veterinary Medicine. …

The researchers were able to deliver chemotherapy drugs into the brains of mice, as well as large molecules, like an antibody that binds to Alzheimer’s disease plaques, according to the paper.

To test whether this drug delivery system has application to the human BBB, the lab engineered a BBB model using human primary brain endothelial cells. They observed that Lexiscan opened the engineered BBB in a manner similar to its actions in mice.

Bynoe and Kim discovered that a protein called P-glycoprotein is highly expressed on brain endothelial cells and blocks the entry of most drugs delivered to the brain. Lexiscan acts on one of the adenosine receptors expressed on BBB endothelial cells, specifically activating them. They showed that Lexiscan down-regulates P-glycoprotein expression and function on the BBB endothelial cells. It acts like a switch that can be turned on and off in a time-dependent manner, which provides a measure of safety for the patient.

“We demonstrated that down-modulation of P-glycoprotein function coincides exquisitely with chemotherapeutic drug accumulation” in the brains of mice and across an engineered BBB using human endothelial cells, Bynoe said. “The amount of chemotherapeutic drugs that accumulated in the brain was significant.”

In addition to P-glycoprotein’s role in inhibiting foreign substances from penetrating the BBB, the protein is also expressed by many different types of cancers and makes these cancers resistant to chemotherapy.

“This finding has significant implications beyond modulation of the BBB,” Bynoe said. “It suggests that in the future, we may be able to modulate adenosine receptors to regulate P-glycoprotein in the treatment of cancer cells resistant to chemotherapy.”

Because Lexiscan is an FDA-approved drug, “the potential for a breakthrough in drug delivery systems for diseases such as Alzheimer’s disease, Parkinson’s disease, autism, brain tumors and chemotherapy-resistant cancers is not far off,” Bynoe said.

Another advantage is that these molecules (adenosine receptors and P-glycoprotein) are naturally expressed in mammals. “We don’t have to knock out a gene or insert one for a therapy to work,” Bynoe said.

The study was funded by the National Institutes of Health and the Kwanjung Educational Foundation.

Here’s a link to and a citation for the paper,

A2A adenosine receptor modulates drug efflux transporter P-glycoprotein at the blood-brain barrier by Do-Geun Kim and Margaret S. Bynoe. J Clin Invest. doi:10.1172/JCI76207 First published April 4, 2016

Copyright © 2016, The American Society for Clinical Investigation.

This paper appears to be open access.